Software-in-the-loop and hardware-in-the-loop simulations (or simply SILS and HILS)

Since I work in the simulation field, or at least on a team that handles simulation-related tasks, I thought it necessary to say a few words about these testing methodologies, which, in my opinion, are crucial for a test engineer.

So … if you are a test engineer and things like black-box testing, white-box testing, design under test, software-in-the-loop and hardware-in-the-loop simulation, verification vs. validation (what is the difference between them?), test automation, sanity testing, smoke testing and a few others are not familiar to you … this is definitely NOT A GOOD THING. I hope my workmates won't swear at me when reading this 🙂

Before starting, I think it is worthwhile to answer some basic questions: what is simulation about, and why is it necessary? As far as I know from my own work experience, but also from digging for information on Google, simulation speeds up the software development lifecycle and, implicitly, reduces a project's overall costs. In embedded systems you can test either the software or the hardware. One way would be to deploy the embedded software and see it at work in its real environment. For example, if you developed some anti-lock braking software, you could wait for the ECU to come off the production line, deploy the software, and finally watch it at work once the ECU is installed in the car.

This is certainly a very costly procedure: you have to wait for the environment to be ready (the ECU, the ECU network within the car, the car itself), and only then can you begin checking whether your software really does what it was designed for. On the other hand, you could be a hardware engineer who wants to test the ECU itself (whether it correctly reads the sensors and correctly drives the actuators, mainly electrical tests). It would be very difficult to wait for a car prototype to be ready just to test the ECU directly on the car. For the software tester as well as the hardware tester, it would be very helpful to have a platform that reproduces the physical environment where your Design Under Test will eventually run, as sketched below.
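To make the idea a bit more concrete, here is a minimal, purely illustrative software-in-the-loop sketch (the ABS-style slip check and its thresholds are made up for this post, not taken from any real ECU): the control logic is compiled for the host PC and fed with simulated wheel-speed samples instead of signals from real hardware.

```c
/* Minimal software-in-the-loop sketch (hypothetical example):
 * the control logic runs on the host PC and is driven by a simulated
 * wheel-speed "sensor" instead of the real ECU hardware. */
#include <stdio.h>

/* --- code under test: a deliberately simple ABS-style slip check --- */
static int abs_release_brake(double vehicle_speed, double wheel_speed)
{
    /* Release brake pressure when the wheel slips more than 20 %
       relative to the vehicle speed (illustrative threshold only). */
    if (vehicle_speed > 1.0 &&
        (vehicle_speed - wheel_speed) / vehicle_speed > 0.20) {
        return 1;   /* actuator command: release pressure */
    }
    return 0;       /* actuator command: hold pressure */
}

/* --- simulated environment (the "plant model") --- */
int main(void)
{
    /* Simulated samples for a vehicle braking from 30 m/s,
       with the wheel locking up halfway through the manoeuvre. */
    const double vehicle[] = { 30.0, 28.0, 26.0, 24.0, 22.0 };
    const double wheel[]   = { 30.0, 27.5, 19.0, 10.0, 21.5 };
    const size_t n = sizeof(vehicle) / sizeof(vehicle[0]);

    for (size_t i = 0; i < n; ++i) {
        int cmd = abs_release_brake(vehicle[i], wheel[i]);
        printf("step %zu: vehicle=%.1f wheel=%.1f -> %s\n",
               i, vehicle[i], wheel[i],
               cmd ? "RELEASE pressure" : "HOLD pressure");
    }
    return 0;
}
```

In a hardware-in-the-loop setup the same idea applies, except that the real ECU stays in the loop and the simulator generates its sensor signals and reads back its actuator outputs electrically.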



Did you know about MISRA C guidelines?

YES!

I am talking to you guys working in the automotive field.

MISRA stands for the Motor Industry Software Reliability Association, and its main purpose is to provide the automotive industry with guidelines for creating safe and reliable software.

If you google a little bit you will find something like this:

Currently, MISRA guidelines are produced for the C and C++ programming languages only. MISRA C is a software development standard for the C programming language developed by MISRA. Its aims are to facilitate code safety, portability and reliability in the context of embedded systems, specifically those systems programmed in ISO C. There is also a set of guidelines for MISRA C++. MISRA-C:1998 had 127 rules, of which 93 were required and 34 were advisory; the rules were numbered in sequence from 1 to 127. The MISRA-C:2004 document contains 141 rules, of which 121 are “required” and 20 are “advisory”; they are divided into 21 topical categories, from “Environment” to “Run-time failures”. MISRA C++ was launched in March 2008.

Anyway, you can find here a very professional but quite skeptical opinion about those rules.

I will just go through some of the MISRA C guidelines that caught my attention:

Rule 9: Comments should not be nested.

I remember that my Code Composer compiler issued a warning many times because of my nested comments.
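For illustration, here is a small made-up snippet showing why the rule exists: C comments do not nest, so the “outer” comment ends at the first closing star-slash, and code the author meant to disable is silently compiled (GCC reports this with -Wcomment; Code Composer issues a similar warning).

```c
#include <stdio.h>

/* Hypothetical illustration of why MISRA bans nested comments: C does not
   nest comments, so the "outer" comment below ends at the FIRST closing
   star-slash, and the assignment the author meant to disable stays live. */
int main(void)
{
    int x = 1;

    /* temporarily disabled block:
       /* adjust x for debugging */
       x = 2;                       /* still compiled! */

    printf("x = %d\n", x);          /* prints 2, not the expected 1 */
    return 0;
}
```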
