Is it good to use dynamic memory allocation in embedded programming?

Before starting I have to mention a few references which were really useful resources for my “Embedded tips and tricks” related posts: Dan Saks’ articles and columns on Embedded.com, and Michael Barr’s articles on the Netrino web site.

I thought of writing this post to explain the possible benefits, but also the drawbacks, of dynamic memory allocation in embedded programming. I exclude any faulty use of dynamic allocation such as memory leaks or dangling pointers, so my assumption is that data is correctly allocated and de-allocated.

First of all, let me place dynamic allocation in the general classification of storage types; there are three (at least in C):

    Static storage – variables which exist during the whole program execution time; these are global variables and variables explicitly declared as static

    Automatic storage – variables which are allocated upon entrance into a block (something delimited by curly brackets {}) and de-allocated after exiting that block (this is the case of local variables)

    Dynamic storage – this one is entirely the programmer’s responsibility: the variables are created manually (via malloc, calloc or, in C++, the new operator) and destroyed manually as well (via free or delete); a short example of all three storage classes follows this list
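
To make the classification concrete, here is a minimal C sketch of the three storage classes (the variable and function names are made up for the example):

    #include <stdlib.h>

    int sample_count;            /* static storage: exists for the whole program run     */
    static int error_count;      /* also static storage, with internal linkage           */

    void process(void)
    {
        int sum = 0;             /* automatic storage: created on entry into this block, */
                                 /* released automatically when the block is exited      */

        int *buf = malloc(16 * sizeof *buf);   /* dynamic storage: created manually...   */
        if (buf != NULL) {
            /* ... use the buffer ... */
            free(buf);           /* ... and destroyed manually by the programmer         */
        }

        (void)sum;
        (void)error_count;
        (void)sample_count;
    }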

Dynamic allocation has the advantage of being completely under the programmer’s control. The storage and lifetime of variables are handled solely by the user, so the compiler is not burdened with that task (dynamic allocation actually happens at run-time, so it has nothing to do with the compiler), but on the other hand the programmer has to be aware that this “freedom” can provoke serious errors.

In embedded programming, dynamic memory allocation should not be banned outright – avoiding it at all costs is not a good practice either – but it has to be used carefully.

There are two fundamental parameters in which dynamic allocation differs from its counterparts (static and automatic): speed and space. In terms of space, dynamic storage can be more efficient than the other two, because it avoids reserving memory that may never be used.

Where does this waste come from, and how can it be avoided? The main challenge with static and automatic allocation on embedded systems is that memory is never enough. Every buffer has to be sized at compile time, so the compiler must reserve the maximum amount of data the program could ever need, even when the typical case is far smaller – and, in the end, it is also the compiler that takes care of the de-allocation.
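
As a small sketch of that trade-off (the frame size and names are invented for illustration): a static buffer must be sized for the largest frame it could ever receive, while a dynamic buffer only takes from the heap what the current frame actually needs.

    #include <stdlib.h>

    #define MAX_FRAME_LEN 1024u                      /* worst case a static buffer must assume */

    static unsigned char rx_static[MAX_FRAME_LEN];   /* 1024 bytes reserved for the whole run  */

    /* Dynamic alternative: reserve only what the current frame needs, e.g. 60 bytes. */
    unsigned char *rx_alloc(size_t frame_len)
    {
        return malloc(frame_len);
    }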

In terms of time, static and automatic allocation do not cost much: data is usually added and removed in a LIFO manner on the stack. Automatic (local) variables are freed when the corresponding function or block exits, so de-allocation is fast as well.

Dynamic storage avoids this up-front reservation. The allocator has to find a contiguous block in a dedicated region of memory, usually referred to as the heap or pool, and data is added to and removed from there on demand. Here, too, allocations have to be balanced by de-allocations: if a program starts to dynamically allocate large arrays, the heap may overflow before the corresponding de-allocations take place.
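
A minimal sketch of that balance, assuming nothing about the target beyond a standard C library: every successful allocation is matched by exactly one free, and a failed allocation (exhausted heap) is detected and handled instead of being ignored.

    #include <stdlib.h>

    int process_block(size_t n)
    {
        int *data = malloc(n * sizeof *data);
        if (data == NULL) {
            return -1;           /* heap/pool exhausted: report it instead of crashing */
        }

        /* ... fill and use data ... */

        free(data);              /* every allocation balanced by exactly one free      */
        return 0;
    }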

The biggest issue raised by dynamic memory allocation concerns its execution time.


Common implementations of functions such as malloc and calloc, or of the new operator, vary in execution time, meaning that their response is non-deterministic: there is no upper bound on the duration of a dynamic memory allocation. This can have a serious impact on the behavior of RTOSes, where deterministic time responses are a must-have. (Actually, this is pretty much the definition of an RTOS – an operating system where you know for sure that an event, like a context switch, a system call or an interrupt latency, executes within a fixed time slot; whether that takes 1 µs, one minute or one day, you can definitely rely on this time bound.)
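
One common way around this on an RTOS is a fixed-size block pool: allocation just pops a block off a free list and release pushes it back, so both run in constant time. The sketch below is only an illustration of the idea under my own naming, not the API of any particular RTOS, and a real version would also have to protect the free list against concurrent access (for example by disabling interrupts around it).

    #include <stddef.h>

    #define BLOCK_SIZE   32u          /* payload bytes per block (illustrative value) */
    #define BLOCK_COUNT  16u          /* number of blocks in the pool                 */

    typedef union block {
        union block  *next;                  /* link used while the block is free     */
        unsigned char payload[BLOCK_SIZE];   /* storage handed out to the caller      */
    } block_t;

    static block_t  pool[BLOCK_COUNT];
    static block_t *free_list;

    void pool_init(void)                     /* chain all blocks into the free list   */
    {
        for (size_t i = 0; i + 1 < BLOCK_COUNT; i++)
            pool[i].next = &pool[i + 1];
        pool[BLOCK_COUNT - 1].next = NULL;
        free_list = &pool[0];
    }

    void *pool_alloc(void)                   /* O(1): pop a block, NULL if exhausted  */
    {
        block_t *b = free_list;
        if (b != NULL)
            free_list = b->next;
        return b;
    }

    void pool_free(void *p)                  /* O(1): push the block back             */
    {
        block_t *b = p;
        b->next = free_list;
        free_list = b;
    }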

Currently I am testing a TMS470x microcontroller from Texas Instruments, based on a Cortex-M3 core, and so far there has been no real need for dynamic allocation. I was lucky never to be constrained by hardware resources – the tests were too simple to eat up several tens of kilobytes – but as an exercise I will try it out and share the results. It will be interesting to find out how the compiler translates a dynamic allocation into assembler; for static and automatic storage this is simple and straightforward, as they boil down to plain load and store instructions.
