Interrupt handlers uncovered

What are interrupt handlers, or interrupt service routines, in fact?
It is like when someone knocks at your door and it is you who answers.
"Hey! Is your dad home?" he asks. Then you go to your dad to tell him that someone is asking for him; you are interrupting him from his daily activities (eating popcorn, watching football games, things like that).
That guy asking for your dad (the dad playing the part of the CPU) is like a peripheral requesting a service. You are the handler, because you tell your dad which guy dares to disturb him :)

In any case, I think for the majority of embedded systems engineers, whether they work in application design or in testing, this is a very familiar term. Many interviewers ask their candidates whether they know how to code interrupt service routines, what those are good for, what interrupt latency is, and what the difference is between an ISR (interrupt service routine) and a common subroutine.

Usually the interrupt vector table contains many entries, generally split into two categories: traps (or exceptions) and interrupt service routines. The first are associated with software interrupts, the second with hardware interrupts, and the latter are the subject of this post.

Basically, hardware interrupts are generated by an external piece of hardware (I mean peripherals) which requests some CPU time or some memory (the two commodities in embedded systems, and, I suppose, in computing in general). They are very important in embedded programming because they represent the way in which the system communicates with external devices. All microcontrollers nowadays are equipped with various communication interfaces such as SPI, CAN, I2C and RS232, and all of them would be completely useless if the interrupt requests they make were handled incorrectly.

The sensitive part of interrupt handlers is that they execute code in privileged mode; in other words, one must be aware that a bug hidden within an ISR can cause serious damage to the system.

Here you may find the definition of an ISR. So let's review a little how things happen:
A peripheral requests the CPU's attention (whether because a receive buffer within a communication interface has been filled, or because a pulse was detected on one of the microcontroller's pins), and if interrupts are enabled, the interrupt is issued on one of the interrupt request lines that are fed into the microprocessor.
Interrupts can generally be enabled from three places: from the corresponding communication interface or module within the microcontroller, from the interrupt controller (a dedicated module that virtually every microcontroller is supplied with, which can perform some actions over interrupts, such as remapping or masking them), and from the microprocessor itself.
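As a sketch of those three enable levels, here is what the setup code for a receive interrupt might look like. All register and bit names below are made up for illustration (real names depend on the specific microcontroller), and the registers are simulated with plain variables instead of memory-mapped addresses:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical registers, standing in for real memory-mapped ones;
   the three enable levels are the point here, not the names. */
static volatile uint32_t SPI_CR;     /* peripheral control register */
static volatile uint32_t NVIC_ISER;  /* interrupt-controller enable register */
static int cpu_interrupts_enabled;   /* global CPU enable (on real hardware,
                                        an instruction such as an "enable
                                        interrupts" opcode) */

#define SPI_RXIE    (1u << 7)        /* receive-interrupt enable bit (made up) */
#define SPI_IRQ_NUM 5                /* request-line number of the SPI module (made up) */

void enable_spi_rx_interrupt(void)
{
    SPI_CR |= SPI_RXIE;                /* 1. in the peripheral module itself */
    NVIC_ISER |= (1u << SPI_IRQ_NUM);  /* 2. in the interrupt controller */
    cpu_interrupts_enabled = 1;        /* 3. globally, in the CPU */
}
```

If any one of the three levels is left disabled, the request never reaches the handler, which is a classic source of "my ISR never fires" bugs.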

Typically the "number" sent by the peripheral via the request line is an index into the interrupt vector table that every microprocessor must have. In this way the CPU can discriminate between the various types of external interrupts: the request number tells the CPU whether some data was received via SPI or via CAN, or whether an analog-to-digital conversion was completed.

In this way the CPU "knows" how to jump to the correct location in the vector table, where all interrupt handlers are mapped.
But before jumping there it must save the state prior to the interrupt. All of the processing that occurs between two events, the raising of an interrupt by a peripheral and the beginning of ISR execution, is called "interrupt latency" (so you got your answer for the interview) and is strongly architecture dependent. It is a measure of how able the hardware (CPU) is to handle the switch from the common execution flow to an interrupt state. Do not confuse it with the context switch from multitasking management: interrupt latency has to do only with external interrupts, not with higher priority processes requesting kernel access.
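Conceptually, the vector table is just an array of function pointers indexed by the request number. The IRQ numbers and handler names below are invented for illustration, and dispatch() only models what the hardware does when a request line fires:

```c
#include <assert.h>

/* Hypothetical IRQ numbers; real values are device-specific. */
enum { IRQ_SPI_RX = 0, IRQ_CAN_RX = 1, IRQ_ADC_DONE = 2, IRQ_COUNT = 3 };

static int last_serviced = -1;   /* records which handler ran, for demonstration */

static void spi_rx_isr(void)   { last_serviced = IRQ_SPI_RX; }
static void can_rx_isr(void)   { last_serviced = IRQ_CAN_RX; }
static void adc_done_isr(void) { last_serviced = IRQ_ADC_DONE; }

/* The vector table: one handler address per request number. */
static void (*const vector_table[IRQ_COUNT])(void) = {
    [IRQ_SPI_RX]   = spi_rx_isr,
    [IRQ_CAN_RX]   = can_rx_isr,
    [IRQ_ADC_DONE] = adc_done_isr,
};

/* What the hardware does conceptually: use the request number
   sent by the peripheral as an index, and jump there. */
static void dispatch(int irq)
{
    vector_table[irq]();
}
```

On real hardware the table lives at a fixed (or relocatable) address and the indexing is done by the interrupt controller, not by C code, but the lookup logic is the same.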

So we have reached the moment in time where interrupt handlers need to do their job. The program execution flow was interrupted and the state prior to entering the ISR was saved: parameters and local variables were pushed on the stack, the program counter is pointing to the right index in the interrupt vector table, and the link register has been updated with the address where program execution will continue after returning from the interrupt.

Some good practices in writing interrupt service routines, which are not mandatory and are machine-dependent, are the following:

    – a rule of thumb when coding interrupt service routines is that they have to be short, as short as possible (good news :)); they execute critical code, in privileged mode, so errors are easier to make and more costly (it is like playing with your Linux kernel sources or making a faulty setting in the BIOS)
    – another important thing is that you may have to disable interrupts when entering the ISR; some comments need to be added here: if the program disables interrupts, the handler will last longer, so we have to count this as a drawback in our design, but it is good if you want to prevent other, higher-priority events from interrupting the interrupt and completely diverting the program execution flow; if the CPU supports interrupt nesting this won't be a problem
    – the first thing an interrupt handler has to do is read the interrupt flags; it has to ensure that those are set properly
    – there are two different approaches to clearing the flags: either you clear them at the beginning of the ISR, or you clear them at the end; if they are cleared at the beginning you avoid triggering the same interrupt once again (there are cases in which the CPU, after an interrupt has been triggered and the program has reached the handler, sees that the flags are still set, so it triggers the interrupt once again, which you, for sure, do not want to happen)
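The practices above can be sketched as a minimal receive ISR. The UART register names are made up, and the hardware registers are simulated with plain variables so the sketch stays self-contained; on a real part they would be volatile memory-mapped addresses:

```c
#include <assert.h>
#include <stdint.h>

/* Simulated peripheral registers (hypothetical names). */
static volatile uint32_t UART_STATUS;  /* bit 0 = receive-complete flag */
static volatile uint8_t  UART_DATA;    /* received byte */
#define UART_RX_FLAG (1u << 0)

static volatile uint8_t last_byte;     /* handed off to the main loop */

/* A short ISR: check the flag, clear it early, do the minimum work. */
void uart_rx_isr(void)
{
    if (!(UART_STATUS & UART_RX_FLAG))
        return;                        /* spurious interrupt: flag not set */
    UART_STATUS &= ~UART_RX_FLAG;      /* clear early, so the same request
                                          cannot re-trigger the handler */
    last_byte = UART_DATA;             /* copy the data out, nothing more;
                                          any heavy processing belongs in
                                          the main loop */
}
```

Notice the handler neither loops nor calls anything slow; it just moves one byte and returns, which keeps the time spent with interrupts disabled as small as possible.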

Other ways of handling peripheral requests besides interrupts are DMA requests and polling. The latter is trivial: the program does not continue its normal execution flow but polls for a certain event. Take, for example, an A/D conversion. When using interrupts, the program continues its normal execution flow and is informed by an interrupt when the conversion has completed. When using the polling mechanism, program execution is focused on waiting for the conversion flag to be set. This alternative is much slower and does not support multitasking (execution is blocked in one single task, waiting indefinitely for an event to occur); interrupts do not have to be enabled in this case.
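A polled A/D read might be sketched as below. The ADC register names are invented, and simulate_hardware() stands in for the converter actually finishing (in its absence the while loop would spin forever, which is exactly the cost of polling):

```c
#include <assert.h>
#include <stdint.h>

/* Simulated A/D converter registers (hypothetical names). */
static volatile uint32_t ADC_CTRL;
static volatile uint32_t ADC_STATUS;   /* bit 0 set when conversion is done */
static volatile uint16_t ADC_RESULT;
#define ADC_START (1u << 0)
#define ADC_DONE  (1u << 0)

/* Fake hardware: pretend the conversion completes instantly. */
static void simulate_hardware(void)
{
    if (ADC_CTRL & ADC_START) {
        ADC_RESULT = 512;              /* arbitrary sample value */
        ADC_STATUS |= ADC_DONE;
    }
}

/* Polling: start a conversion, then spin until the done flag is set. */
uint16_t adc_read_polled(void)
{
    ADC_CTRL |= ADC_START;
    simulate_hardware();               /* stands in for the real converter */
    while (!(ADC_STATUS & ADC_DONE))
        ;                              /* busy-wait: the CPU does nothing else */
    ADC_STATUS &= ~ADC_DONE;
    return ADC_RESULT;
}
```

The busy-wait loop is the whole difference: with interrupts the CPU would run other code during the conversion, here it burns cycles checking a flag.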

This is the case when dad periodically phones that guy (the one who knocked at the door) to ask if he needs him.

DMA (direct memory access), as the name itself tells, is when a peripheral (usually a fast one, like a hard disk or a display) initiates a DMA request to the CPU, informing it that it intends to make intensive reads/writes in RAM. After the CPU approves this request there is no direct connection between the peripheral and the CPU; the peripheral communicates directly with RAM.
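A toy model of that hand-off: the CPU only programs the source, destination and length, and the controller moves the data on its own. The struct and function names are made up; in this sketch the "background" transfer is just a direct copy:

```c
#include <assert.h>
#include <string.h>
#include <stdint.h>

/* Toy model of a DMA channel descriptor (hypothetical layout). */
struct dma_channel {
    const uint8_t *src;   /* where the peripheral's data comes from */
    uint8_t       *dst;   /* where it goes in RAM */
    uint32_t       len;   /* how many bytes to move */
};

/* The CPU's only job: program the channel and kick it off. On real
   hardware the copy then proceeds on the memory bus without the CPU;
   here we just perform it directly to keep the sketch runnable. */
static void dma_start(struct dma_channel *ch)
{
    memcpy(ch->dst, ch->src, ch->len);
}
```

The point is that after dma_start() the CPU is free; typically a single "transfer complete" interrupt fires at the end, which is the one knock on the door the analogy below describes.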

This is the case when the guy knocks at the door and tells you to leave the door wide open because he will move a lot of things back and forth. He will disturb your dad only once.
