Embedded Questions Answers
Hardware Interrupt
A hardware interrupt is a signal that tells the CPU something has happened in a hardware device and must be responded to immediately. An interrupt causes the processor to save its execution state and begin executing an interrupt service routine.
Software Interrupt
A software interrupt is an instruction that causes a context switch to an interrupt handler, similar to a hardware interrupt. It is usually generated within the processor by executing an instruction. Software interrupts are often used to implement system calls.

Interrupt Latency
When an interrupt fires, the microprocessor executes an interrupt service routine (ISR). The amount of time that elapses between a device's interrupt request and the first instruction of the corresponding ISR is known as interrupt latency.

Interrupt Service Routine
A single ISR (single interrupt) may be triggered by multiple sources, so the ISR must include a condition test to determine which source caused the interrupt. Moreover, if multiple sources trigger the interrupt at the same time, the ISR must detect all of them, or some interrupts will be missed.
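A minimal sketch of such a condition test. The status register and its bit layout are hypothetical, modeled as a plain variable so the dispatch logic can be shown without real hardware; the point is that the ISR services every set bit, so simultaneous requests are not missed:

```c
#include <stdint.h>

/* Hypothetical status register: each set bit marks a pending source.
 * On real hardware this would be a memory-mapped device register. */
static volatile uint8_t irq_status;

#define SRC_UART_RX  (1u << 0)
#define SRC_TIMER    (1u << 1)
#define SRC_ADC_DONE (1u << 2)

static int uart_events, timer_events, adc_events;  /* handler counters */

/* Shared ISR: test and handle every pending source, not just one. */
void shared_isr(void)
{
    uint8_t pending = irq_status;
    irq_status = 0;                 /* acknowledge all sampled sources */

    if (pending & SRC_UART_RX)  uart_events++;
    if (pending & SRC_TIMER)    timer_events++;
    if (pending & SRC_ADC_DONE) adc_events++;
}
```

If the ISR tested only one source and returned, an ADC request arriving together with a UART request would be silently dropped.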
Difference between ISR and normal function
1) An ISR has no parameters.
2) An ISR has no return value.
3) An ISR is asynchronous to the instruction flow.
4) An ISR is triggered by a hardware event.
5) An ISR can't call blocking or non-reentrant functions, such as printf().
6) An ISR should be as short as possible and shouldn't call functions that take a long time to execute, such as printf() or floating-point routines.
7) An ISR can't be blocked and can't wait on a semaphore, but it can signal one.

Reentrant Function
A reentrant function is one that can be used by more than one task concurrently without fear of data corruption. Conversely, a non-reentrant function is one that cannot be shared by more than one task unless mutual exclusion is ensured, either by using a semaphore or by disabling interrupts during critical sections of code. A reentrant function can be interrupted at any time and resumed later without loss of data. Reentrant functions either use local variables or protect their data when global variables are used. A reentrant function:
1) Does not hold static data over successive calls.
2) Does not return a pointer to static data; all data is provided by the caller of the function.
3) Uses local data, or ensures protection of global data by making a local copy of it.
4) Must not call any non-reentrant functions.

Drawbacks of Dynamic Memory Allocation
1) It might cause a memory leak when a programmer heedlessly forgets to free unused memory.
2) It might cause memory fragmentation when we allocate many variables of various sizes.
3) It takes longer to allocate than local or static memory, because the allocator must search the heap for a free block large enough to hold the data.

Kick Starting
Consider an interrupt-driven input routine which reads data from an I/O port of an input device, processes it, and writes it into a FIFO queue. There is an interrupt-driven output routine which tries to fetch the data from the FIFO
queue, and writes the processed data to an I/O port of an output device. The output ISR is triggered when the output device's status changes from busy to ready. This arrangement creates two problems:
1) When the input ISR writes the first data item into the FIFO queue, the output device's status is already ready, so no busy-to-ready transition occurs and the output ISR is never triggered.
2) Suppose the output device's status does somehow change from busy to ready, so the CPU executes the output ISR. If the processed data is not ready (the queue is empty), the output device's status remains ready, so the output ISR is never triggered again.
Solution
To solve these problems we can use the kick-starting technique: we write an output routine (not an ISR) that writes data to the I/O port of the output device, and use a device-busy flag to decide whether we should kick-start that routine.
The output routine first checks whether any data is enqueued. If yes, it dequeues and outputs one item and sets the busy flag to true; if not, it does nothing and returns. The input ISR enqueues its data, then tests the busy flag: if the flag is false, it calls the output routine (the kick start); if it is true, it does not call the output routine. The output ISR first clears the busy flag and then calls the output routine. Thus, if the queue is empty, the output routine does nothing and the busy flag stays false, so the input ISR will kick-start the output the next time it enqueues data.
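The scheme above can be sketched as follows. The ring buffer, the busy flag, and write_output_port() are simplified stand-ins (the "port" is a stub that records the last value sent), since the real port addresses and interrupt wiring are hardware-specific:

```c
#include <stdbool.h>

#define FIFO_SIZE 8

static int fifo[FIFO_SIZE];
static int head, tail, count;          /* simple ring buffer */
static volatile bool device_busy;      /* the kick-start flag */

static int last_sent = -1;             /* stub for the output port */
static void write_output_port(int v) { last_sent = v; }

/* Output routine (not an ISR): send one item if any is queued. */
static void output_routine(void)
{
    if (count == 0)
        return;                        /* nothing to send; flag stays false */
    write_output_port(fifo[head]);
    head = (head + 1) % FIFO_SIZE;
    count--;
    device_busy = true;                /* device is now working on this item */
}

/* Input ISR: enqueue processed data, kick-start output if device is idle. */
void input_isr(int data)
{
    if (count < FIFO_SIZE) {
        fifo[tail] = data;
        tail = (tail + 1) % FIFO_SIZE;
        count++;
    }
    if (!device_busy)
        output_routine();              /* the kick start */
}

/* Output ISR: runs on the device's busy-to-ready transition. */
void output_isr(void)
{
    device_busy = false;               /* clear flag first */
    output_routine();                  /* sets it again if more data queued */
}
```

Note how the empty-queue case falls out naturally: output_routine() returns without setting the flag, so the next enqueue kick-starts the device again.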
Major Concerns When Selecting an RTOS
1) Interrupt latency.
2) Footprint (the size of the executable generated after compiling).
3) Context-switching time is also a vital element in the selection.

Linux and Real Time
Linux is built as a general-purpose multiuser operating system. General-purpose operating systems are tuned to maximize average throughput even at the expense of latency, while real-time operating systems attempt to minimize, and place an upper bound on, latency, sometimes at the expense of average throughput. There are several reasons why standard Linux is not suitable for real-time use:
Non-preemptive kernel procedures
This is a fancy way of saying that kernel system calls are not preemptible. Once a process enters the kernel, it can't be preempted until it's ready to exit the kernel. If an event occurs while the kernel is executing, the process waiting for that event can't be scheduled until the currently executing process exits the kernel. Some kernel calls, fork() for example, can hold off preemption for tens of milliseconds.
Paging
The process of swapping pages in and out of virtual memory is, for all practical purposes, unbounded. We have no way of knowing how long it will take to get a page off a disk drive, so we simply can't place an upper bound on the time a process may be delayed due to a page fault.
Fairness in Scheduling
Reflecting its Unix heritage as a multiuser timesharing system, the conventional Linux scheduler does its best to be fair to all processes. Thus, the scheduler may give the processor to a low-priority process that has been waiting a long time, even though a higher-priority process is ready to run.
Request Reordering
Linux reorders I/O requests from multiple processes to make more efficient use of hardware. For example, hard-disk block reads from a lower-priority process may be given precedence over read requests from a higher-priority process in order to minimize disk-head movement or improve the chances of error recovery.
Batching
Linux will batch operations to make more efficient use of resources. For example, instead of freeing one page at a time when memory gets tight, Linux will run through the list of pages, clearing out as many as possible, delaying the execution of all processes.
Priority Inversion
Bounded priority inversion happens when a higher-priority process is waiting for a resource (e.g., a critical section protected by a mutex or semaphore) which is currently held by a lower-priority process. The high-priority process must wait until the low-priority process leaves the critical section.
Unbounded priority inversion happens when a middle-priority process then preempts the low-priority process, so the high-priority process must additionally wait for the middle-priority process.
Solutions
Priority Inheritance Protocol: when a low-priority process holds the resource, boost its priority to match that of the highest-priority process waiting for the resource. This is not very practical because we need to track, at run time, all processes waiting on the semaphore.
Priority Ceiling Protocol: when a low-priority process acquires the resource, boost its priority to a predefined level which must be higher than that of every process that could possibly enter the critical section. Then we don't need to track which processes are waiting for the resource at run time. The disadvantage is that the ceiling level must be defined before run time.
Harvard vs. von Neumann Architecture
With a Harvard architecture there are two separate memory spaces, one for instructions and one for data. This increases throughput, because while we are executing one instruction we can be fetching the next one. Another advantage is that the instruction bus and the data bus can have different widths.
RISC vs. CISC
The major difference is that a RISC processor has a smaller number of instructions, each of which performs a simpler function and has a simpler format, and most instructions take the same number of clock cycles, so we can apply pipelining to improve performance. On the other hand, CISC processors have a large number of different, complex instructions, each of which may require a varying number of clock cycles and occupy a varying number of bits. A RISC processor has better performance, but CISC code can have far fewer instructions than the equivalent RISC code. Moreover, CISC typically requires more hardware resources to support its powerful instructions, and therefore has higher power consumption. For example, a CISC processor may have dedicated hardware for multiplication, while a RISC processor may combine shift and add instructions to achieve multiplication.

Frame Pointer
While a subroutine is active, the frame pointer points at the top of the stack. (Remember, our stacks grow downward, so $fp correctly points at the last word that was pushed onto the stack, the top of the stack.) But the stack (and the stack pointer) may be involved in arithmetic expression evaluation, which often involves pushing and popping values onto and off of the stack. If $sp keeps changing, it would be hard to access a variable at a fixed location on the stack. To make things easy for compilers (and for human assembly-language programmers) it is convenient to have a frame pointer that does not change its value while a subroutine is active: the variables are always the same distance from the unchanging frame pointer. In the subroutine prologue, the caller's frame pointer is pushed onto the stack along with the return address and any S registers. The subroutine then makes room on the stack for its variables and points the frame pointer at the top of the stack frame.
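The shift-and-add multiplication mentioned in the RISC vs. CISC comparison can be sketched in C: each set bit of the multiplier contributes a shifted copy of the multiplicand, which is how a core without a hardware multiply unit can compose multiplication from shift and add instructions:

```c
#include <stdint.h>

/* Multiply two unsigned integers using only shifts and adds,
 * the way a RISC core without a multiply unit might do it.
 * Runs one iteration per bit of the multiplier. */
uint32_t shift_add_mul(uint32_t a, uint32_t b)
{
    uint32_t product = 0;
    while (b != 0) {
        if (b & 1u)           /* current multiplier bit set? */
            product += a;     /* add the shifted multiplicand */
        a <<= 1;              /* next bit weighs twice as much */
        b >>= 1;
    }
    return product;
}
```

This is the software analogue of the trade-off above: the CISC multiply instruction hides the same work in dedicated hardware, while the RISC version spends extra instructions (and code size) to get it done with simpler hardware.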