Difference between Microprocessor and Microcontroller
Let's make one thing very clear before jumping to the comparison: microprocessors and microcontrollers are different and are not designed to replace each other. Understanding the internal working of both is essential for an embedded programmer.
Harvard vs Von Neumann architecture
| Harvard | Von Neumann |
|---|---|
| Has physically separate program and data memory, with separate buses to access each. | Has a single physical memory for program and data, with a single bus to access it. |
| The processor can access program and data memory at the same time, which helps speed up program execution. | Either program or data can be accessed at a time. |
| Address and data bus widths can differ between program and data memory. | Since only a single bus is available, bus characteristics are the same for program and data. |
| HW design of this architecture is complex and expensive. | HW design of this architecture is simple and cost efficient. |
| Since program and data memories are separate, a program error cannot overwrite the instructions. | Since program and data share the same memory, a program error can overwrite the instructions and result in undesirable behavior or a crash. |
| Example: Microchip PIC microcontrollers | Examples: Pentium, Motorola 68HC11 |
What is modified Harvard architecture? Provide an example of a processor based on this architecture.
It is the Harvard architecture with one modification: program and data memory are not physically separate, yet both can still be accessed simultaneously. This is achieved by having separate caches for instructions and data.
As long as program and data are being accessed from the caches, it behaves as a Harvard architecture; when a cache does not have the required information, program and data are fetched from physical memory sequentially, as in the Von Neumann architecture.
Modified Harvard architecture combines the benefits of both Von Neumann and Harvard architectures: it can access data and program simultaneously, as in the Harvard architecture, and it can access program instructions as if they were data, which is very useful (e.g. for writing self-modifying code).
Examples are Cortex-M3 based STM32 and Atmel AVR microcontrollers.
What is instruction pipelining? Explain.
Instruction pipelining is a mechanism used in CPUs to increase overall instruction execution throughput; in simpler words, it increases the number of instructions that can be executed in a given time period. Instead of executing each instruction one by one in sequential order, each instruction's execution is divided into multiple stages, and stages of multiple instructions are executed in parallel to increase throughput. For a generic four-stage pipeline, each instruction is divided into four stages as below:
- Fetch = Fetch instruction from program memory
- Decode = Decode the instruction opcode
- Execute = Perform operation
- Write Back = Write the result to data memory/registers.
See the image below for a generic four-stage pipeline.
What is wait state? How to avoid it?
A wait state is a microprocessor clock cycle (or cycles) during which no useful work occurs at all; the CPU is simply waiting for some event. To execute instructions, the microprocessor needs to access external memory, devices, etc. Compared to the CPU, memory and devices are slower, so while accessing them the CPU must wait, and those processor clock cycles are completely wasted.
Wait state time can be reduced but cannot be eliminated completely. It can be reduced by using cache/on-chip memory instead of external memory; instruction pipelining can also be used to hide part of it.
Memory mapped I/O vs I/O mapped I/O.
I/O refers to input and output devices. Memory mapped I/O and I/O mapped I/O are two separate mechanisms for a CPU to access I/O devices.
In case of memory mapped I/O, both memory and I/O devices are mapped onto the same address space: part of the total address space is reserved for memory and part for I/O devices. I/O devices can be accessed as if accessing normal data memory, using the normal memory access instructions. In this case the full memory space cannot be utilized for data storage, as part of it is reserved for addressing I/O devices. For this type of architecture, the address decoder circuitry must decode the entire address space to distinguish data addresses from device addresses and between different devices; for example, a 16-bit address bus requires a 16-bit address decoder. Since I/O devices are slower compared to memory, with this type of architecture overall memory access by the CPU can be slower.
In case of I/O mapped I/O (or port mapped I/O), I/O devices and memory are mapped onto separate address spaces, which is why this mechanism is also often referred to as isolated I/O. Special instructions (for example IN, OUT) are used to access I/O devices. In this architecture the full memory space can be utilized for data storage, as memory and I/O are addressed in separate address spaces. If the address bus for I/O devices is 8 bits wide, only an 8-bit address decoder circuit is required.
What is an interrupt? Explain the interrupt execution sequence.
An interrupt is an asynchronous signal from SW or HW to the processor, indicating that an event/service request needs to be looked at urgently. When an interrupt occurs, the microprocessor suspends the current execution, jumps to the interrupt service routine (ISR) location, executes it, and then resumes the previously suspended execution.
Below is the general sequence when normal program execution is interrupted by SW or HW.
Interrupt Execution Sequence
- Microprocessor completes the execution of the current program instruction.
- Microprocessor acknowledges the interrupt request and stores the status of all interrupts internally.
- Microprocessor stores the program status and program counter registers onto the stack.
- Microprocessor loads the program counter with the address of the interrupt service routine to be executed and starts its execution.
- At the beginning of the interrupt service routine, the microprocessor first executes program instructions meant to store a copy of the current values of the general-purpose registers. This is done to ensure that these registers can be restored to their previous state after manipulation by the ISR. It is the job of the interrupt service routine to make sure that the very first piece of ISR code pushes these registers onto the stack.
- Moving ahead in the ISR, the microprocessor executes the program instructions meant to serve the interrupt request.
- Next, the microprocessor executes program instructions meant to restore the general-purpose registers from the stack. Again, it is the job of the interrupt service routine that the last piece of ISR code (before RETI) pops these registers from the stack.
- At the end of the ISR, the microprocessor encounters the RETI instruction, which restores the old values of the program counter and program status registers from the stack.
- Microprocessor resumes program execution from the restored program counter. It is worth mentioning here that the RETI instruction also signals that the current interrupt request has been served and the microprocessor is ready to accept a new interrupt request.
What is the difference between RET and RETI?
RET is used to return from a normal subroutine, while RETI is used to return from an interrupt service routine.
Fundamentally, both instructions perform the same job of restoring program counter with return address from stack.
The RETI (return from interrupt) instruction additionally clears the interrupt-in-service flag, which indicates that the interrupt has been served and the processor is ready to accept a new request for the interrupt that has just been served.
What is interrupt latency? Explain.
Interrupt latency is the interrupt response time of the processor. To be precise, it is the time period between the moment an interrupt occurs and the moment the processor starts serving it.
Impact of interrupt latency on real-time embedded systems: a real-time embedded system often needs to control outputs/actuators in response to an interrupt event with minimal or no delay; failing to do so in time can have an undesirable or damaging impact, depending on the type of system.
Factors impacting interrupt latency: interrupt latency depends on both software and hardware. It mainly depends on the architecture of the processor, the interrupt controller, and the operating system (program) used to handle the interrupt. Latency added by the processor architecture and interrupt controller is generally fixed and is often specified by the device manufacturer as minimum interrupt latency in the device specification manual (datasheet). Latency added by software/the operating system may not be fixed and may differ every time an interrupt is served. Many times the OS disables interrupts to execute a critical section of code and re-enables them once the execution is complete; if an interrupt occurs while disabled by the OS, its processing is delayed until interrupts are re-enabled.
How to minimize interrupt latency: interrupt handling software must make sure that interrupts are not disabled unnecessarily, frequently, or for long stretches of execution time. Also, high-priority interrupts can delay the processing of low-priority interrupts, hence as a rule of thumb, processing inside an ISR should always be as short as possible.
What is an NMI (Non-Maskable Interrupt)?
An NMI cannot be disabled by program/software. An NMI cannot be interrupted by any other interrupt; in other words, it is the highest-priority interrupt the microprocessor has.
The microprocessor must accept and serve an NMI when it occurs.
NMI is used to notify the microprocessor about non-recoverable hardware failures/errors and system reset. Below are some example situations which could trigger an NMI:
- Program tries to access invalid memory location.
- RAM/ROM corruption is detected.
- BUS error
- Unaligned memory access
- System reset
The NMI handler can be used by the programmer to analyse processor memory and fix the faulty code which caused the NMI.
Next question coming soon ;-)