Existential Questions On The CPU
After the execution of an instruction, the entire process repeats, with the next instruction cycle
normally fetching the next-in-sequence instruction because of the incremented value in the program
counter. If a jump instruction was executed, the program counter will be modified to contain the
address of the instruction that was jumped to and program execution continues normally. In more
complex CPUs, multiple instructions can be fetched, decoded and executed simultaneously. This
section describes what is generally referred to as the "classic RISC pipeline", which is quite common
among the simple CPUs used in many electronic devices (often called microcontrollers). It largely
ignores the important role of CPU cache, and therefore the memory access stage of the pipeline.
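A minimal sketch of this fetch-decode-execute cycle for an invented accumulator-style toy CPU (the opcodes, the encoding and the memory contents are made up purely for illustration):

    /* toy_cpu.c - illustrative only, not a real instruction set */
    #include <stdio.h>
    #include <stdint.h>

    enum { OP_LOAD, OP_ADD, OP_JMP, OP_HALT };

    int main(void) {
        /* program: acc = 5; acc += 3; halt  (each instruction = opcode, operand) */
        uint8_t mem[] = { OP_LOAD, 5, OP_ADD, 3, OP_HALT, 0 };
        unsigned pc = 0;   /* program counter */
        int acc = 0;       /* accumulator     */

        for (;;) {
            uint8_t opcode  = mem[pc];     /* fetch                                */
            uint8_t operand = mem[pc + 1];
            pc += 2;                       /* point at the next-in-sequence instr. */
            switch (opcode) {              /* decode + execute                     */
            case OP_LOAD: acc = operand;  break;
            case OP_ADD:  acc += operand; break;
            case OP_JMP:  pc = operand;   break;   /* a jump overwrites the PC     */
            case OP_HALT: printf("acc = %d\n", acc); return 0;
            }
        }
    }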
Some instructions manipulate the program counter rather than producing result data directly; such
instructions are generally called "jumps" and facilitate program behavior like loops, conditional
program execution (through the use of a conditional jump), and the existence of functions. In some
processors, some other instructions change the state of bits in a "flags" register. These flags can be
used to influence how a program behaves, since they often indicate the outcome of various
operations. For example, in such processors a "compare" instruction evaluates two values and sets
or clears bits in the flags register to indicate which one is greater or whether they are equal; one of
these flags could then be used by a later jump instruction to determine program flow.
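For example (the pseudo-assembly in the comments is generic and not tied to any real instruction set or compiler), a counted loop in C boils down to a compare that sets flags followed by a conditional jump that reads them:

    #include <stdio.h>

    int sum_to(int n) {
        int sum = 0;
        for (int i = 1; i <= n; i++)   /* loop:  CMP i, n    ; sets the flags     */
            sum += i;                  /*        JG  done    ; jump if i > n      */
                                       /*        ADD sum, i                       */
                                       /*        INC i                            */
                                       /*        JMP loop    ; unconditional jump */
        return sum;                    /* done:  ...                              */
    }

    int main(void) {
        printf("%d\n", sum_to(10));    /* prints 55 */
        return 0;
    }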
HOW CAN I IMPROVE CPU PERFORMANCE?
• eliminate the factors that hinder the CPU – for example, the use of cache memory
• simple structures – no longer possible on today's processors
• increase the clock frequency – limited by technological issues
• parallel execution of instructions
Techniques:
• Pipeline structure
• Multiple execution units
• Branch prediction
• Speculative execution
• Predication
• Out-of-order execution
• Register renaming
• Hyperthreading
• RISC architecture
WHAT IS LATENCY?
The number of clock cycles needed to execute a single instruction from start to finish – in a pipelined CPU it is given by the number of pipeline stages.
WHAT IS THROUGHPUT?
The number of instructions finished per clock cycle – in theory equal to 1, in practice smaller (because of dependencies).
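As a rough worked example (assuming an idealised pipeline with no stalls), a program of n instructions on a k-stage pipeline finishes in k + (n - 1) cycles, so throughput approaches 1 instruction per cycle as n grows:

    #include <stdio.h>

    int main(void) {
        unsigned k = 5;                 /* pipeline stages = latency of one instruction */
        unsigned n = 1000;              /* instructions executed                        */
        unsigned cycles = k + (n - 1);  /* idealised: one instruction completes/cycle   */
        printf("cycles = %u, throughput = %.3f instr/cycle\n",
               cycles, (double)n / cycles);   /* ~0.996, close to 1 */
        return 0;
    }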
WHAT IS BTB?
Branch Target Buffer – a small cache that stores, for each recently executed branch, a tag, the predicted target address, and a prediction counter. When the jump condition turns out to be true, the counter is incremented; when it is false, the counter is decremented. The value saturates between 0 (00) and 3 (11).
Implementation: state coding:
– strong hit – 11
– weak hit – 10
– weak miss – 01
– strong miss – 00
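A minimal sketch of that 2-bit saturating counter in C (only the counter is modelled here; a real BTB entry would also hold the tag and the predicted target address):

    #include <stdio.h>

    static unsigned counter = 2;                      /* start in weak hit (10)       */

    int predict_taken(void) { return counter >= 2; }  /* 10 or 11 -> predict taken    */

    void update(int taken) {
        if (taken) { if (counter < 3) counter++; }    /* saturate at 11 (strong hit)  */
        else       { if (counter > 0) counter--; }    /* saturate at 00 (strong miss) */
    }

    int main(void) {
        int outcomes[] = { 1, 1, 0, 1, 1, 1 };        /* actual branch behaviour      */
        for (int i = 0; i < 6; i++) {
            printf("predict %s, actually %s\n",
                   predict_taken() ? "taken" : "not taken",
                   outcomes[i]     ? "taken" : "not taken");
            update(outcomes[i]);
        }
        return 0;
    }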
WHAT IS PREDICATION?
Predication is an architectural feature that provides an alternative to conditional transfer
of control, implemented by machine instructions such as conditional branch, conditional call,
conditional return, and branch tables. Predication works by executing instructions from both
paths of the branch and only permitting those instructions from the taken path to modify
architectural state. The instructions from the taken path are permitted to modify architectural
state because they have been associated (predicated) with a predicate, a Boolean value used by
the instruction to control whether the instruction is allowed to modify the architectural state or not.
It is similar to speculative execution. In short, an instruction produces its effects if and only if its associated predicate is true.
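The idea can be illustrated in C: both candidate values are computed and a Boolean predicate selects which one reaches the result, with no jump involved. Whether a compiler turns this into an actual predicated or conditional-move instruction depends on the target architecture and optimisation level:

    #include <stdio.h>

    int max_branchy(int a, int b) {
        if (a > b) return a;          /* typically a compare + conditional jump */
        return b;
    }

    int max_predicated(int a, int b) {
        int p = (a > b);              /* the predicate: 1 or 0                  */
        return p * a + (1 - p) * b;   /* both paths computed, predicate selects */
    }

    int main(void) {
        printf("%d %d\n", max_branchy(3, 7), max_predicated(3, 7));   /* 7 7 */
        return 0;
    }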
WHAT ARE THE CACHE WRITE POLICIES?
• Write-through: the write is done synchronously both to the cache and to the backing store.
• Write-back (also called write-behind): initially, the write is done only to the cache. The write to the backing store is postponed until the modified content is about to be replaced by another cache block.
Both write-through and write-back policies can use either of the two write-miss policies – write allocate (on a write miss, the block is first loaded into the cache and then updated) and no-write allocate (the write goes straight to the backing store and the block is not cached) – but they are usually paired in this way:
• A write-back cache uses write allocate, hoping for subsequent writes (or even reads) to the same location, which is now cached.
• A write-through cache uses no-write allocate. Here, subsequent writes have no advantage, since they still need to be written directly to the backing store.
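A toy, single-block cache model sketching the difference between the two policies (the structure and the function names are invented purely for illustration):

    #include <stdio.h>

    #define MEM_SIZE 16
    int memory[MEM_SIZE];                               /* the backing store           */
    int cache_value, cache_addr = -1, cache_dirty = 0;  /* a one-block cache           */

    /* write-through with no-write allocate: memory is always updated immediately */
    void write_through(int addr, int value) {
        if (cache_addr == addr) cache_value = value;    /* update cache only on a hit  */
        memory[addr] = value;
    }

    /* write-back with write allocate: write only the cache, flush on eviction */
    void write_back(int addr, int value) {
        if (cache_addr != addr) {                       /* miss: evict, then allocate  */
            if (cache_dirty && cache_addr >= 0)
                memory[cache_addr] = cache_value;       /* the postponed write         */
            cache_addr = addr;
        }
        cache_value = value;
        cache_dirty = 1;
    }

    int main(void) {
        write_through(3, 42);
        printf("write-through: memory[3] = %d\n", memory[3]);               /* 42 */
        write_back(5, 7);
        printf("write-back:    memory[5] = %d (still stale)\n", memory[5]); /* 0  */
        write_back(9, 8);                                /* evicts block 5, flushes it */
        printf("after evict:   memory[5] = %d\n", memory[5]);               /* 7  */
        return 0;
    }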
WHAT IS AN INTERRUPT?
An interrupt is a signal to the processor emitted by hardware or software indicating an event that
needs immediate attention. An interrupt alerts the processor to a high-priority condition requiring
the interruption of the current code the processor is executing. The processor responds by
suspending its current activities, saving its state, and executing a function called an interrupt
handler (or an interrupt service routine, ISR) to deal with the event. This interruption is
temporary, and, after the interrupt handler finishes, the processor resumes normal activities.
Interrupts can be hardware interrupts, software interrupts, or traps (exceptions raised by the CPU itself). Hardware interrupts can be either maskable (their delivery depends on the interrupt flag, IF) or non-maskable.
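Real interrupt handlers are installed through architecture- and OS-specific mechanisms (on x86, via the interrupt descriptor table), so they cannot be shown portably. A user-space analogue of the same pattern – current work is suspended, a short handler runs, then normal execution resumes – is a POSIX signal handler:

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    volatile sig_atomic_t got_signal = 0;

    void handler(int signo) {            /* plays the role of the ISR               */
        (void)signo;
        got_signal = 1;                  /* keep the handler short, like a real ISR */
    }

    int main(void) {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = handler;
        sigaction(SIGALRM, &sa, NULL);   /* "register the interrupt handler"        */

        alarm(1);                        /* the "device" raises the event later     */
        while (!got_signal)
            pause();                     /* suspended until the handler has run     */
        printf("handler finished, resuming normal work\n");
        return 0;
    }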
WHAT IS A THREAD?
A thread of execution is the smallest sequence of programmed instructions that can be managed
independently by a scheduler, which is typically a part of the operating system. The
implementation of threads and processes differs between operating systems, but in most cases a
thread is a component of a process. Multiple threads can exist within one process,
executing concurrently and sharing resources such as memory, while different processes do not
share these resources. In particular, the threads of a process share its executable code and the
values of its dynamically allocated variables and non-thread-local global variables at any given
time.
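A minimal pthreads sketch of two threads within one process sharing (and serialising access to) the same global variable; it can be built with something like "cc threads.c -pthread":

    #include <pthread.h>
    #include <stdio.h>

    long counter = 0;                              /* shared by all threads of the process */
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);
            counter++;                             /* both threads update the same memory  */
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);        /* 200000 */
        return 0;
    }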
WHAT IS A DESCRIPTOR?
Segment descriptor – a data structure that describes a memory segment for the CPU: its base address, limit, and access rights. A segment is reached through a selector, whose main field is an index into the descriptor table that holds the descriptor.
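For IA-32, the descriptor and the selector can be sketched as C structs that follow the layout documented by Intel (bit-field ordering is compiler dependent, so this is meant as documentation rather than something to overlay directly on descriptor-table memory):

    #include <stdint.h>
    #include <stdio.h>

    struct segment_descriptor {       /* one 8-byte entry in the GDT/LDT          */
        uint16_t limit_low;           /* segment limit, bits 0..15                */
        uint16_t base_low;            /* base address, bits 0..15                 */
        uint8_t  base_mid;            /* base address, bits 16..23                */
        uint8_t  access;              /* type, S, DPL, present bit                */
        uint8_t  limit_high_flags;    /* limit bits 16..19 + G/D/L/AVL flags      */
        uint8_t  base_high;           /* base address, bits 24..31                */
    };

    struct segment_selector {         /* the 16-bit value loaded into CS/DS/...   */
        uint16_t rpl   : 2;           /* requested privilege level                */
        uint16_t ti    : 1;           /* table indicator: 0 = GDT, 1 = LDT        */
        uint16_t index : 13;          /* index into the descriptor table          */
    };

    int main(void) {
        printf("descriptor entry: %zu bytes\n", sizeof(struct segment_descriptor)); /* 8 */
        return 0;
    }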
WHAT IS A LINKER?
A linker or link editor is a computer utility program that takes one or more object files generated
by a compiler and combines them into a single executable file, library file, or another 'object' file.
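A tiny illustration of what the linker resolves: main.c references a symbol that is defined in square.c, the compiler leaves an unresolved reference in main.o, and the linker patches it when combining the object files (the file names and cc commands below are just examples):

    /*  cc -c square.c            -> square.o
        cc -c main.c              -> main.o
        cc main.o square.o -o app -> single executable               */

    /* --- square.c --- */
    int square(int x) {               /* defines the symbol "square"            */
        return x * x;
    }

    /* --- main.c --- */
    #include <stdio.h>
    extern int square(int x);         /* declared here, resolved by the linker  */

    int main(void) {
        printf("%d\n", square(6));    /* 36 */
        return 0;
    }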