Computer Architecture Unit 1
1. Input unit
The input unit provides external data sources to the computer system; it therefore
connects the external environment to the computer. It receives information from input
devices, translates it into machine language, and then feeds it into the computer system.
The keyboard and mouse are the most commonly used input devices, and each has a
corresponding hardware driver that allows it to work in sync with the rest of the
computer architecture.
2. Output unit
The output unit delivers the results of the computer's processing to the user. Most of the
output data comprises music, graphics, or video. A computer architecture's output devices
include the display, printer, speakers, headphones, etc.
To play an MP3 file, for instance, the system reads an array of numbers from the disk into
memory. The computer architecture manipulates these numbers to convert compressed audio
data to uncompressed audio data and then outputs the resulting set of numbers
(the uncompressed audio) to the audio chips. The chip then makes it user-ready through the
output unit and associated peripherals.
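A minimal sketch of that data flow (the file name, the decode_mp3 helper, and the output device path are hypothetical stand-ins for illustration, not a real decoder or driver API):

def decode_mp3(compressed: bytes) -> bytes:
    """Placeholder: a real decoder would expand compressed frames to PCM samples here."""
    return compressed

with open("song.mp3", "rb") as f:            # read the array of numbers from disk into memory
    compressed = f.read()

pcm = decode_mp3(compressed)                 # convert compressed audio data to uncompressed

with open("/dev/audio_out", "wb") as dev:    # hand the uncompressed samples to the audio chip
    dev.write(pcm)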
3. Storage unit/memory
The storage unit comprises the computer components used to store data. It is
typically divided into primary storage and secondary storage.
Objective:
Determine the most frequent case.
Determine how much improvement in performance is possible by making it faster.
Amdahl's Law can be used to quantify the latter, given that we have information
concerning the former.
Quantitative Computer Design
Amdahl's Law:
o The performance improvement to be gained from using some
faster mode of execution is limited by the fraction of the time the
faster mode can be used.
o Two factors:
Fraction enhanced: the fraction of the compute time in the original machine that can be
converted to take advantage of the enhancement. It is always <= 1.
Speedup enhanced: the improvement gained by the enhanced execution mode; that is, the
time of the original mode divided by the time of the enhanced mode. It is always > 1.
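Combining the two factors gives the overall speedup (the standard statement of Amdahl's Law; the numbers in the example below are illustrative):

Execution time_new = Execution time_old * ( (1 - Fraction_enhanced) + Fraction_enhanced / Speedup_enhanced )

Speedup_overall = Execution time_old / Execution time_new
                = 1 / ( (1 - Fraction_enhanced) + Fraction_enhanced / Speedup_enhanced )

For example, if an enhancement can be applied to 40% of the execution time and makes that part 10 times faster, the overall speedup is 1 / (0.6 + 0.4/10) = 1 / 0.64, roughly 1.56.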
Key advantage: it is often possible to measure the constituent parts of the CPU
performance equation, unlike the components of Amdahl's equation.
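For reference, the CPU performance equation referred to here is:

CPU time = Instruction count * CPI * Clock cycle time
         = (Instruction count * CPI) / Clock rate

where CPI is the average number of clock cycles per instruction. Each factor can be measured or estimated independently, which is what makes the equation usable in practice.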
Fallacies and Pitfalls
MIPS (million instructions per second) is NOT a valid alternative metric to execution time,
despite the common implication that the bigger the MIPS rating, the faster the machine.
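MIPS is defined as

MIPS = Instruction count / (Execution time * 10^6) = Clock rate / (CPI * 10^6)

It depends on the instruction set and the instruction mix, so it can move opposite to performance. The sketch below uses made-up numbers for the same task compiled two ways on one machine; none of the figures come from a real measurement.

def mips(instruction_count, execution_time_s):
    return instruction_count / (execution_time_s * 1e6)

# Version A: many simple instructions.
a_instructions, a_time = 10_000_000, 0.050   # 10 M instructions, 50 ms
# Version B: fewer, more complex instructions, yet the task finishes sooner.
b_instructions, b_time = 5_000_000, 0.040    # 5 M instructions, 40 ms

print(f"A: {mips(a_instructions, a_time):.0f} MIPS in {a_time * 1000:.0f} ms")  # 200 MIPS, 50 ms
print(f"B: {mips(b_instructions, b_time):.0f} MIPS in {b_time * 1000:.0f} ms")  # 125 MIPS, 40 ms
# A posts the higher MIPS rating, but B completes the task faster -- MIPS is not time.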
Benchmark programs should be derived from how actual applications will execute. However,
performance is often the result of the combined characteristics of a given computer
architecture and its system software/hardware components, not just the microprocessor. Other
factors such as the operating system, compilers, libraries, memory design, and I/O subsystem
characteristics may also affect the results and make comparisons difficult.
1. The speed measure - which measures how fast a computer completes a single task. For
example, the SPECint95 is used for comparing the ability of a computer to complete
single tasks.
2. The throughput measure - which measures how many tasks a computer can complete in
a certain amount of time. The SPECint_rate95 measures the rate at which a machine carries
out a number of tasks.
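A small sketch of the difference between the two measures, using invented timings (the machines and numbers below are hypothetical):

# Invented timings: machine A finishes a single task quickly; machine B is slower
# per task but runs several copies of the task at once.
single_task_time_a = 10.0    # seconds per task on machine A
single_task_time_b = 15.0    # seconds per task on machine B
copies_in_flight_b = 4       # machine B executes 4 copies concurrently

speed_a = 1 / single_task_time_a                        # tasks per second, one at a time
throughput_b = copies_in_flight_b / single_task_time_b  # tasks per second overall

print(f"A: {speed_a:.2f} tasks/s (speed), B: {throughput_b:.2f} tasks/s (throughput)")
# A wins on the speed measure; B wins on the throughput measure.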
There are three important guidelines to remember when interpreting benchmark results:
1. Be aware of what is being measured. When making critical purchasing decisions based on
results from standard benchmarks, it is very important to know what is actually being
measured. Without knowing, it is difficult to tell whether the measurements obtained are even
relevant to the applications that will run on the system being purchased. A question to
consider: does the benchmark measure the overall performance of the system or just
components of the system, such as the CPU or memory?
2. Representativeness is key. How close is the benchmark to the actual application being
executed? The closer it is, the better it will be at predicting performance. For example, a
component-level benchmark would not be a good predictor of performance for an application
that uses the entire system. Likewise, application benchmarks are the most accurate
predictors of performance for individual applications.
3. Avoid single-measure metrics. Application performance should not be measured with just a
single number. No single numerical measurement can completely describe the performance of
a complex device like the CPU or the entire system. Also, try to avoid benchmarks that average
several results into a single measurement. Important information may be lost in average values.
Try to evaluate all the results from different benchmarks that are relevant to the application.
This may give a more accurate picture than evaluating the results from one benchmark alone.
There are some points to remember when reporting results obtained from running
benchmarks.
Use the newer version over the older one. If an updated and revised version of a benchmark
suite is available, it is usually preferred over the outdated one. Generally there are good
reasons for revising the original. They include, but are not limited to, changes in technology,
improvements in compiler efficiency, etc.
Use all programs in a suite. There may be legitimate reasons why only a subset was
used, but they should be explained. Otherwise, someone looking at the results may
become suspicious as to why the other programs were not considered. Explain the
selection process, why it was not arbitrary, and why it was useful to do so.
Report compilation mode. The compilation mode that was used is important and should
be reported in every case. The effect of a certain new hardware feature may be
dependent on whether it is applied to optimized or unoptimized programs.
Use a variety of benchmarks when reporting performance. Generally it is a good idea to
use another set of programs as additional test cases. One set of benchmarks may behave
differently than another set, and such observations may be useful for the next round of
benchmark selection.
List all factors affecting performance. Provide enough information about the performance
measurements to allow readers to duplicate the results. This includes:
1. program input
2. version of the program
3. version of compiler
4. optimizing level of compiled code
5. version of operating system
6. amount of main memory
7. number and types of disks
8. version of the CPU
What is Pipelining?
Pipelining is the process of accumulating instructions from the processor through
a pipeline. It allows storing and executing instructions in an orderly
process. It is also known as pipeline processing.
A pipeline system is like a modern-day assembly line in a factory. For
example, in a car manufacturing plant, a huge assembly line is set up and, at
each point, robotic arms perform a certain task; the car then
moves on to the next arm.
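A small sketch of that overlap, assuming a classic 5-stage instruction pipeline (IF, ID, EX, MEM, WB) purely for illustration; the stage names are not taken from this text:

# Assumed 5-stage pipeline (IF, ID, EX, MEM, WB) used only as an example.
STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def pipeline_diagram(num_instructions: int) -> None:
    """Print which stage each instruction occupies in each clock cycle (no stalls)."""
    total_cycles = num_instructions + len(STAGES) - 1
    for i in range(num_instructions):
        row = []
        for cycle in range(total_cycles):
            s = cycle - i
            row.append(STAGES[s] if 0 <= s < len(STAGES) else ".")
        print(f"I{i + 1}: " + " ".join(f"{x:>3}" for x in row))

pipeline_diagram(4)
# As on the assembly line, a new instruction enters the pipe every cycle,
# so 4 instructions finish in 8 cycles instead of the 4 * 5 = 20 cycles
# they would need one after another.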
Types of Pipeline
It is divided into 2 categories:
1. Arithmetic Pipeline
2. Instruction Pipeline
Arithmetic Pipeline
Arithmetic pipelines are usually found in most computers. They are used for
floating point operations, multiplication of fixed point numbers, etc. For
example, the inputs to the Floating Point Adder pipeline are:
X = A * 2^a
Y = B * 2^b
Here A and B are the mantissas and a and b are the exponents. The addition is carried out in
four segments: compare the exponents, align the mantissas, add or subtract the mantissas,
and normalize the result. Registers are used for storing the intermediate results between the
above operations.
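A compact software model of those four segments (for illustration only; in real hardware the intermediate results sit in latch registers between segments):

def fp_adder_pipeline(A: float, a: int, B: float, b: int):
    """Model of the 4-segment floating-point adder for X = A*2^a and Y = B*2^b."""
    # Segment 1: compare the exponents.
    diff = a - b
    # Segment 2: align the mantissa belonging to the smaller exponent.
    if diff >= 0:
        B, b = B / (2 ** diff), a
    else:
        A, a = A / (2 ** (-diff)), b
    # Segment 3: add (or subtract) the aligned mantissas.
    mantissa, exponent = A + B, a
    # Segment 4: normalize so the mantissa lies in [0.5, 1).
    while abs(mantissa) >= 1:
        mantissa, exponent = mantissa / 2, exponent + 1
    while 0 < abs(mantissa) < 0.5:
        mantissa, exponent = mantissa * 2, exponent - 1
    return mantissa, exponent

# X = 0.75 * 2^3 (= 6.0) plus Y = 0.5 * 2^2 (= 2.0) gives 0.5 * 2^4 (= 8.0).
print(fp_adder_pipeline(0.75, 3, 0.5, 2))   # -> (0.5, 4)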
Instruction Pipeline
An instruction pipeline overlaps the fetch, decode, and execute phases of successive
instructions. This overlapped execution can be disrupted by three types of pipeline hazards:
1. Structural
2. Data
3. Control
Structural Hazard
Hardware resource conflicts among the instructions in the pipeline cause structural hazards.
Memory, a general-purpose register (GPR), or an ALU might be the resource in question. When
more than one instruction in the pipe requires access to the same resource in the same clock
cycle, a resource conflict is said to arise. In an overlapping pipelined execution, this is a
circumstance where the hardware cannot handle all potential combinations of instructions.
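A small sketch of the classic case, assuming a single memory port shared by instruction fetch and data access (the 5-stage pipeline is again only an illustration):

# Assumed 5-stage pipeline with ONE memory port shared by IF (fetch) and MEM (data access).
STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def stage_of(instr: int, cycle: int) -> str:
    """Stage an instruction occupies in a given cycle, ignoring stalls."""
    s = cycle - instr
    return STAGES[s] if 0 <= s < len(STAGES) else "--"

for cycle in range(8):
    needs_memory = [i for i in range(5) if stage_of(i, cycle) in ("IF", "MEM")]
    if len(needs_memory) > 1:
        # Two instructions need the single memory port in the same cycle:
        # a structural hazard, so one of them must stall.
        print(f"cycle {cycle}: instructions {needs_memory} both need the memory port")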
Control Hazards
Branch hazards are caused by branch instructions and are known as control hazards in
computer architecture. The flow of program/instruction execution is controlled by branch
instructions. Remember that conditional statements are used in higher-level languages for
iterative loops and condition testing (correlate with while, for, and if statements). These
are converted into one of the BRANCH instruction variations. As a result, when the decision to
execute one instruction depends on the result of another instruction, such as a conditional
branch that examines the condition's resulting value, a control hazard develops.
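A rough feel for the cost of control hazards, using the usual stall-cycle accounting with made-up numbers (the branch frequency and penalty below are illustrative, not measured):

base_cpi = 1.0            # ideal pipelined CPI: one instruction completes per cycle
branch_frequency = 0.20   # assume 20% of executed instructions are branches (invented)
branch_penalty = 2        # assume each branch costs 2 stall cycles (invented)

effective_cpi = base_cpi + branch_frequency * branch_penalty
print(f"effective CPI = {effective_cpi}")   # 1.4 -- control hazards alone add 40% here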
Data Hazards
Data hazards occur when an instruction depends on the result of a previous instruction and
that result has not yet been computed. Whenever two different instructions use the same
storage location, that location must appear as if it is accessed in sequential (program) order.
There are four types of data dependencies: Read after Write (RAW), Write after Read (WAR),
Write after Write (WAW), and Read after Read (RAR). These are illustrated below.
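A register-level illustration of the dependencies (Python assignments stand in for instructions; r1..r9 are just variable names, and RAR, which involves no writes, causes no hazard):

# r1..r9 stand in for registers; each assignment stands in for one instruction.
r1, r3, r5, r6, r7 = 1, 2, 3, 4, 5

# RAW (read after write, "true" dependence): I2 reads r2, which I1 writes.
r2 = r1 + r3        # I1: writes r2
r4 = r2 + r5        # I2: must wait until I1's result is available

# WAR (write after read, "anti" dependence): I4 writes r5, which I3 still reads.
r8 = r5 + r6        # I3: reads r5
r5 = r7 + r1        # I4: must not overwrite r5 before I3 has read it

# WAW (write after write, "output" dependence): both write r9; I6's value must survive.
r9 = r1 + r3        # I5: writes r9
r9 = r6 + r7        # I6: also writes r9 -- the final value must be I6's

# RAR (read after read) involves no writes, so it causes no hazard.
print(r4, r8, r9)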