CC-04 Unit1
Architecture
Computer registers
Bus system
Instruction set
Timing and control
Instruction cycle
Memory reference
Input-output and interrupt
Interconnection Structures
Bus Interconnection design of basic computer
#Computer Registers-
Computer registers are small, high-speed storage locations within the central
processing unit (CPU) of a computer. They are used to hold data temporarily
during processing operations. Registers play a crucial role in the execution of
instructions and in managing data flow within the CPU.
1. Speed: Registers are the fastest storage elements in a computer system, with
access times measured in nanoseconds. This makes them ideal for storing data
that the CPU needs to access frequently and quickly.
2. Size: Registers are typically small in size, usually holding only a single data item
such as a binary number or an address. The size of a register is determined by
the computer architecture and can vary depending on the specific CPU design.
3. Types: Registers serve different purposes within the CPU. Some registers hold
data operands for arithmetic and logic operations, while others hold memory
addresses or control information. Common types of registers include:
- General-Purpose Registers: Used for storing data temporarily during
arithmetic and logic operations.
- Instruction Register (IR): Holds the currently executing instruction fetched
from memory.
- Program Counter (PC): Keeps track of the memory address of the next
instruction to be fetched.
- Memory Address Register (MAR): Holds the memory address of data to be
fetched or stored.
- Memory Data Register (MDR): Holds the data being transferred to or from
the memory.
- Status Register/Flags Register: Stores status flags indicating the outcome of
arithmetic/logic operations (e.g., zero flag, carry flag).
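The cooperation between these registers during a fetch can be sketched in code. The following is a minimal illustration, not a real ISA: the register names follow the list above, and the register transfers `MAR <- PC`, `MDR <- M[MAR]`, `IR <- MDR`, `PC <- PC + 1` are the standard fetch sequence.

```python
# Minimal sketch (illustrative, not a real ISA): the CPU's key registers
# modeled as plain fields, showing how they cooperate during one fetch.

class Registers:
    def __init__(self):
        self.pc = 0      # Program Counter: address of the next instruction
        self.ir = 0      # Instruction Register: instruction being executed
        self.mar = 0     # Memory Address Register: address to read/write
        self.mdr = 0     # Memory Data Register: data moving to/from memory
        self.acc = 0     # a general-purpose accumulator
        self.flags = {"zero": False, "carry": False}

def fetch(regs, memory):
    """One fetch: MAR <- PC, MDR <- M[MAR], IR <- MDR, PC <- PC + 1."""
    regs.mar = regs.pc
    regs.mdr = memory[regs.mar]
    regs.ir = regs.mdr
    regs.pc += 1

memory = [0x1A, 0x2B, 0x3C]   # toy instruction words
regs = Registers()
fetch(regs, memory)
print(hex(regs.ir), regs.pc)  # first word is now in IR, PC has advanced
```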
4. Control: Registers play a crucial role in the control flow of the CPU. They store
information necessary for executing instructions, managing interrupts, and
coordinating the operation of various CPU components.
Computer registers are small, high-speed storage locations within the CPU used
for temporarily holding data and control information during processing
operations. They are essential for efficient execution of instructions and data
manipulation within a computer system.
#Bus System-
In computer architecture, a bus system refers to a communication system that
enables data transfer between various components within a computer system.
It consists of a set of parallel conductors or wires that carry electrical signals
representing binary data, control signals, and addresses.
3. Parallelism: Buses often consist of multiple parallel lines, allowing several bits
of data to be transmitted simultaneously. For example, a 32-bit bus can transmit
32 bits of data in parallel, making data transfer more efficient compared to serial
communication.
4. Bus Width: Refers to the number of parallel lines in the bus, which determines
the amount of data that can be transferred in a single bus operation. A wider bus
allows for higher data transfer rates and larger data payloads.
5. Bus Arbitration: In systems with multiple bus masters (e.g., CPU and I/O
devices), bus arbitration protocols are used to manage access to the bus. This
ensures that only one device can control the bus at a time, preventing conflicts
and data corruption.
#Instruction Set-
2. Instruction Format: Each instruction in the set has a specific format that
specifies the operation to be performed and any associated operands or
parameters. The format typically includes fields for the opcode (operation code),
addressing mode, and operand(s).
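The opcode / addressing-mode / operand fields can be packed into a fixed-width instruction word with simple bit operations. The sketch below assumes a hypothetical 16-bit format similar to that of a basic computer: bit 15 is the addressing-mode (indirect) bit, bits 14-12 hold the opcode, and bits 11-0 hold the address field.

```python
# Encoding/decoding a 16-bit instruction word in a hypothetical format:
# bit 15 = addressing-mode (indirect) bit, bits 14-12 = opcode,
# bits 11-0 = address field.

def encode(indirect, opcode, address):
    return (indirect << 15) | (opcode << 12) | (address & 0xFFF)

def decode(word):
    indirect = (word >> 15) & 0x1
    opcode   = (word >> 12) & 0x7
    address  = word & 0xFFF
    return indirect, opcode, address

word = encode(indirect=1, opcode=0b010, address=0x135)
print(decode(word))   # prints (1, 2, 309), i.e. indirect, opcode 2, address 0x135
```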
While these two architectures have been historically distinct, the boundary
between them has blurred over time with advancements in technology and
design. Many modern processors incorporate elements of both RISC and CISC
architectures, and the distinction is not as clear-cut as it once was. For example,
many processors today employ a RISC core with additional hardware features
and microcode to support CISC-like instructions for compatibility with legacy
software.
Instruction Types
Instructions can be classified by the number of address fields (operands) they specify.
Note:- The arrangement of a computer’s registers determines the different
address fields in the instruction format.
1. Zero-address instructions: Name no operands explicitly; they operate on an
implicit stack (e.g., ADD pops the top two stack values and pushes their sum).
2. One-address instructions: Have one explicit operand, with an implied
accumulator as the other (e.g., ADD X computes AC ← AC + M[X]).
3. Two-address instructions: Have two operands, one of which serves as both a
source and the destination (e.g., ADD R1, R2 computes R1 ← R1 + R2).
4. Three-address instructions: Have two source operands and a separate
destination (e.g., ADD R1, R2, R3 computes R1 ← R2 + R3).
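To make the zero-address case concrete, the sketch below evaluates X = (A + B) * C on a tiny stack machine; in a three-address machine the same computation would be `ADD R1, A, B` followed by `MUL X, R1, C`. The instruction names and memory layout are illustrative only.

```python
# Sketch: evaluating X = (A + B) * C on a zero-address (stack) machine.
# Only PUSH/POP carry an address; ADD and MUL take their operands
# implicitly from the top of the stack.

def run_stack_machine(program, mem):
    stack = []
    for op, *args in program:
        if op == "PUSH":
            stack.append(mem[args[0]])
        elif op == "POP":
            mem[args[0]] = stack.pop()
        elif op == "ADD":                  # zero-address: operands implicit
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return mem

mem = {"A": 2, "B": 3, "C": 4, "X": 0}
program = [("PUSH", "A"), ("PUSH", "B"), ("ADD",),
           ("PUSH", "C"), ("MUL",), ("POP", "X")]
print(run_stack_machine(program, mem)["X"])   # (2 + 3) * 4 = 20
```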
#Instruction Cycle-
In computer architecture, the instruction cycle, also known as the fetch-
decode-execute cycle, is the fundamental process by which a computer
processor executes instructions. It consists of several distinct steps that
are repeated for each instruction in a program.
1. Fetch:
- The CPU reads the next instruction from memory at the address held in the
program counter (PC), loads it into the instruction register (IR), and
increments the PC.
2. Decode:
- The control unit interprets the opcode and addressing mode in the IR to
determine which operation to perform and where its operands are located.
3. Execute:
- The CPU carries out the operation, e.g. an ALU computation, a memory
access, or a branch.
4. Write Back:
- If the instruction resulted in a value that needs to be stored, such as
the result of an arithmetic operation, it is written back to the appropriate
destination.
- Control may then return to the fetch stage to begin fetching the next
instruction.
• Operands are fetched from the memory. If there is more than one operand,
then the operand fetching process may be repeated (i.e. address calculation
and fetching operands).
• After this, the data operation is performed on the operands, and a result is
generated.
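The cycle above can be sketched as a loop. The following toy interpreter assumes a hypothetical three-instruction ISA (LOAD, ADD, HALT) with a single accumulator; it exists only to show the repeating fetch-decode-execute structure, not any real machine.

```python
# A toy fetch-decode-execute loop over a hypothetical 3-instruction ISA.
# Each instruction is a (opcode, operand) pair; the loop repeats the
# cycle until HALT.

def run(program):
    pc, acc = 0, 0
    while True:
        instr = program[pc]          # 1. fetch the instruction at PC
        pc += 1                      #    and advance the PC
        op, arg = instr              # 2. decode opcode and operand
        if op == "LOAD":             # 3. execute (result written to ACC)
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "HALT":
            return acc

result = run([("LOAD", 7), ("ADD", 5), ("HALT", None)])
print(result)   # 12
```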
#Memory Reference-
In computer architecture, a memory reference refers to the process of
accessing data stored in the computer's memory. This access typically involves
reading data from or writing data to specific memory locations. Memory
references are a fundamental aspect of program execution and data
manipulation in a computer system.
#Input-Output and Interrupt-
2. Interrupts:
- Interrupts are asynchronous signals generated by hardware devices or
software events that require immediate attention from the CPU.
- Interrupts enable the CPU to respond to external events and handle time-
critical tasks efficiently without continuously polling devices or waiting for
events to occur.
- When an interrupt occurs, the CPU temporarily suspends its current
execution and transfers control to an interrupt handler or interrupt service
routine (ISR) associated with the interrupt source.
- The interrupt handler executes the necessary operations to handle the
interrupt, such as servicing the I/O device, updating system status, or
performing context switching.
- Interrupts are prioritized based on their urgency and importance, allowing
the CPU to handle multiple concurrent interrupts in a predefined order.
- Interrupts can be classified into various types, including hardware interrupts
(e.g., I/O interrupts, timer interrupts) and software interrupts (e.g., system
calls, software exceptions).
- Interrupt handling involves saving the CPU's current state, including the
program counter and processor registers, before executing the interrupt
handler. Once the interrupt is serviced, the CPU restores its previous state and
resumes normal execution.
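The save-state / run-ISR / restore-state pattern, with priority ordering, can be sketched as follows. All names (the toy ISA, the handler table, the priority convention that a lower IRQ number wins) are illustrative assumptions, not a real architecture.

```python
# Sketch: servicing interrupts at instruction boundaries. The CPU finishes
# the current instruction, checks pending interrupts, saves its state,
# runs the matching ISR (lower IRQ number = higher priority), restores
# state, and resumes.

def run_with_interrupts(program, interrupts, handlers):
    pc, acc, log = 0, 0, []
    while pc < len(program):
        # check for pending interrupts at this instruction boundary
        for irq in sorted(interrupts.pop(pc, [])):
            saved = (pc, acc)            # save CPU state (PC, registers)
            log.append(handlers[irq]())  # transfer control to the ISR
            pc, acc = saved              # restore state and resume
        op, arg = program[pc]
        pc += 1
        if op == "ADD":
            acc += arg
    return acc, log

handlers = {0: lambda: "timer serviced", 1: lambda: "disk serviced"}
program = [("ADD", 1), ("ADD", 2), ("ADD", 3)]
acc, log = run_with_interrupts(program, {1: [1, 0]}, handlers)
print(acc, log)   # 6 ['timer serviced', 'disk serviced']
```

Note how the two interrupts pending before the second instruction are serviced in priority order (timer before disk), while the interrupted program still computes the same final result.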
Interrupt Cycle
In computer architecture, the interrupt cycle refers to the sequence of events
that occur when the CPU receives and handles an interrupt signal from an
external device or an internal source. The interrupt cycle enables the CPU to
respond to asynchronous events promptly without delaying the execution of
ongoing tasks.
3. Interrupt Handling:
- Upon detecting an interrupt signal, the CPU transfers control to a predefined
interrupt handler or interrupt service routine (ISR) associated with the specific
interrupt source.
- The interrupt handler is a special software routine designed to handle the
specific event or condition that triggered the interrupt.
- The CPU saves the current execution state (e.g., program counter, processor
registers) onto the stack or in designated interrupt context storage to ensure
that the interrupted task can be resumed later.
5. Interrupt Completion:
- Once the ISR completes its execution and the interrupt request is serviced,
the CPU restores the saved execution state from the stack or interrupt context
storage.
- This includes restoring the program counter and processor registers to
resume execution of the interrupted task seamlessly.
The interrupt cycle allows the CPU to handle various types of asynchronous
events efficiently while maintaining responsiveness and multitasking
capabilities in the computer system. Proper handling of interrupts is essential
for ensuring reliable and timely operation of modern computing systems.
#Interconnection Structure-
In computer architecture, the interconnection structure refers to the
arrangement and organization of connections between various components
within a computer system. These connections facilitate communication, data
transfer, and coordination among different hardware elements, such as
processors, memory modules, input/output devices, and other peripherals. The
interconnection structure plays a crucial role in determining the system's
performance, scalability, and overall functionality.
2. Point-to-Point Interconnection:
- In point-to-point interconnection, components are connected directly to
each other using dedicated communication links.
- Each connection forms a direct path between two specific components,
allowing for high-speed, low-latency communication.
- Point-to-point interconnection is commonly used in modern architectures
for connecting processors, memory modules, and high-speed I/O devices.
- It offers greater scalability and performance compared to bus-based
interconnection but may be more complex and expensive to implement.
3. Network Interconnection:
- Network-based interconnection involves connecting components using
network protocols and communication protocols, such as Ethernet or
InfiniBand.
- Components communicate over a network using packet-switching
techniques, similar to those used in computer networks.
- Network interconnection is highly scalable and flexible, allowing for the
connection of distributed systems and clusters of computers.
- It is commonly used in large-scale parallel and distributed computing
environments, such as data centres and supercomputers.
4. Switched Interconnection:
- Switched interconnection architectures employ switches or routers to direct
data packets between multiple interconnected components.
- Switches dynamically route data packets based on destination addresses,
allowing for efficient and flexible communication.
- Switched interconnection is commonly used in high-performance computing
systems, storage area networks (SANs), and distributed computing
environments.
5. Topology:
- The interconnection structure's topology refers to the arrangement of
connections between components.
- Common topologies include bus topology, star topology, mesh topology, and
tree topology, each offering different trade-offs in terms of performance,
scalability, and fault tolerance.
The interconnection structure in computer architecture defines how
components are connected and communicate within a computer system. The
choice of interconnection architecture and topology depends on factors such
as system requirements, performance goals, and scalability.
Time Shared Bus – Interconnection structure in Multiprocessor System
Interconnection structures:
In a multiprocessor system, the processors must be able to share a set of main
memory modules & I/O devices. This sharing capability is provided through
interconnection structures. The interconnection structures that are commonly
used are as follows –
1. Time-shared / Common Bus
2. Cross bar Switch
3. Multiport Memory
4. Multistage Switching Network (Covered in 2nd part)
5. Hypercube System
Time-shared / Common Bus is covered in detail below.
1. Time-shared / Common Bus (Interconnection structure in Multiprocessor
System) :
In a multiprocessor system, the time shared bus interconnection provides a
common communication path connecting all the functional units like
processor, I/O processor, memory unit etc. The figure below shows the
multiple processors with common communication path (single bus).
To communicate with any functional unit, a processor needs the bus to transfer
data. To do so, the processor must first check whether the bus is available by
examining its status: if the bus is in use by some other functional unit, the
status is busy; otherwise it is free.
A processor can use the bus only when it is free. The sender processor puts
the address of the destination on the bus & the destination unit identifies it. To
communicate with any functional unit, a command is issued to tell that unit
what work is to be done. The other processors at that time will either be busy
with internal operations or will sit idle, waiting for the bus.
We can use a bus controller to resolve conflicts, if any. (Bus controller can set
priority of different functional units)
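The status check and single-owner rule described above can be sketched in a few lines. The `Bus` class and its method names below are illustrative only; they model the free/busy status bit and the grant/release handshake, not any particular bus protocol.

```python
# Sketch: a single shared bus with a free/busy status. Only one
# processor can hold the bus at a time; a request against a busy bus
# is denied and the processor must wait and retry.

class Bus:
    def __init__(self):
        self.owner = None          # None means status "free"

    def request(self, processor):
        if self.owner is None:     # check the bus status
            self.owner = processor
            return True            # granted: processor may transfer data
        return False               # busy: processor must wait

    def release(self, processor):
        if self.owner == processor:
            self.owner = None      # bus becomes free again

bus = Bus()
print(bus.request("P1"))   # True  (bus was free, P1 now owns it)
print(bus.request("P2"))   # False (P1 holds the bus; P2 waits)
bus.release("P1")
print(bus.request("P2"))   # True
```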
Advantages –
• Inexpensive, as no extra hardware such as a switch is required.
• Simple & easy to configure, as the functional units are directly connected
to the bus.
Disadvantages –
• The major drawback of this configuration is that if a malfunction occurs
in any of the bus interface circuits, the complete system will fail.
• Decreased throughput —
At a time, only one processor can communicate with any other
functional unit.
• Increased arbitration logic –
As the number of processors & memory units increases, the bus contention
problem increases.
To overcome the above disadvantages, we can use two uni-directional buses;
both buses are then required in a single transfer operation. Here the system
complexity increases & the reliability decreases. The solution is to use
multiple bi-directional buses.
3. Multiport Memory:
In a multiport memory system, the control, switching & priority arbitration
logic that a crossbar switch concentrates in its switch matrix are instead
distributed at the interfaces to the memory modules.
4. Hypercube Interconnection:
This is a binary n-cube architecture connecting 2^n processors, where each
processor forms a node of the cube. A node can also be a memory module or an
I/O interface, not necessarily a processor. The processor at a node has direct
communication paths to n other nodes (2^n nodes in total). There are 2^n
distinct n-bit binary addresses, and two nodes are directly connected if their
addresses differ in exactly one bit.
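The one-bit-difference rule makes the neighbors of any node easy to compute: flip each of the n address bits in turn. A short sketch of this (purely illustrative):

```python
# Sketch: in an n-cube, node addresses are n-bit numbers and two nodes
# are neighbors iff their addresses differ in exactly one bit, so the
# neighbors of a node are found by flipping each of its n address bits.

def neighbors(node, n):
    return [node ^ (1 << bit) for bit in range(n)]

# 3-cube (n = 3): 2**3 = 8 nodes, each with 3 direct neighbors.
print(neighbors(0b000, 3))   # [1, 2, 4]
print(neighbors(0b101, 3))   # [4, 7, 1]
```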
Comparison – Time-shared Bus vs Crossbar Switch vs Multiport Memory:
1. Cost: The time-shared bus has the lowest hardware cost & is least complex.
The crossbar switch is cost-effective for multiprocessors only, as a basic
switching matrix is required to assemble the functional units. The multiport
memory is expensive, as most of the control & switching circuitry is in the
memory unit.
5. Simultaneous transfers: With a time-shared bus we cannot have transfers
from all memory modules simultaneously. With a crossbar switch we can have
transfers from all memory modules simultaneously. With multiport memory we
can also have transfers from all memory modules simultaneously.
Conclusion:
The interconnection structure can decide the overall system's performance in a
multiprocessor environment. Although the common bus system is easy &
simple, the availability of only one path is its major drawback; if the bus
fails, the whole system fails. To overcome this & improve overall performance,
the crossbar, multiport, hypercube & multistage switching network evolved.
#Bus Interconnection Design of Basic Computer-
2. Address Bus:
- The address bus is another set of parallel wires used by the CPU to specify
memory addresses or I/O device addresses for data transfer.
- It carries binary addresses that identify the locations in memory or I/O
space where data is to be read from or written to.
- The width of the address bus determines the maximum number of memory
locations or I/O addresses that can be addressed. For example, a 16-bit address
bus can address up to 64 KB of memory (2^16 = 65,536 bytes).
- The address bus is unidirectional, as addresses are generated by the CPU
and sent to memory or I/O devices.
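The address-space arithmetic above generalizes directly: with w address lines the CPU can address 2^w distinct locations. A one-line check:

```python
# With w address lines, 2**w distinct locations can be addressed.

def addressable_bytes(width_bits):
    return 2 ** width_bits

print(addressable_bytes(16))   # 65536 bytes, i.e. 64 KB
print(addressable_bytes(32))   # 4294967296 bytes, i.e. 4 GB
```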
3. Control Bus:
- The control bus consists of various control signals used to coordinate and
control the operations of the CPU, memory, and I/O devices.
- It carries signals such as read/write signals, memory enable signals,
interrupt signals, clock signals, and bus control signals.
- The control bus signals indicate the type of operation to be performed (e.g.,
read or write), the direction of data transfer, and the timing of operations.
- The control bus ensures proper synchronization and coordination of
activities within the computer system.
4. Bus Arbitration:
- In systems with multiple bus masters (e.g., CPU and I/O devices), bus
arbitration mechanisms are used to manage access to the bus.
- Bus arbitration determines which device has control of the bus at any given
time, preventing conflicts and ensuring that data transfer occurs smoothly.
- Common bus arbitration schemes include priority-based arbitration, round-
robin arbitration, and centralized arbitration.
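Round-robin arbitration, one of the schemes named above, can be sketched as follows. The function name and the representation of requests as a set of device ids are assumptions for illustration; the core idea is that among the devices currently requesting the bus, the grant goes to the one closest (cyclically) after the device served last, so no requester is starved.

```python
# Sketch of round-robin bus arbitration: grant the bus to the requesting
# device that comes soonest (cyclically) after the last device served.

def round_robin_grant(requests, n_devices, last_granted):
    """requests: set of requesting device ids; returns granted id or None."""
    for offset in range(1, n_devices + 1):
        candidate = (last_granted + offset) % n_devices
        if candidate in requests:
            return candidate
    return None   # no device is requesting the bus

print(round_robin_grant({0, 2, 3}, 4, last_granted=0))   # 2
print(round_robin_grant({0, 2, 3}, 4, last_granted=2))   # 3
print(round_robin_grant({0, 2, 3}, 4, last_granted=3))   # 0
```

Under a priority-based scheme, by contrast, the grant would always go to the lowest-numbered (or otherwise highest-priority) requester, which is simpler but can starve low-priority devices.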
Overall, the bus interconnection design of a basic computer facilitates the
exchange of data, addresses, and control signals between the CPU, memory,
and I/O devices. It ensures efficient communication and coordination of
activities within the system, enabling the computer to execute instructions and
perform tasks effectively.