CC-04 Unit 1


Computer System Architecture

Unit 1: Basic Computer Organization and Design

Computer registers
Bus system
Instruction set
Timing and control
Instruction cycle
Memory reference
Input-output and interrupt
Interconnection Structures
Bus Interconnection design of basic computer
#Computer Registers-
Computer registers are small, high-speed storage locations within the central
processing unit (CPU) of a computer. They are used to hold data temporarily
during processing operations. Registers play a crucial role in the execution of
instructions and in managing data flow within the CPU.

Here are some key points about computer registers:

1. Speed: Registers are the fastest storage elements in a computer system, with
access times measured in nanoseconds. This makes them ideal for storing data
that the CPU needs to access frequently and quickly.

2. Size: Registers are typically small in size, usually holding only a single data item
such as a binary number or an address. The size of a register is determined by
the computer architecture and can vary depending on the specific CPU design.

3. Types: Registers serve different purposes within the CPU. Some registers hold
data operands for arithmetic and logic operations, while others hold memory
addresses or control information. Common types of registers include:
- General-Purpose Registers: Used for storing data temporarily during
arithmetic and logic operations.
- Instruction Register (IR): Holds the currently executing instruction fetched
from memory.
- Program Counter (PC): Keeps track of the memory address of the next
instruction to be fetched.
- Memory Address Register (MAR): Holds the memory address of data to be
fetched or stored.
- Memory Data Register (MDR): Holds the data being transferred to or from
the memory.
- Status Register/Flags Register: Stores status flags indicating the outcome of
arithmetic/logic operations (e.g., zero flag, carry flag).

4. Data Transfer: Registers facilitate the transfer of data between different
components of the CPU and between the CPU and main memory. Data is
typically loaded into registers from memory before processing and then
transferred back to memory after processing is complete.

5. Control: Registers play a crucial role in the control flow of the CPU. They store
information necessary for executing instructions, managing interrupts, and
coordinating the operation of various CPU components.

Computer registers are small, high-speed storage locations within the CPU used
for temporarily holding data and control information during processing
operations. They are essential for efficient execution of instructions and data
manipulation within a computer system.
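As an illustration of how these registers interact, here is a minimal Python sketch of a single instruction fetch; the memory contents and the register model are invented for illustration, not taken from any real machine:

```python
# Illustrative sketch (not a real ISA): how the PC, MAR, MDR, and IR
# cooperate during one instruction fetch.

memory = {0: "LOAD 100", 1: "ADD 101", 2: "STORE 102"}  # toy instruction memory

pc = 0          # Program Counter: address of the next instruction
mar = None      # Memory Address Register: address sent to memory
mdr = None      # Memory Data Register: data returned by memory
ir = None       # Instruction Register: instruction being executed

def fetch():
    """One fetch step: PC -> MAR -> memory -> MDR -> IR, then PC + 1."""
    global pc, mar, mdr, ir
    mar = pc            # place the address on the address path
    mdr = memory[mar]   # memory returns the word at that address
    ir = mdr            # latch the instruction for decoding
    pc += 1             # point at the next instruction

fetch()
print(ir)  # LOAD 100
```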

#Bus System-
In computer architecture, a bus system refers to a communication system that
enables data transfer between various components within a computer system.
It consists of a set of parallel conductors or wires that carry electrical signals
representing binary data, control signals, and addresses.

Here's a precise breakdown:


1. Data Transfer: The primary purpose of a bus system is to facilitate the transfer
of data between different components of the computer, such as the central
processing unit (CPU), memory modules, input/output (I/O) devices, and other
peripherals. Data travels along the bus in binary form, typically organized into
bytes or words.
2. Types of Buses:
- Data Bus: Carries data between the CPU, memory, and I/O devices. It is
bidirectional, allowing data to flow in both directions.
- Address Bus: Transmits memory addresses generated by the CPU to select
specific locations in memory for read or write operations.
- Control Bus: Carries control signals that coordinate the activities of various
components, including signals for memory read/write operations, interrupts,
and bus arbitration.

3. Parallelism: Buses often consist of multiple parallel lines, allowing several bits
of data to be transmitted simultaneously. For example, a 32-bit bus can transmit
32 bits of data in parallel, making data transfer more efficient compared to serial
communication.

4. Bus Width: Refers to the number of parallel lines in the bus, which determines
the amount of data that can be transferred in a single bus operation. A wider bus
allows for higher data transfer rates and larger data payloads.

5. Bus Arbitration: In systems with multiple bus masters (e.g., CPU and I/O
devices), bus arbitration protocols are used to manage access to the bus. This
ensures that only one device can control the bus at a time, preventing conflicts
and data corruption.

6. System Performance: The design and characteristics of the bus system
significantly impact the overall performance of the computer system. Factors
such as bus width, clock speed, and protocol efficiency influence the data
transfer rate and system responsiveness.

7. Expansion Slots and Interfaces: Buses also extend externally through
expansion slots or interfaces, allowing additional components like graphics
cards, network adapters, and storage devices to connect to the system and
communicate with the CPU and memory.

A bus system in computer architecture serves as a communication backbone,
enabling the transfer of data, addresses, and control signals between different
components within a computer system. It plays a crucial role in system
performance, scalability, and overall functionality.
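To make the data/address/control split concrete, here is a toy Python model of a write and a read over a shared bus; the class names and the "READ"/"WRITE" control signals are assumptions made for this sketch, not part of any real bus protocol:

```python
# Toy model of a shared bus: one set of address, data, and control lines
# shared by the CPU and a memory module.

class Bus:
    def __init__(self):
        self.address = None   # address lines
        self.data = None      # data lines (bidirectional)
        self.control = None   # control lines: "READ" or "WRITE"

class Memory:
    def __init__(self, size=16):
        self.cells = [0] * size
    def respond(self, bus):
        # The memory watches the control lines and reacts to the address.
        if bus.control == "READ":
            bus.data = self.cells[bus.address]
        elif bus.control == "WRITE":
            self.cells[bus.address] = bus.data

bus, mem = Bus(), Memory()

# CPU writes 42 to address 5 over the bus...
bus.address, bus.data, bus.control = 5, 42, "WRITE"
mem.respond(bus)

# ...then reads it back.
bus.address, bus.control = 5, "READ"
mem.respond(bus)
print(bus.data)  # 42
```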
#Instruction Set-
In computer architecture, an instruction set refers to the collection of all the
instructions that a processor is capable of executing. These instructions dictate
the operations that the processor can perform, such as arithmetic, logic, data
movement, and control flow.

Here's a precise breakdown:


1. Operations: An instruction set defines the fundamental operations that a
processor can perform on data. These operations include arithmetic operations
(addition, subtraction, multiplication, division), logical operations (AND, OR,
NOT), data movement (load, store, move), and control flow (branching, jumping,
subroutine calls).

2. Instruction Format: Each instruction in the set has a specific format that
specifies the operation to be performed and any associated operands or
parameters. The format typically includes fields for the opcode (operation code),
addressing mode, and operand(s).

3. Addressing Modes: Instruction sets include various addressing modes that
specify how operands are located or accessed. Common addressing modes
include immediate (the operand is a constant value), direct (the operand is at a
given memory address), register (the operand is stored in a processor register),
and indirect (the instruction holds the address of a location that contains the
operand's address).
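The four addressing modes listed above can be sketched as a small operand resolver; the toy `memory` and `registers` contents below are invented purely for illustration:

```python
# Sketch of how four common addressing modes resolve an operand.

memory = {100: 7, 7: 99}      # toy memory: address -> value
registers = {"R1": 100}       # toy register file

def operand(mode, field):
    if mode == "immediate":   # the field itself is the operand
        return field
    if mode == "direct":      # the field is a memory address
        return memory[field]
    if mode == "register":    # the field names a register
        return registers[field]
    if mode == "indirect":    # the field is the address of the operand's address
        return memory[memory[field]]
    raise ValueError(mode)

print(operand("immediate", 100))   # 100
print(operand("direct", 100))      # 7
print(operand("register", "R1"))   # 100
print(operand("indirect", 100))    # 99  (memory[memory[100]] = memory[7])
```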

4. Registers: Instruction sets define the registers available in the processor and
specify how they can be used by instructions. Registers are small, fast storage
locations within the CPU used for temporary data storage and for facilitating
efficient data processing.

5. Instruction Execution: The processor's instruction execution unit interprets
and executes instructions from the instruction set. It fetches instructions from
memory, decodes them to determine the operation to be performed, retrieves
operands as necessary, executes the operation, and stores the result.

6. Instruction Pipelining: Many modern processors use instruction pipelining to
improve performance by overlapping the execution of multiple instructions.
Pipelining breaks down the execution of an instruction into several stages (fetch,
decode, execute, etc.), allowing multiple instructions to be processed
simultaneously.

7. Privileged Instructions: Some instructions in the instruction set may be
designated as privileged, meaning they can only be executed by the operating
system or privileged software. These instructions typically involve sensitive
operations such as modifying system control registers or accessing protected
resources.

The instruction set in computer architecture defines the operations, formats,
addressing modes, and registers that a processor supports, providing the
foundation for executing programs and performing computations. It plays a
central role in determining the capabilities and functionality of a processor.
RISC and CISC-
RISC (Reduced Instruction Set Computer) and CISC (Complex Instruction Set
Computer) are two contrasting design philosophies for processor architectures:

1. RISC (Reduced Instruction Set Computer):


- RISC processors have a simplified instruction set, with a small number of
instructions that are highly optimized for efficiency.
- The instructions in RISC architectures are typically simple and execute in one
clock cycle, promoting fast execution.
- RISC architectures often rely on optimizing compilers to translate higher-level
language code into efficient sequences of simple instructions.
- RISC architectures often employ a load/store architecture, where arithmetic
and logic operations only operate on data stored in registers, requiring separate
instructions to load data from memory and store results back into memory.

2. CISC (Complex Instruction Set Computer):


- CISC processors have a larger, more complex instruction set, with instructions
that can perform more complex operations and manipulate multiple operands
in a single instruction.
- CISC architectures often include specialized instructions for common tasks,
which can reduce the number of instructions needed to accomplish a given task.
- CISC architectures often support more addressing modes and provide direct
support for complex data structures, which can simplify programming in
assembly language.
- Historically, CISC architectures were designed to minimize the number of
instructions executed per task, aiming for better performance by reducing
memory access overhead.

While these two architectures have been historically distinct, the boundary
between them has blurred over time with advancements in technology and
design. Many modern processors incorporate elements of both RISC and CISC
architectures, and the distinction is not as clear-cut as it once was. For example,
many processors today employ a RISC core with additional hardware features
and microcode to support CISC-like instructions for compatibility with legacy
software.
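As a hedged illustration of the load/store contrast, here is how the same statement, `mem[X] = mem[X] + mem[Y]`, might be expressed on each style of machine; the mnemonics below are invented for illustration and do not come from any real instruction set:

```python
# Invented mnemonics, for illustration only: a CISC-like style can encode
# the whole update as one memory-to-memory instruction, while a RISC-like
# load/store style needs explicit loads, a register ALU op, and a store.

cisc_program = [
    "ADD   M[X], M[Y]      ; one complex instruction, two memory operands",
]

risc_program = [
    "LOAD  R1, M[X]        ; memory is accessed only via LOAD and STORE",
    "LOAD  R2, M[Y]",
    "ADD   R1, R1, R2      ; ALU operations work on registers only",
    "STORE R1, M[X]",
]

print(len(cisc_program), len(risc_program))  # 1 4
```

Note the trade-off the section describes: the CISC version executes fewer instructions, while each RISC instruction is simpler and easier to pipeline.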

Instruction Types
By the number of operands (addresses):
Note:- The arrangement of a computer’s registers determines the different
address fields in the instruction format.
1. Zero-address instructions: Specify no explicit operands and operate on an
implicit stack (e.g., "ADD", which pops the top two stack values and pushes
their sum).
2. One-address instructions: Specify one explicit operand, with an implicit
accumulator serving as the other source and the destination (e.g., "ADD X",
meaning AC ← AC + M[X]).
3. Two-address instructions: Specify two operands, one of which acts as both a
source and the destination (e.g., "ADD R1, R2", meaning R1 ← R1 + R2).
4. Three-address instructions: Specify two source operands and a destination
explicitly (e.g., "ADD R1, R2, R3", meaning R1 ← R2 + R3).
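The address-count formats can be compared on the same computation, C = A + B; the sketch below lists each form and then runs the zero-address version on a tiny stack machine (the mnemonics are illustrative, not a real ISA):

```python
# The same computation C = A + B with different numbers of explicit addresses:
#
# Three-address: ADD C, A, B                 (C <- A + B)
# Two-address:   MOV C, A ; ADD C, B         (destination doubles as a source)
# One-address:   LOAD A ; ADD B ; STORE C    (implicit accumulator)
# Zero-address:  PUSH A ; PUSH B ; ADD ; POP C  (implicit stack)

memory = {"A": 2, "B": 3, "C": 0}
stack = []

for op, arg in [("PUSH", "A"), ("PUSH", "B"), ("ADD", None), ("POP", "C")]:
    if op == "PUSH":
        stack.append(memory[arg])
    elif op == "ADD":              # operands are implicit: the top two stack items
        b, a = stack.pop(), stack.pop()
        stack.append(a + b)
    elif op == "POP":
        memory[arg] = stack.pop()

print(memory["C"])  # 5
```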

By the memory access:


Memory-reference instructions: Access data in memory (e.g., "load data from
memory address 200 into register 3").
Register-reference instructions: Operate on data in registers (e.g., "add the
values in register 1 and 2").
Input/output instructions: Interact with input/output devices (e.g., "read data
from the keyboard").

By the operation performed:


Arithmetic instructions: Perform arithmetic operations (e.g., addition,
subtraction, multiplication, division).
Logical instructions: Perform logical operations (e.g., AND, OR, NOT).
Data transfer instructions: Move data between memory and registers (e.g.,
load, store).
Control flow instructions: Change the sequence of execution (e.g., branch,
jump).
Special instructions: Perform specific tasks unique to the architecture (e.g.,
floating-point operations).

By the instruction length:


Fixed-length instructions: All instructions have the same size.
Variable-length instructions: Instructions can have different sizes depending on
the operation and operands.

Instruction Types – II

Instruction types define the operations that a processor can perform on data.
Here are some common instruction types:
1. Arithmetic Instructions: These instructions perform basic arithmetic
operations such as addition, subtraction, multiplication, and division. They
manipulate numerical data within the processor.
2. Logical Instructions: Logical instructions perform logical operations such as
AND, OR, XOR, and NOT. These operations are typically performed on binary
data at the bit level.
3. Data Transfer Instructions: Data transfer instructions move data between
different memory locations or between memory and CPU registers. Examples
include load (copy data from memory to register) and store (copy data from
register to memory) instructions.
4. Control Transfer Instructions: Control transfer instructions change the
sequence of execution by altering the program counter. Examples include
branch instructions (conditional and unconditional jumps) and subroutine
calls/returns.
5. Input/Output Instructions (I/O): These instructions are used to transfer data
between the CPU and external devices such as keyboards, displays, disks, and
network interfaces.
6. Comparison Instructions: Comparison instructions are used to compare two
values and set flags or registers based on the result (e.g., equal, greater than,
less than).
7. Move Instructions: These instructions move data from one location to
another without modifying the data itself. They are similar to data transfer
instructions but are used for moving data within registers or between different
parts of a register.
8. Shift and Rotate Instructions: Shift instructions move the bits of a binary
value left or right, either shifting in zeros or the sign bit. Rotate instructions
also shift bits but wrap the shifted-out bits around to the opposite end.
9. Floating-Point Instructions: Floating-point instructions perform arithmetic
operations on floating-point numbers, which represent numbers with fractional
parts or very large/small values.
10. String Instructions: These instructions are used for manipulating strings of
characters in memory, such as copying, comparing, or searching for substrings.
Note-These are just some examples, and the specific instruction types may vary
depending on the architecture and instruction set of a particular CPU. Different
processors may support additional instruction types beyond those listed here.
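Item 8 above can be made concrete with a small 8-bit shift/rotate sketch; the 8-bit width is an assumption chosen for illustration (real instruction sets shift at their native word size):

```python
# 8-bit logical shift and rotate. Python integers are unbounded,
# so a mask keeps every result within 8 bits.

WIDTH = 8
MASK = (1 << WIDTH) - 1   # 0xFF

def shl(x):               # logical shift left: a zero is shifted in
    return (x << 1) & MASK

def shr(x):               # logical shift right: a zero is shifted in
    return (x & MASK) >> 1

def rol(x):               # rotate left: the bit shifted out wraps around
    x &= MASK
    return ((x << 1) | (x >> (WIDTH - 1))) & MASK

def ror(x):               # rotate right: the bit shifted out wraps around
    x &= MASK
    return ((x >> 1) | (x << (WIDTH - 1))) & MASK

print(bin(shl(0b10000001)))  # 0b10  (the high bit is lost)
print(bin(rol(0b10000001)))  # 0b11  (the high bit wraps to bit 0)
```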

#Timing and Control-


Timing and control in computer architecture refer to the processes and
mechanisms by which the activities of various components within a computer
system are synchronized, coordinated, and managed to ensure proper
operation and execution of instructions.

Here's a concise definition for each:


1. Timing: Timing involves the regulation of the sequence and duration of
operations within the computer system. It encompasses the synchronization of
events based on a system clock signal, ensuring that actions occur at the
correct time and in the proper sequence. Timing mechanisms control the rate
at which instructions are executed, data is transferred, and signals are
processed to maintain system integrity and performance.

2. Control: Control refers to the management and coordination of activities
within the computer system. It encompasses the generation and distribution of
control signals to various components, directing their behavior and ensuring
proper interaction. Control mechanisms manage instruction execution,
memory access, input/output operations, interrupt handling, and overall
system operation to execute programs, handle data, and respond to external
events effectively.

Timing and control mechanisms in computer architecture govern the
synchronization and coordination of system activities to ensure accurate and
efficient operation, enabling the computer system to execute instructions,
process data, and interact with peripherals reliably.

#Instruction Cycle-
In computer architecture, the instruction cycle, also known as the
fetch-decode-execute cycle, is the fundamental process by which a computer
processor executes instructions. It consists of several distinct steps that
are repeated for each instruction in a program.

Here's a breakdown of each step in the instruction cycle:

1. Fetch:
- The processor fetches the next instruction from memory.
- The address of the next instruction to be fetched is usually stored in a
special register called the program counter (PC).
- The instruction is retrieved from the memory location pointed to by the
program counter.
- The fetched instruction is loaded into a special register called the
instruction register (IR).

2. Decode:
- The fetched instruction is decoded to determine the operation to be
performed and the operands involved.
- The opcode (operation code) portion of the instruction is examined to
identify the type of operation.
- The addressing mode of the instruction is determined to ascertain how the
operands will be accessed.

3. Execute:
- The processor performs the operation specified by the decoded instruction.
- This may involve arithmetic or logic computations, data movement between
registers or memory, or control flow changes (e.g., branching or jumping).
- If necessary, the processor retrieves operands from memory or registers and
performs the operation.
- The result of the operation is stored in the appropriate destination, such
as a register or memory location.

4. Write Back:
- If the instruction produced a value that needs to be stored, such as the
result of an arithmetic operation, it is written back to the appropriate
destination.
- The processor updates any necessary status flags or registers to reflect
the outcome of the operation.
- Control then returns to the fetch stage to begin fetching the next
instruction.

The instruction cycle repeats continuously, with the processor fetching,
decoding, executing, and writing back instructions as long as the program is
running. Each cycle advances the program counter to fetch the next instruction
in sequence, allowing the program to progress. The efficiency of the
instruction cycle greatly influences the performance of the processor and the
overall speed of program execution.
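The fetch-decode-execute cycle can be sketched as a toy simulator for an invented accumulator-style machine; LOAD/ADD/STORE/HALT are illustrative mnemonics, not a real instruction set:

```python
# Minimal fetch-decode-execute loop for a toy accumulator machine.

program = [
    ("LOAD", 10),    # ACC <- memory[10]
    ("ADD", 11),     # ACC <- ACC + memory[11]
    ("STORE", 12),   # memory[12] <- ACC
    ("HALT", None),
]
memory = {10: 4, 11: 5, 12: 0}

pc, acc, running = 0, 0, True
while running:
    opcode, operand = program[pc]   # fetch (decoding is trivial here)
    pc += 1                         # PC now points at the next instruction
    if opcode == "LOAD":            # execute
        acc = memory[operand]
    elif opcode == "ADD":
        acc += memory[operand]
    elif opcode == "STORE":         # write back to memory
        memory[operand] = acc
    elif opcode == "HALT":
        running = False

print(memory[12])  # 9
```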
Instruction execution: Executing an instruction involves the following steps:

• The PC (program counter) register of the processor gives the address of the
instruction which needs to be fetched from memory.

• Once the instruction is fetched, its opcode is decoded. On decoding, the
processor identifies the number of operands. If any operand has to be fetched
from memory, that operand's address is calculated.

• Operands are fetched from memory. If there is more than one operand, the
operand-fetching process may be repeated (i.e., address calculation and
operand fetch).

• The data operation is then performed on the operands, and a result is
generated.

• If the result has to be stored in a register, the instruction ends here.

• If the destination is memory, the destination address is calculated first,
and the result is then stored in memory. If multiple results need to be stored
in memory, this process may repeat (i.e., destination address calculation and
result store).

• The current instruction has now been executed. In parallel, the PC is
incremented to give the address of the next instruction.

• The instruction cycle then repeats for further instructions.

#Memory Reference-
In computer architecture, a memory reference refers to the process of
accessing data stored in the computer's memory. This access typically involves
reading data from or writing data to specific memory locations. Memory
references are a fundamental aspect of program execution and data
manipulation in a computer system.

Here's a breakdown of memory reference:


1. Read Operation: In a read memory reference, the processor retrieves data
from a specific memory address. The processor sends the address of the
desired data to the memory subsystem, which retrieves the data stored at that
address and returns it to the processor. The processor can then use the
retrieved data for further processing.

2. Write Operation: In a write memory reference, the processor stores data
into a specific memory address. The processor sends both the address and the
data to be written to the memory subsystem. The memory subsystem then
stores the data at the specified address in the memory.

3. Addressing Modes: Memory references can involve different addressing
modes, which determine how the memory address is calculated or specified.
Common addressing modes include direct addressing (using a fixed memory
address), indirect addressing (using a memory address stored in another
location), and indexed addressing (using a base address plus an offset).

4. Data Movement: Memory references facilitate the movement of data
between the processor and memory, allowing programs to access and
manipulate data stored in memory. This includes loading data into registers for
processing, storing results back into memory, and transferring data between
different memory locations.

5. Cache Memory: In modern computer architectures, memory references also
involve interactions with cache memory. Cache memory is a small, high-speed
memory located closer to the processor than main memory. The processor
often checks the cache first when performing memory references. If the
required data is found in the cache (cache hit), the processor can access it
more quickly. Otherwise, the data must be retrieved from main memory (cache
miss).
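The cache hit/miss behaviour in item 5 can be sketched as follows; modelling the cache as a plain dictionary is a simplification of real tag/line-based caches, used here only for illustration:

```python
# Toy cache lookup: check a small cache first (hit); on a miss, go to
# main memory and fill the cache for later accesses.

main_memory = {addr: addr * 10 for addr in range(100)}  # toy backing store
cache = {}            # address -> value; real caches use tags and lines
hits = misses = 0

def read(addr):
    global hits, misses
    if addr in cache:             # cache hit: fast path
        hits += 1
    else:                         # cache miss: fetch from memory, fill cache
        misses += 1
        cache[addr] = main_memory[addr]
    return cache[addr]

for addr in (5, 6, 5, 5, 7):      # repeated addresses hit after the first miss
    read(addr)
print(hits, misses)  # 2 3
```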

6. Performance Considerations: Memory references play a crucial role in
determining the performance of a computer system. Efficient memory access
patterns, proper utilization of caching mechanisms, and minimizing the number
of memory accesses can significantly improve the overall performance of
programs and systems.

Memory references in computer architecture involve accessing data stored in
memory through read or write operations. These references are fundamental
to program execution and data manipulation, and their efficient management
is essential for achieving optimal system performance.

#Input Output and Interrupt-


In computer architecture, input/output (I/O) and interrupts are essential
components that enable a computer system to communicate with external
devices and handle asynchronous events efficiently.

Here's an explanation of each:


1. Input/Output (I/O):
- Input/output refers to the process of exchanging data between the
computer system and external devices such as keyboards, mice, displays,
storage devices, and network interfaces.
- Input devices provide data to the computer system, while output devices
receive data from the computer system.
- I/O operations involve transferring data between the CPU and I/O devices,
typically through dedicated I/O interfaces or controllers.
- The CPU communicates with I/O devices using specialized instructions and
I/O addresses, which enable data transfer, status querying, and control
operations.
- I/O devices may operate at different speeds and use various communication
protocols, necessitating efficient I/O mechanisms to handle data transfers
without significantly impacting CPU performance.
- Modern computer architectures often employ techniques such as direct
memory access (DMA) and I/O channels/controllers to offload data transfer
tasks from the CPU and improve overall system performance.

2. Interrupts:
- Interrupts are asynchronous signals generated by hardware devices or
software events that require immediate attention from the CPU.
- Interrupts enable the CPU to respond to external events and handle time-
critical tasks efficiently without continuously polling devices or waiting for
events to occur.
- When an interrupt occurs, the CPU temporarily suspends its current
execution and transfers control to an interrupt handler or interrupt service
routine (ISR) associated with the interrupt source.
- The interrupt handler executes the necessary operations to handle the
interrupt, such as servicing the I/O device, updating system status, or
performing context switching.
- Interrupts are prioritized based on their urgency and importance, allowing
the CPU to handle multiple concurrent interrupts in a predefined order.
- Interrupts can be classified into various types, including hardware interrupts
(e.g., I/O interrupts, timer interrupts) and software interrupts (e.g., system
calls, software exceptions).
- Interrupt handling involves saving the CPU's current state, including the
program counter and processor registers, before executing the interrupt
handler. Once the interrupt is serviced, the CPU restores its previous state and
resumes normal execution.

Input/output (I/O) and interrupts are fundamental concepts in computer
architecture that enable the interaction between a computer system and
external devices, as well as the efficient handling of asynchronous events.
These mechanisms play a crucial role in ensuring the responsiveness, flexibility,
and overall functionality of modern computing systems.

Interrupt Cycle
In computer architecture, the interrupt cycle refers to the sequence of events
that occur when the CPU receives and handles an interrupt signal from an
external device or an internal source. The interrupt cycle enables the CPU to
respond to asynchronous events promptly without delaying the execution of
ongoing tasks.

Here's an overview of the interrupt cycle:


1. Interrupt Request (IRQ):
- An interrupt request is generated by an external device (e.g., I/O controller,
timer, or peripheral) or an internal condition (e.g., hardware error, software
exception).
- The device or condition asserts an interrupt signal to request the CPU's
attention.

2. Interrupt Signal Detection:


- The CPU continuously monitors the interrupt lines to detect incoming
interrupt signals.
- When an interrupt signal is detected, the CPU temporarily suspends its
current execution and acknowledges the interrupt.

3. Interrupt Handling:
- Upon detecting an interrupt signal, the CPU transfers control to a predefined
interrupt handler or interrupt service routine (ISR) associated with the specific
interrupt source.
- The interrupt handler is a special software routine designed to handle the
specific event or condition that triggered the interrupt.
- The CPU saves the current execution state (e.g., program counter, processor
registers) onto the stack or in designated interrupt context storage to ensure
that the interrupted task can be resumed later.

4. Interrupt Service Routine (ISR) Execution:


- The CPU begins executing the ISR, which performs the necessary actions to
handle the interrupt.
- These actions may include servicing the requesting device, updating system
status or data structures, initiating I/O operations, or responding to system
events.
- The ISR typically executes as quickly and efficiently as possible to minimize
the interruption to normal system operation.

5. Interrupt Completion:
- Once the ISR completes its execution and the interrupt request is serviced,
the CPU restores the saved execution state from the stack or interrupt context
storage.
- This includes restoring the program counter and processor registers to
resume execution of the interrupted task seamlessly.

6. Resumption of Normal Execution:


- After completing the ISR, the CPU resumes execution of the interrupted task
from the point where it was interrupted.
- The CPU continues executing instructions as usual until another interrupt
occurs or until the current task is completed.

The interrupt cycle allows the CPU to handle various types of asynchronous
events efficiently while maintaining responsiveness and multitasking
capabilities in the computer system. Proper handling of interrupts is essential
for ensuring reliable and timely operation of modern computing systems.
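The interrupt cycle above can be sketched as a toy loop that checks for a pending interrupt at the end of each instruction; the program, the ISR, and the single-flag interrupt line are all illustrative simplifications:

```python
# Toy interrupt cycle: execute an instruction, check the interrupt line,
# save state, run the ISR, restore state, and resume.

pending_interrupt = {"flag": False}
log = []

def isr():
    # interrupt service routine: handle the event, then return
    log.append("ISR: service device")

def run(program):
    pc = 0
    while pc < len(program):
        log.append(program[pc])              # execute the current instruction
        pc += 1                              # PC points at the next instruction
        if pending_interrupt["flag"]:        # end of cycle: check interrupt lines
            saved_pc = pc                    # save execution state (just PC here)
            pending_interrupt["flag"] = False  # acknowledge the interrupt
            isr()                            # transfer control to the handler
            pc = saved_pc                    # restore state and resume seamlessly

pending_interrupt["flag"] = True             # a device raises an interrupt request
run(["inst0", "inst1", "inst2"])
print(log)  # ['inst0', 'ISR: service device', 'inst1', 'inst2']
```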

#Interconnection Structure-
In computer architecture, the interconnection structure refers to the
arrangement and organization of connections between various components
within a computer system. These connections facilitate communication, data
transfer, and coordination among different hardware elements, such as
processors, memory modules, input/output devices, and other peripherals. The
interconnection structure plays a crucial role in determining the system's
performance, scalability, and overall functionality.

Here's an explanation of key aspects of the interconnection structure:


1. Bus-Based Interconnection:
- A common approach to interconnection is the use of a shared
communication bus, such as the system bus or memory bus.
- In a bus-based architecture, components are connected to the bus, which
serves as a communication medium for transferring data and control signals
between them.
- The bus typically consists of multiple parallel lines (data lines, address lines,
control lines) through which information flows between components.
- Bus-based interconnection is simple and cost-effective but may lead to
congestion and limited scalability as the number of connected devices
increases.

2. Point-to-Point Interconnection:
- In point-to-point interconnection, components are connected directly to
each other using dedicated communication links.
- Each connection forms a direct path between two specific components,
allowing for high-speed, low-latency communication.
- Point-to-point interconnection is commonly used in modern architectures
for connecting processors, memory modules, and high-speed I/O devices.
- It offers greater scalability and performance compared to bus-based
interconnection but may be more complex and expensive to implement.
3. Network Interconnection:
- Network-based interconnection involves connecting components using
network protocols and communication protocols, such as Ethernet or
InfiniBand.
- Components communicate over a network using packet-switching
techniques, similar to those used in computer networks.
- Network interconnection is highly scalable and flexible, allowing for the
connection of distributed systems and clusters of computers.
- It is commonly used in large-scale parallel and distributed computing
environments, such as data centres and supercomputers.

4. Switched Interconnection:
- Switched interconnection architectures employ switches or routers to direct
data packets between multiple interconnected components.
- Switches dynamically route data packets based on destination addresses,
allowing for efficient and flexible communication.
- Switched interconnection is commonly used in high-performance computing
systems, storage area networks (SANs), and distributed computing
environments.

5. Topology:
- The interconnection structure's topology refers to the arrangement of
connections between components.
- Common topologies include bus topology, star topology, mesh topology, and
tree topology, each offering different trade-offs in terms of performance,
scalability, and fault tolerance.
The interconnection structure in computer architecture defines how
components are connected and communicate within a computer system. The
choice of interconnection architecture and topology depends on factors such
as system requirements, performance goals, and scalability.

Interconnection Structures in Multiprocessor Systems
The processors must be able to share a set of main memory modules & I/O
devices in a multiprocessor system. This sharing capability can be provided
through interconnection structures. The interconnection structure that are
commonly used can be given as follows –
1. Time-shared / Common Bus
2. Cross bar Switch
3. Multiport Memory
4. Multistage Switching Network (Covered in 2nd part)
5. Hypercube System
In this article, we will cover Time shared / Common Bus in detail.
1. Time-shared / Common Bus (Interconnection structure in Multiprocessor
System) :
In a multiprocessor system, the time shared bus interconnection provides a
common communication path connecting all the functional units like
processor, I/O processor, memory unit etc. The figure below shows the
multiple processors with common communication path (single bus).
To communicate with any functional unit, a processor needs the bus to
transfer data. Before doing so, the processor must first check the status of
the bus to see whether it is available. If the bus is being used by some
other functional unit, the status is busy; otherwise it is free.

A processor can use the bus only when it is free. The sender processor puts
the address of the destination on the bus & the destination unit identifies it.
To communicate with a functional unit, a command is issued telling that unit
what work is to be done. The other processors at that time are either busy
with internal operations or sit idle, waiting to get the bus.

We can use a bus controller to resolve conflicts, if any. (Bus controller can set
priority of different functional units)

This single-bus multiprocessor organization is the easiest to reconfigure &
is simple. This interconnection structure contains only passive elements; the
bus interfaces of the sender & receiver units control the transfer operation.
To decide access to the common bus without conflicts, methods such as static &
fixed priorities, First-In-First-Out (FIFO) queues & daisy chains can be used.
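The busy/free status check and the FIFO queue mentioned above can be sketched as a toy model (the class and method names are illustrative, not part of any real bus standard):

```python
from collections import deque

class TimeSharedBus:
    """Toy model of a single shared bus: one transfer at a time,
    with conflicts resolved by a FIFO queue of waiting units."""

    def __init__(self):
        self.busy_with = None    # unit currently holding the bus
        self.waiting = deque()   # FIFO queue of requesters

    def request(self, unit):
        if self.busy_with is None:
            self.busy_with = unit    # bus is free: grant immediately
            return True
        self.waiting.append(unit)    # bus is busy: wait in FIFO order
        return False

    def release(self):
        """Current holder gives up the bus; next queued unit gets it."""
        self.busy_with = self.waiting.popleft() if self.waiting else None
        return self.busy_with
```

Here a second requester is refused while the bus is busy, queued, and then granted the bus in FIFO order once the current holder releases it.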

Advantages –
• Inexpensive, as no extra hardware such as a switch is required.
• Simple & easy to configure, as the functional units are directly connected
to the bus.
Disadvantages –
• The major flaw with this kind of configuration is that if a malfunction
occurs in any of the bus interface circuits, the complete system will fail.
• Decreased throughput — at a time, only one processor can communicate
with any other functional unit.
• Increased arbitration logic — as the number of processors & memory units
increases, the bus contention problem increases.
To solve the above disadvantages, we can use two uni-directional buses:

Both buses are required in a single transfer operation. Here the system
complexity is increased & the reliability is decreased. The solution is to use
multiple bi-directional buses.

Multiple bi-directional buses :


A multiple bi-directional bus system contains several buses, each of which is
bi-directional. It permits as many simultaneous transfers as there are buses,
but here also the complexity of the system is increased.
Apart from the organization, there are many factors affecting the performance
of a bus. They are –
• Number of active devices on the bus
• Data width
• Error detection method
• Synchronization of data transfer, etc.

Advantages of multiple bi-directional buses –
• Lowest cost for hardware, as no extra device such as a switch is needed.
• Modifying the hardware system configuration is easy.
• Less complex compared to other interconnection schemes, as there are only
two buses & all the components are connected via those buses.
Disadvantages of multiple bi-directional buses –
• System expansion will degrade performance, because as the number of
functional units increases more communication is required, but at a time
only one transfer can happen per bus.
• Overall system capacity limits the transfer rate & if a bus fails, the
whole system will fail.
• Suitable for small systems only.
2. Crossbar Switch :
If the number of buses in a common bus system is increased, a point is
reached at which there is a separate path available for each memory module.
A crossbar switch (for multiprocessors) provides a separate path for each
module.
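Because each memory module has its own path, requests through a crossbar conflict only when two processors target the same module. A minimal sketch, with an assumed first-come arbitration policy (the function name and policy are mine):

```python
def crossbar_schedule(requests):
    """Grant simultaneous processor-to-memory transfers through a crossbar.
    `requests` maps processor -> desired memory module. Each module has its
    own path, so only requests for the SAME module conflict; here the first
    requester (in dict order) wins, a deliberately simple policy."""
    granted, taken = {}, set()
    for proc, module in requests.items():
        if module not in taken:
            granted[proc] = module
            taken.add(module)
    return granted
```

With three processors where two of them want module 0, two transfers proceed in parallel and one processor must wait for the next round.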

3. Multiport Memory :
In a multiport memory system, the control, switching & priority arbitration
logic are distributed throughout the crossbar switch matrix, which is
distributed at the interfaces to the memory modules.

4. Hypercube Interconnection :
This is a binary n-cube architecture. Here we can connect 2^n processors, and
each processor forms a node of the cube. A node can also be a memory module
or an I/O interface, not necessarily a processor. The processor at a node has
direct communication paths to n other nodes (out of the 2^n nodes in total).
There are in total 2^n distinct n-bit binary addresses.
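The n direct neighbours of a node follow from the addressing: two nodes are connected exactly when their n-bit addresses differ in one bit. A small sketch (the function name is mine):

```python
def hypercube_neighbours(node, n):
    """In a binary n-cube, node addresses are n-bit numbers and two nodes
    are linked iff their addresses differ in exactly one bit position, so
    each node has exactly n direct neighbours out of 2**n nodes."""
    return [node ^ (1 << bit) for bit in range(n)]
```

For n = 3 (a cube of 2^3 = 8 nodes), node 000 connects directly to 001, 010 and 100.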

Difference between Time Shared Bus, Crossbar Switch & Multiport Memory

|    | Time Shared Bus | Crossbar Switch | Multiport Memory |
|----|-----------------|-----------------|------------------|
| 1. | Lowest cost for hardware & least complex. | Cost-effective for multiprocessors only, as a basic switching matrix is required (to assemble functional units). | As most of the control & switching circuitry is in the memory unit, it is expensive. |
| 2. | System expansion will degrade performance. | System expansion can improve performance. | It is difficult to expand the system (design). |
| 3. | Overall system capacity limits the transfer rate & if the bus fails, the whole system will fail. | Transfer rate is high but more complex. | Potential for a very high total transfer rate. |
| 4. | Modifying the hardware system configuration is easy. | Limited expansion of the system, only by the size of the switch matrix. | Lots of cables & connectors are required. |
| 5. | We cannot have transfers from all memory modules simultaneously. | We can have transfers from all memory modules simultaneously. | We can have transfers from all memory modules simultaneously. |
| 6. | Lowest efficiency & suitable for smaller systems only. | Functional units are the simplest & cheapest. | Functional units permit a low-cost uniprocessor. |

Conclusion :
The interconnection structure can decide the overall system's performance in
a multiprocessor environment. Although a common bus system is easy & simple,
the availability of only one path is its major drawback, & if the bus fails,
the whole system fails. To overcome this & to improve overall performance,
the crossbar, multiport, hypercube & then multistage switching network
evolved.

#Bus Interconnection design of basic computer-


In the design of a basic computer, the bus interconnection plays a vital role in
facilitating communication between different components of the system. A
basic computer typically consists of a CPU (Central Processing Unit), memory,
and various input/output (I/O) devices. The bus interconnection design ensures
that these components can exchange data, instructions, and control signals
effectively.

Here's an explanation of the bus interconnection design in a basic computer:


1. Data Bus:
- The data bus is a set of parallel wires or conductors used to transfer data
between the CPU, memory, and I/O devices.
- It carries binary data in the form of bytes or words (multiple bytes) between
components.
- The width of the data bus determines the maximum number of bits that can
be transferred simultaneously. For example, an 8-bit data bus can transfer 8
bits (1 byte) at a time, while a 16-bit data bus can transfer 16 bits (2 bytes) at a
time.
- The data bus is bidirectional, allowing data to flow in both directions
between components.
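The effect of data-bus width can be made concrete: a payload needs fewer bus cycles as the bus widens. A minimal sketch, assuming the width is a whole number of bytes (the function name is mine):

```python
import math

def transfers_needed(payload_bytes, bus_width_bits):
    """Number of bus cycles to move a payload over a parallel data bus
    that carries `bus_width_bits` bits per transfer."""
    bytes_per_transfer = bus_width_bits // 8
    return math.ceil(payload_bytes / bytes_per_transfer)
```

Moving 16 bytes takes 16 transfers on an 8-bit bus but only 8 on a 16-bit bus.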

2. Address Bus:
- The address bus is another set of parallel wires used by the CPU to specify
memory addresses or I/O device addresses for data transfer.
- It carries binary addresses that identify the locations in memory or I/O
space where data is to be read from or written to.
- The width of the address bus determines the maximum number of memory
locations or I/O addresses that can be addressed. For example, a 16-bit address
bus can address up to 64 KB of memory (2^16 = 65,536 bytes).
- The address bus is unidirectional, as addresses are generated by the CPU
and sent to memory or I/O devices.
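The 2^16 = 65,536 figure above generalizes directly: a w-bit address bus can select 2^w distinct locations. As a sketch (the function name is mine):

```python
def addressable_locations(address_bus_width):
    """A w-bit address bus can put 2**w distinct addresses on its lines;
    with byte-addressable memory that corresponds to 2**w bytes."""
    return 2 ** address_bus_width
```

For example, addressable_locations(16) gives 65,536 (64 KB), and a 20-bit address bus reaches 1,048,576 locations (1 MB).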

3. Control Bus:
- The control bus consists of various control signals used to coordinate and
control the operations of the CPU, memory, and I/O devices.
- It carries signals such as read/write signals, memory enable signals,
interrupt signals, clock signals, and bus control signals.
- The control bus signals indicate the type of operation to be performed (e.g.,
read or write), the direction of data transfer, and the timing of operations.
- The control bus ensures proper synchronization and coordination of
activities within the computer system.

4. Bus Arbitration:
- In systems with multiple bus masters (e.g., CPU and I/O devices), bus
arbitration mechanisms are used to manage access to the bus.
- Bus arbitration determines which device has control of the bus at any given
time, preventing conflicts and ensuring that data transfer occurs smoothly.
- Common bus arbitration schemes include priority-based arbitration, round-
robin arbitration, and centralized arbitration.
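Two of the arbitration schemes named above can be sketched in a few lines (the device names used here are placeholders):

```python
def priority_arbiter(requests, priority):
    """Fixed-priority arbitration: among the devices currently requesting
    the bus, grant the one that appears earliest in `priority`."""
    for device in priority:
        if device in requests:
            return device
    return None

def round_robin_arbiter(requests, order, last_granted):
    """Round-robin arbitration: scan `order` starting just after the
    last grant, so every requesting device eventually gets a turn."""
    start = (order.index(last_granted) + 1) % len(order)
    for i in range(len(order)):
        device = order[(start + i) % len(order)]
        if device in requests:
            return device
    return None
```

With fixed priority, a high-priority device always wins whenever it requests; round-robin instead resumes scanning just after the last grant, which prevents any single device from monopolizing the bus.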
Overall, the bus interconnection design of a basic computer facilitates the
exchange of data, addresses, and control signals between the CPU, memory,
and I/O devices. It ensures efficient communication and coordination of
activities within the system, enabling the computer to execute instructions and
perform tasks effectively.
