Number of Bits Required For Memory Address


1.a.) What are the different write policies?

Ans: The different write policies are:


1. Write-Through
2. Write-Back
3. Write-Allocate
4. Write-No-Allocate (Write-Around)
5. Write-Once
6. Write-Many

b.)What are the functions to be performed by an I/O interface?


Ans: An I/O interface facilitates data transfer between the CPU and external devices and also manages protocols,
buffers data, handles errors, and controls device operations and status.
c.) Differentiate between RISC and CISC?
Ans: RISC (Reduced Instruction Set Computing):
1)Emphasizes a simplified instruction set with a focus on basic operations.
2)Prioritizes single-cycle instruction execution for faster processing.
CISC (Complex Instruction Set Computing):
1)Includes a diverse range of instructions, including complex operations and addressing modes.
2)Allows for multi-cycle instruction execution, including microcoded operations that may take multiple
cycles to complete.
d.) WAP (write a program) that can evaluate the expression (A*B) + (C*D) in a single-accumulator processor?
Ans: LOAD A ; Load value of A into accumulator
MULTIPLY B ; Multiply accumulator by value of B
STORE TEMP ; Store intermediate result in a temporary memory location
LOAD C ; Load value of C into accumulator
MULTIPLY D ; Multiply accumulator by value of D
ADD TEMP ; Add the stored intermediate result (A*B) to accumulator
STORE RESULT ; Store the final result in memory or a register
e.)What is memory interleaving?
Ans: Memory interleaving is a technique used to enhance memory access performance in computer systems
by distributing memory addresses across multiple memory modules or banks. This approach allows for
concurrent access to multiple modules during read and write operations, reducing access latency and
improving overall system throughput.
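As a rough illustration, here is a minimal sketch of low-order interleaving (the bank count and helper name are hypothetical, chosen only for this example):

```python
NUM_BANKS = 4  # assumed bank count for illustration

def map_address(address):
    """Low-order interleaving: consecutive addresses fall in
    consecutive banks, so sequential accesses can overlap."""
    bank = address % NUM_BANKS      # low-order bits select the bank
    offset = address // NUM_BANKS   # remaining bits select the word in the bank
    return bank, offset

for addr in range(8):
    print(addr, map_address(addr))
# addresses 0..3 map to banks 0..3; address 4 wraps around to bank 0
```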
f.) A computer's memory is composed of 16K words of 32 bits each. How many bits are required for the memory
address if the smallest addressable memory unit is a word? What will be the total address space?
Ans:

Number of bits required for memory address:

The memory contains 16K = 16 × 1,024 = 16,384 = 2^14 words. Since the smallest addressable unit is a word,
the address must be able to select any one of these 2^14 word locations, so 14 bits are required. (The
32-bit word size determines the width of each location, not the number of address bits.)

Total address space:

Total address space = 2 ^ number of address bits

In this case:

• Number of address bits = 14 (as calculated earlier)

So, the total address space:

Total address space = 2 ^ 14 = 16,384 (words)

Therefore:

• Memory address bits: 14 bits

• Total address space: 16,384 words (each 32 bits wide)
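A quick way to check this arithmetic (a throwaway sketch, not part of the answer itself):

```python
import math

words = 16 * 1024                      # 16K words, each 32 bits wide
address_bits = int(math.log2(words))   # bits needed to select one word
print(address_bits)                    # 14
print(2 ** address_bits)               # 16384 words of address space
```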

g.) Why are guard bits required in floating-point operations?


Ans: Guard bits in floating-point operations are extra bits of precision carried during calculations to
preserve accuracy. They help mitigate rounding errors that occur because of the limited precision of
floating-point representations.
h.) Define temporal locality and spatial locality.
Ans: Temporal locality refers to the principle that recently accessed memory locations are likely to
be accessed again in the near future.
Spatial locality refers to the principle that memory locations near the currently accessed
location are likely to be accessed soon.
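A tiny loop illustrating both principles (the variable names are arbitrary):

```python
data = list(range(1024))
total = 0
for i in range(len(data)):
    # 'total' is touched on every iteration      -> temporal locality
    # data[i] is adjacent to data[i-1] in memory -> spatial locality
    total += data[i]
print(total)
```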
i) A computer system has a main memory consisting of 1M 16-bit words. It also has a 4K-word cache
organized in the block-set-associative manner, with 4 blocks per set and 64 words per block. Calculate the
number of bits in each of the tag, set, and word fields.
Ans:
1. Total Address Bits (m):
To find the number of bits (m) required, we can use the formula: 2^m = total words
So, 2^m = 1,048,576
m = log2(1,048,576) = 20 bits (exactly 20 bits are needed to address all words in main memory)
2. Word Offset (n):
Since each block contains 64 words (2^6 words), we need: n = log2(2^6) = 6 bits
3. Set Index (s):
the total number of blocks (B) is: B = Cache size / Block size = 4096/ 64 = 64 blocks
the total number of sets (S) is: S = Total blocks / Blocks per set = 64 / 4= 16 sets
To address these 16 sets, we need: s = log2(16) = 4 bits
4. Tag (t):
the number of tag bits (t) follows from the formula: t = m - n - s
t = 20-6-4= 10 bits
Therefore:
• Tag: 10 bits
• Set Index: 4 bits
• Word Offset: 6 bits
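The same field sizes can be checked with a short sketch (all values taken from the question):

```python
import math

main_memory_words = 1024 * 1024     # 1M words
cache_words = 4 * 1024              # 4K-word cache
words_per_block = 64
blocks_per_set = 4

m = int(math.log2(main_memory_words))                       # address bits = 20
n = int(math.log2(words_per_block))                         # word field  = 6
sets = (cache_words // words_per_block) // blocks_per_set   # 64 / 4 = 16 sets
s = int(math.log2(sets))                                    # set field   = 4
print("tag =", m - n - s, "set =", s, "word =", n)          # tag = 10 set = 4 word = 6
```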
j.)Define the terms
a.)micro instruction b.)microprogram c.)control memory d.)control word
Ans:
a) Microinstruction: A low-level instruction used in microprogramming, controlling operations at a granular
level in a CPU.
b) Microprogram: A sequence of microinstructions defining the behavior of a CPU's control unit, enabling
complex control logic implementation.
c) Control Memory: Also called a control store, it stores microprograms, which are sequences of
microinstructions defining CPU control behavior.
d) Control Word: A binary word or set of bits used to control various operations in a computer system, often
used in microprogrammed control units to specify microinstructions or control signals.
2.a) Illustrate the basic operational concepts in transferring data between main memory and the processor,
with a neat diagram.
Ans:
Components:
1) Main Memory (MM): Stores program instructions and data.
2) Processor (CPU): Contains the Control Unit (CU) and Arithmetic Logic Unit (ALU) for
processing.
3) Memory Address Register (MAR): Holds the address of the memory location to be accessed.
4) Memory Data Register (MDR): Holds the data being read from or written to memory.

Steps:
1. Fetch: The CU initiates the data transfer by sending the address of the desired data to the
MAR. The MAR address is then sent to the MM.
2. Read: The MM locates the data at the specified address. The data is retrieved from MM and placed
in the MDR.
3. Transfer: The MDR transfers the data to the appropriate register or processing unit within the CPU
(e.g., ALU) for further processing.

Diagram: (block diagram showing the CPU — with MAR, MDR, CU and ALU — connected to main memory by address and data buses; image not reproduced)
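As a rough sketch of the steps above, this toy model walks through fetch, read, and transfer (the class and method names are made up for illustration):

```python
class Processor:
    """Toy CPU with just the two memory-interface registers."""
    def __init__(self, memory):
        self.memory = memory   # main memory modeled as a list of words
        self.MAR = 0           # Memory Address Register
        self.MDR = 0           # Memory Data Register

    def read_word(self, address):
        self.MAR = address                 # Fetch: CU places the address in MAR
        self.MDR = self.memory[self.MAR]   # Read: MM returns the data into MDR
        return self.MDR                    # Transfer: MDR feeds the ALU/registers

cpu = Processor(memory=[100, 200, 300])
print(cpu.read_word(2))   # 300
```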
2.b.) What is byte addressing? Differentiate between big-endian and little-endian addressability
assignments.

Ans:

Byte addressing refers to the method by which individual bytes in computer memory are accessed and
manipulated. In byte addressing, each byte in memory is assigned a unique address, allowing the
processor to read from or write to specific bytes directly.

Big endian and little endian are two byte ordering schemes that dictate how multi-byte data types (such
as integers and floating-point numbers) are stored in memory. The difference lies in the order in which
bytes are arranged.

• Big Endian: In big endian, the most significant byte (MSB) is stored at the lowest memory
address, while the least significant byte (LSB) is stored at the highest memory address. It's like
reading a number from left to right, where the leftmost digit is the most significant.
• Little Endian: Conversely, in little endian, the least significant byte (LSB) is stored at the
lowest memory address, and the most significant byte (MSB) is stored at the highest memory
address. This ordering is akin to reading a number from right to left, with the rightmost digit
being the least significant.

To understand the difference in byte addressability assignments in these two schemes, consider a 4-byte
integer 0x12345678 stored in memory:

• Big Endian:
• Memory Address 0x1000: 0x12
• Memory Address 0x1001: 0x34
• Memory Address 0x1002: 0x56
• Memory Address 0x1003: 0x78
• Little Endian:
• Memory Address 0x1000: 0x78
• Memory Address 0x1001: 0x56
• Memory Address 0x1002: 0x34
• Memory Address 0x1003: 0x12

In big endian, byte significance decreases as the memory address increases: the most significant byte sits
at the lowest address. In little endian, the same bytes are stored in reverse order, with the least
significant byte at the lowest address.
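Python's struct module makes the two layouts easy to see (a minimal demonstration):

```python
import struct

value = 0x12345678
print(struct.pack('>I', value).hex())   # big endian:    12345678
print(struct.pack('<I', value).hex())   # little endian: 78563412
```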

3.a.) Why is the non-restoring division method faster than the restoring division method?
Divide (1000)2 by (11)2 using the non-restoring method (also describe the restoring method).
Ans: Non-Restoring Division and Restoring Division are two algorithms used in computer arithmetic for dividing
one number by another. In restoring division, whenever a trial subtraction leaves a negative partial
remainder, the divisor must be added back (the remainder "restored") before the next step, so such iterations
cost two ALU operations. Non-restoring division folds this correction into the next iteration, performing
exactly one addition or subtraction per step, with at most a single restoring addition at the very end; this
lower per-iteration overhead makes it faster.

Problem: https://www.youtube.com/watch?v=ge09GjFUmKg
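For (1000)2 ÷ (11)2, i.e. 8 ÷ 3, the method yields quotient (10)2 = 2 and remainder (10)2 = 2. A minimal sketch of the algorithm, assuming unsigned n-bit operands (the function name is illustrative):

```python
def non_restoring_divide(dividend, divisor, n):
    """Non-restoring division of two n-bit unsigned integers.
    Returns (quotient, remainder)."""
    A, Q, M = 0, dividend, divisor
    for _ in range(n):
        # Shift the combined A:Q register pair left by one bit.
        A = (A << 1) | ((Q >> (n - 1)) & 1)
        Q = (Q << 1) & ((1 << n) - 1)
        # One add OR subtract per iteration -- no restore step.
        A = A - M if A >= 0 else A + M
        # The new quotient bit comes from the sign of A.
        if A >= 0:
            Q |= 1
    if A < 0:        # single final correction if the remainder is negative
        A += M
    return Q, A

print(non_restoring_divide(0b1000, 0b11, 4))   # (2, 2): quotient 10, remainder 10
```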

3.b.) Mention the sequence of control steps required to perform the operation Add [R3], R1 in a single-bus
organization.

Ans:
In a single-bus organization, the instruction Add [R3], R1 adds the operand at the memory location pointed to
by R3 to the contents of register R1 and stores the sum in R1. Because only one item can travel over the bus
at a time, the control unit issues one group of control signals per step:

1. PCout, MARin, Read, Select4, Add, Zin — send the PC to memory to fetch the instruction and, in parallel,
compute PC + 4 in the ALU.
2. Zout, PCin, Yin, WMFC — load the incremented PC back and wait for the memory function to complete.
3. MDRout, IRin — transfer the fetched instruction into the instruction register.
4. R3out, MARin, Read — send the operand address held in R3 to memory.
5. R1out, Yin, WMFC — move R1 into the Y register while waiting for the memory operand.
6. MDRout, SelectY, Add, Zin — add the memory operand to R1 in the ALU.
7. Zout, R1in, End — store the result in R1 and end the instruction.

5.a.) List the various addressing modes; explain any three with an example of each.

Ans: List of various addressing modes:

1. Immediate Addressing Mode


2. Register Addressing Mode
3. Direct Addressing Mode
4. Indirect Addressing Mode
5. Indexed Addressing Mode
6. Relative Addressing Mode
7. Base-Register Addressing Mode
8. Memory Indirect Addressing Mode
9. Scaled Index Addressing Mode
10. Stack Addressing Mode

I)Immediate Addressing Mode:


In this mode, the operand itself is specified directly within the instruction.
Example: MOV AX, 5 ; Load the immediate value 5 into register AX
II)Register Addressing Mode:
Here, operands are located in registers.
Example: ADD AX, BX ; Add the values in registers AX and BX, storing the result in AX
III)Direct Addressing Mode:
In this mode, the address of the operand is directly specified in the instruction.
Example: MOV AX, [5000] ; Load the value at memory address 5000 into register AX

5.b.) List some disadvantages of the ripple-carry adder. Design a 4-bit carry-look-ahead adder with a diagram.

Ans: Disadvantages of Ripple carry adder:


1. The delay increases linearly with the number of bits, affecting speed.
2. Each stage depends on the carry from the previous stage, limiting parallelism.
3. Continuous carry signal switching leads to increased power usage.
4. Not suitable for high-speed operations due to the delay in carry propagation.
5. Long carry paths can restrict the maximum clock frequency.
6. Performance degrades with more bits, impacting efficiency.
7. Prone to errors from noise or glitches on carry lines.

Design: https://www.youtube.com/watch?v=7zJ5cR0GfxE
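The key idea is that every carry is computed directly from generate (Gi = Ai·Bi) and propagate (Pi = Ai⊕Bi) terms rather than waiting for a ripple. A minimal sketch of the logic, written in Python purely to show the expanded carry equations:

```python
def cla_4bit(a, b, c0=0):
    """4-bit carry-look-ahead adder: every carry is a two-level
    function of the inputs, so no stage waits on the previous one."""
    g = [(a >> i) & (b >> i) & 1 for i in range(4)]     # generate  Gi = Ai.Bi
    p = [((a >> i) ^ (b >> i)) & 1 for i in range(4)]   # propagate Pi = Ai xor Bi
    c1 = g[0] | (p[0] & c0)
    c2 = g[1] | (p[1] & g[0]) | (p[1] & p[0] & c0)
    c3 = g[2] | (p[2] & g[1]) | (p[2] & p[1] & g[0]) | (p[2] & p[1] & p[0] & c0)
    c4 = (g[3] | (p[3] & g[2]) | (p[3] & p[2] & g[1])
          | (p[3] & p[2] & p[1] & g[0]) | (p[3] & p[2] & p[1] & p[0] & c0))
    s = [p[0] ^ c0, p[1] ^ c1, p[2] ^ c2, p[3] ^ c3]    # sum bits Si = Pi xor Ci
    return sum(bit << i for i, bit in enumerate(s)), c4

print(cla_4bit(0b1011, 0b0110))   # (1, 1): 11 + 6 = 17 -> sum 0001, carry-out 1
```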

6.a.) Differentiate between associative and set-associative cache mapping with examples.

Ans:

1. Tag comparison: Associative mapping needs comparison with all tag bits, i.e., the cache control logic
must examine every block's tag simultaneously to determine whether a block is in the cache. Set-associative
mapping needs only as many comparisons as there are blocks per set.
2. Address division: In associative mapping the main memory address is divided into 2 fields: TAG and WORD.
In set-associative mapping it is divided into 3 fields: TAG, SET and WORD.
3. Block placement: In associative mapping, a main memory block can be mapped to any cache block. In
set-associative mapping, a block can be mapped only to a block within one particular set, as if the cache
were a collection of small associative caches selected in a direct-mapped way.
4. Hit ratio: In associative mapping, frequently accessing the same cache position from two different main
memory blocks has no effect on the hit ratio. In set-associative mapping, frequently accessing more blocks
than a set can hold reduces the hit ratio.
5. Search time: In associative mapping, search time is greater because the cache control logic examines
every block's tag for a match. In set-associative mapping, search time grows with the number of blocks
per set.
6. Index field: The index is zero for associative mapping; in set-associative mapping the index is given by
the number of sets in the cache.
7. Tag bits: Associative mapping has the greatest number of tag bits. Set-associative mapping has fewer tag
bits than associative mapping and more than direct mapping.
8. Cost and performance: Associative mapping is fast but expensive, because it must store the full tag along
with the data and compare all tags in parallel. Set-associative mapping gives better performance than the
direct and associative mapping techniques, but becomes more expensive as the set size increases.
9. Example: A fully associative cache has no set structure at all. A 2-way set-associative cache has
multiple sets with 2 cache lines per set.

6.b.)Represent the floating-point number, -0.012 in 32 bit IEEE format?

Ans: https://www.youtube.com/watch?v=d-dS9UmJM9I
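If the trailing 2 in "-0.012" is read as a base subscript, the value is the binary fraction (-0.01)2 = -0.25 decimal = -1.0 × 2^-2, giving sign 1, biased exponent 127 - 2 = 125 = 01111101, and an all-zero fraction: 1 01111101 00000000000000000000000, i.e. 0xBE800000. A quick check under that assumption:

```python
import struct

# Assuming the question means (-0.01)2 = -0.25 decimal.
bits = struct.unpack('>I', struct.pack('>f', -0.25))[0]
print(f'{bits:032b}')   # 10111110100000000000000000000000
print(f'{bits:08X}')    # BE800000
```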

7.a.) List out the different I/O transfer techniques and briefly explain how DMA is used for transferring data
from peripherals.

Ans: The different I/O transfer techniques are:

1. Programmed I/O (PIO):


• In programmed I/O, the CPU directly controls data transfer between peripherals and memory.
• It involves the CPU issuing commands to the I/O device, waiting for the device to complete
the operation, and then transferring data between the device and memory.
2. Interrupt-Driven I/O:
• In interrupt-driven I/O, the I/O device interrupts the CPU when it needs attention or when an
operation is completed.
• Upon receiving an interrupt, the CPU suspends its current task, processes the I/O request, and
resumes the interrupted task after completing the I/O operation.
• This technique allows the CPU to perform other tasks while waiting for I/O operations to
finish, improving overall system efficiency.
3. DMA (Direct Memory Access):
• DMA is a technique where a dedicated DMA controller is used to transfer data directly
between peripherals and memory without CPU intervention.
• The DMA controller takes over control of the system bus temporarily to perform data
transfers independently of the CPU.
• The CPU sets up the DMA controller by providing it with the necessary transfer parameters
(source address, destination address, transfer size, etc.).
• Once configured, the DMA controller transfers data between the peripheral and memory
without involving the CPU until the transfer is complete or an error occurs.

How DMA is used for transferring data from peripherals:

1. Initialization: The CPU initializes the DMA controller by setting up parameters such as the
source address (peripheral), destination address (memory), transfer size, and transfer mode (e.g.,
read or write).
2. DMA Request: When an I/O device needs to transfer data, it sends a DMA request signal to the
DMA controller.
3. DMA Setup: Upon receiving the DMA request, the DMA controller takes control of the system
bus and initiates the data transfer based on the parameters set by the CPU.
4. Data Transfer: The DMA controller transfers data directly between the peripheral and memory
without CPU involvement, utilizing high-speed bus access and improving overall system
performance.
5. Completion and Notification: Once the data transfer is complete, the DMA controller signals
the CPU through an interrupt or other notification mechanism, allowing the CPU to process the
transferred data or perform follow-up actions as needed.
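The five steps can be pictured with a toy model (all class and register names here are invented for illustration; real DMA controllers expose device-specific registers):

```python
class Peripheral:
    """Fake device that hands out successive data words."""
    def __init__(self, data):
        self._words = iter(data)
    def read_word(self):
        return next(self._words)

class DMAController:
    def __init__(self, memory):
        self.memory = memory
    def setup(self, src_device, dst_address, count):
        # Step 1: the CPU programs source, destination and transfer size.
        self.src, self.dst, self.count = src_device, dst_address, count
    def transfer(self):
        # Steps 2-4: on the device's request, the controller moves data
        # word by word over the bus with no CPU involvement.
        for i in range(self.count):
            self.memory[self.dst + i] = self.src.read_word()
        return "interrupt"   # Step 5: completion is signalled to the CPU

memory = [0] * 8
dma = DMAController(memory)
dma.setup(Peripheral([0xA, 0xB, 0xC]), dst_address=2, count=3)
print(dma.transfer(), memory)   # interrupt [0, 0, 10, 11, 12, 0, 0, 0]
```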

7.b.) Draw a diagram for the organization of a small 16×8 memory chip. Explain its working principle.

Ans: Diagram: (16×8 memory chip: 4 address lines feeding a 4-to-16 row decoder, 8 bidirectional data lines, and R/W control signals; image not reproduced)
Working Principle of a 16x8 Memory Chip:

1. Read Operation:
a. CPU sends address via address lines.
b. Asserts Read (R) control signal.
c. Memory chip activates corresponding memory location.
d. Data from that location sent back to CPU.
2. Write Operation:
a. CPU sends address and data via address and data lines.
b. Asserts Write (W) control signal.
c. Memory chip activates corresponding memory location.
d. Stores data sent by CPU in that location.
3. Memory Access Timing:
a. Memory chip responds within access time.
b. Access time includes activation, data transfer, and completion.
4. Address Decoding:
a. Address lines decoded to select memory location.
5. Data Transfer:
a. Data lines carry binary data during read/write.
b. Control signals guide chip's data transfer direction.
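A toy model of the chip's behaviour (names are illustrative; a real chip would also have chip-select and timing signals):

```python
class Memory16x8:
    """16 locations (4 address lines) of 8 bits each (8 data lines)."""
    def __init__(self):
        self.cells = [0] * 16

    def access(self, address, write, data=0):
        address &= 0xF                     # address decoding: 4 bits -> 1 of 16 rows
        if write:                          # W asserted: latch the data lines
            self.cells[address] = data & 0xFF
            return None
        return self.cells[address]         # R asserted: drive the data lines

chip = Memory16x8()
chip.access(0b1010, write=True, data=0x5C)
print(hex(chip.access(0b1010, write=False)))   # 0x5c
```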

8.a.) Why are cache replacement algorithms required? Consider a fully associative cache with 8 cache blocks and the
following sequence of memory block requests: 10, 12, 25, 7, 19, 6, 25, 8, 16, 35, 45, 22, 7, 12, 16, 25, 20, 11. If the LRU
policy is used, which cache block will hold memory block 11?

Ans:

Cache replacement algorithms are essential in managing the contents of a cache memory, especially when the
cache is full and a new block needs to be fetched. These algorithms decide which block to replace in the cache
when a new block is requested and the cache is already full.

LRU works on the principle of replacing the least recently used block when a new block needs to be brought into
the cache.

Analyzing the given sequence of memory block requests to determine which cache block will hold memory
block 11 under the LRU policy (in each state below, blocks are listed from most recently used to least
recently used):
1. Initial State (Cache is empty):
• Cache Blocks: [Empty, Empty, Empty, Empty, Empty, Empty, Empty, Empty]
• Memory Block 10 is requested: Cache Block 1 gets Memory Block 10.
2. State after Memory Block 10:
• Cache Blocks: [10, Empty, Empty, Empty, Empty, Empty, Empty, Empty]
3. Memory Block 12 is requested:
• Cache Blocks: [12, 10, Empty, Empty, Empty, Empty, Empty, Empty]
4. Memory Block 25 is requested:
• Cache Blocks: [25, 12, 10, Empty, Empty, Empty, Empty, Empty]
5. Memory Block 7 is requested:
• Cache Blocks: [7, 25, 12, 10, Empty, Empty, Empty, Empty]
6. Memory Block 19 is requested:
• Cache Blocks: [19, 7, 25, 12, 10, Empty, Empty, Empty]
7. Memory Block 6 is requested:
• Cache Blocks: [6, 19, 7, 25, 12, 10, Empty, Empty]
8. Memory Block 25 is requested again:
• Cache Blocks: [25, 6, 19, 7, 12, 10, Empty, Empty]
9. Memory Block 8 is requested:
• Cache Blocks: [8, 25, 6, 19, 7, 12, 10, Empty]
10. Memory Block 16 is requested:
• Cache Blocks: [16, 8, 25, 6, 19, 7, 12, 10]
11. Memory Block 35 is requested:
• Cache Blocks: [35, 16, 8, 25, 6, 19, 7, 12]
12. Memory Block 45 is requested:
• Cache Blocks: [45, 35, 16, 8, 25, 6, 19, 7]
13. Memory Block 22 is requested:
• Cache Blocks: [22, 45, 35, 16, 8, 25, 6, 19]
14. Memory Block 7 is requested again:
• Cache Blocks: [7, 22, 45, 35, 16, 8, 25, 6]
15. Memory Block 12 is requested again:
• Cache Blocks: [12, 7, 22, 45, 35, 16, 8, 25]
16. Memory Block 16 is requested again:
• Cache Blocks: [16, 12, 7, 22, 45, 35, 8, 25]
17. Memory Block 25 is requested again:
• Cache Blocks: [25, 16, 12, 7, 22, 45, 35, 8]
18. Memory Block 20 is requested:
• Cache Blocks: [20, 25, 16, 12, 7, 22, 45, 35]
19. Memory Block 11 is requested:
• Cache Blocks: [11, 20, 25, 16, 12, 7, 22, 45]

In this sequence of requests under the LRU policy, memory block 11 ends up in the first cache block: it is brought in by evicting block 35, which was the least recently used block at that point, and block 35 occupied the frame that was filled first (originally by block 10).
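The whole trace can be reproduced with a few lines (a sketch using an OrderedDict as the recency list):

```python
from collections import OrderedDict

def lru_final_state(requests, num_blocks=8):
    """Fully associative cache with LRU replacement; the OrderedDict
    keeps resident blocks ordered from least to most recently used."""
    cache = OrderedDict()
    for block in requests:
        if block in cache:
            cache.move_to_end(block)    # hit: mark as most recently used
            continue
        if len(cache) == num_blocks:
            cache.popitem(last=False)   # miss on a full cache: evict the LRU block
        cache[block] = True             # bring the new block in as MRU
    return list(cache)                  # LRU ... MRU

seq = [10, 12, 25, 7, 19, 6, 25, 8, 16, 35, 45, 22, 7, 12, 16, 25, 20, 11]
print(lru_final_state(seq))   # [45, 22, 7, 12, 16, 25, 20, 11] -- 11 is most recent
```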
8.b.) Write short notes on: i) SCSI ii) Interrupt-driven I/O

Ans: Small Computer System Interface (SCSI):

1. Purpose: SCSI is a standard interface used to connect computer peripherals like hard drives, tape drives,
scanners, and printers to a computer system.
2. Advantages:
• High-performance data transfer rates.
• Supports multiple devices on a single bus.
• Supports both parallel and serial implementations.
3. Types: SCSI has evolved through SCSI-1, SCSI-2, and SCSI-3, with each version offering improved
features.
4. Components:
• Initiators: Devices that initiate SCSI commands.
• Targets: Devices that respond to SCSI commands.
• SCSI Bus: The physical connection medium that facilitates data transfer between initiators and targets.
• Controllers: Interface cards or chips that manage SCSI communication.
5. Protocol: SCSI uses a protocol for communication between initiators and targets, including command
phases, data phases, and status phases.

Interrupt-driven I/O:

1. Concept: Interrupt-driven I/O is a mechanism where I/O devices interrupt the CPU to signal completion of
operations or to request attention.
2. Operation:
• When an I/O operation is initiated, the CPU hands over control and continues with other tasks.
• The I/O device performs the operation independently and interrupts the CPU upon completion or when
attention is needed.
3. Advantages:
• Efficient use of CPU time, as the CPU performs other tasks while waiting for I/O operations to complete.
• Allows concurrent processing of multiple I/O requests.
4. Components:
• Interrupt Controller: Manages interrupts from various devices and prioritizes them for CPU handling.
• Device Drivers: Software components that facilitate communication between the CPU and I/O devices,
including handling interrupts.
5. Interrupt Handling:
• Upon receiving an interrupt, the CPU saves its current state, processes the interrupt request, and then
resumes the interrupted task.
