COA GATE
Answer: 17
Explanation: Upload Soon
Assume that the content of the memory location 5000 is 10, and the content of the register R3 is 3000.
The content of each of the memory locations from 3000 to 3010 is 50. The instruction sequence starts
from the memory location 1000. All the numbers are in decimal format. Assume that the memory is
byte addressable. After the execution of the program, the content of memory location 3010 is _________.
Answer: 50
Explanation: Upload Soon
Answer: 17160
Explanation: Upload Soon
Answer: II
Explanation: Upload Soon
Answer: 80,000
Explanation: Upload Soon
Answer: 2
Explanation: Upload Soon
Assume that every MUL instruction is data-dependent on the ADD instruction just before it and every
ADD instruction (except the first ADD) is data-dependent on the MUL instruction just before it. The
Speedup is defined as follows:
Q8 | GATE 2020
Consider the following data path diagram:
Consider an instruction: R0 ← R1 + R2. The following steps are used to execute it over the given data
path. Assume that PC is incremented appropriately. The subscripts r and w indicate read and write
operations, respectively.
1. R2r, TEMP1r, ALUadd, TEMP2w
2. R1r, TEMP1w
3. PCr, MARw, MEMr
4. TEMP2r, R0w
5. MDRr, IRw
Which one of the following is the correct order of execution of the above steps?
i ➥ 2, 1, 4, 5, 3
ii ➥ 1, 2, 4, 3, 5
iii ➥ 3, 5, 1, 2, 4
iv ➥ 3, 5, 2, 1, 4
Answer: IV
Explanation: Upload Soon
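The answer (iv) orders the steps as instruction fetch (3, 5), operand setup (2), ALU operation (1), and write-back (4). A minimal Python sketch of that micro-operation sequence (the PC value and instruction encoding below are illustrative, not part of the question):

```python
# Sketch of option (iv): steps 3, 5, 2, 1, 4 for R0 <- R1 + R2.
# Registers are modeled as plain variables; the instruction encoding
# in memory is a placeholder, since only the data flow matters here.

def execute_add(r1, r2, pc=100):
    mem = {pc: "ADD R0, R1, R2"}      # hypothetical encoding at address PC
    regs = {"R1": r1, "R2": r2}

    # Step 3: PCr, MARw, MEMr -- MAR <- PC, start the memory read
    mar = pc
    mdr = mem[mar]
    # Step 5: MDRr, IRw -- IR <- MDR, the instruction is now available
    ir = mdr
    # Step 2: R1r, TEMP1w -- TEMP1 <- R1, first ALU operand staged
    temp1 = regs["R1"]
    # Step 1: R2r, TEMP1r, ALUadd, TEMP2w -- TEMP2 <- TEMP1 + R2
    temp2 = temp1 + regs["R2"]
    # Step 4: TEMP2r, R0w -- R0 <- TEMP2, result written back
    regs["R0"] = temp2
    return regs["R0"]

print(execute_add(7, 5))   # 12
```

Any order that reads an operand before the instruction has been fetched and decoded (options i and ii), or that stages R2 before TEMP1 holds R1 (option iii), breaks this data flow.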
Q9 | GATE 2020
Consider the following statements.
I. Daisy chaining is used to assign priorities in attending interrupts.
II. When a device raises a vectored interrupt, the CPU does polling to identify the source of the
interrupt.
III. In polling, the CPU periodically checks the status bits to know if any device needs its attention.
IV. During DMA, both the CPU and DMA controller can be bus masters at the same time.
Which of the above statements is/are TRUE?
ii ➥ I and IV only
iv ➥ III only
Answer: I
Explanation: Upload Soon
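Statements I and III describe standard mechanisms, and both can be sketched in a few lines of Python (the device layout and function names here are invented for illustration): in daisy chaining, the interrupt acknowledge propagates along the chain and is absorbed by the first requesting device, which is how chain position encodes priority; in polling, the CPU scans each device's status bit in software.

```python
# Illustrative only: each device is a boolean request/status flag,
# and index 0 is the device electrically closest to the CPU.

def daisy_chain_grant(requests):
    """Statement I: the acknowledge signal travels down the chain and
    is absorbed by the first (highest-priority) requesting device."""
    for position, requesting in enumerate(requests):
        if requesting:
            return position
    return None

def poll_status(status_bits):
    """Statement III: the CPU periodically checks every status bit to
    find devices that need attention."""
    return [i for i, ready in enumerate(status_bits) if ready]

print(daisy_chain_grant([False, True, True]))  # 1: device 1 outranks device 2
print(poll_status([True, False, True]))        # [0, 2]
```

Statement II is false precisely because a vectored interrupt supplies the source's identity directly, so no polling loop like `poll_status` is needed.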
Answer: III
Explanation: Upload Soon
Answer: 14
Explanation: Upload Soon
i ➥ C800 to CFFF
ii ➥ DA00 to DFFF
iv ➥ C800 to C8FF
Answer: I
Explanation: Upload Soon
Answer: II
Explanation: Upload Soon
Answer: 160
Explanation: Upload Soon
ii ➥ I, II and III
Answer: II
Explanation: Upload Soon
i ➥ QTPRS
ii ➥ TRPQS
iii ➥ PTRSQ
iv ➥ QPTRS
Answer: IV
Explanation: Upload Soon
i ➥ (D – M) / (X – M)
ii ➥ (X – M) / (D – M)
iii ➥ (D – X) / (D – M)
iv ➥ (X – M) / (D – X)
Answer: II
Explanation: Upload Soon
Answer: 59 to 60
Explanation: Upload Soon
i ➥ P – N – log2K
ii ➥ P – N + log2K
iii ➥ P – N – M – W – log2K
iv ➥ P – N – M – W + log2K
Answer: II
Explanation: Upload Soon
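Answer (ii) follows from the usual form of this question: a P-bit physical address, a cache of total size 2^N bytes, and K-way set associativity (these parameter meanings are assumed here, since the question text is not shown). Each of the K ways holds 2^N / K bytes, so the set-index and block-offset fields together need N − log₂K bits, and the tag takes whatever remains of the address:

```latex
\text{index} + \text{offset} = \log_2\frac{2^N}{K} = N - \log_2 K,
\qquad
\text{tag} = P - (N - \log_2 K) = P - N + \log_2 K
```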
Answer: 219
Explanation: Upload Soon
Answer: 32
Explanation: Upload Soon
The base address of student is available in register R1. The field student.grade can be accessed
efficiently using
i ➥ Index addressing mode, X(R1), where X is an offset represented in 2’s complement 16-bit
representation
Answer: I
Explanation: Upload Soon
Answer: 0.05
Explanation: Upload Soon
Answer: 14
Explanation: Upload Soon
(0, 128, 256, 128, 0, 128, 256, 128, 1, 129, 257, 129, 1, 129, 257, 129)
is repeated 10 times. The number of conflict misses experienced by the cache is _________.
Answer: 76
Explanation: Upload Soon
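The count of 76 can be reproduced by simulation under the parameters commonly cited for this question — a 2-way set-associative cache of 256 blocks with LRU replacement (assumed here, as the question text is not shown). The trace touches only six distinct blocks, so a fully associative cache of the same size would miss only on first reference; every repeat miss is therefore a conflict miss:

```python
from collections import OrderedDict

def count_misses(trace, repeats, num_blocks=256, ways=2):
    """Simulate a set-associative LRU cache and split misses into
    cold (first-ever reference to a block) and conflict (the rest)."""
    num_sets = num_blocks // ways
    sets = [OrderedDict() for _ in range(num_sets)]
    seen, cold, conflict = set(), 0, 0
    for _ in range(repeats):
        for block in trace:
            s = sets[block % num_sets]
            if block in s:
                s.move_to_end(block)          # hit: refresh LRU position
            else:
                if block in seen:
                    conflict += 1
                else:
                    cold += 1
                    seen.add(block)
                if len(s) == ways:
                    s.popitem(last=False)     # evict the least recently used
                s[block] = True
    return cold, conflict

trace = (0, 128, 256, 128, 0, 128, 256, 128,
         1, 129, 257, 129, 1, 129, 257, 129)
print(count_misses(trace, repeats=10))        # (6, 76)
```

Blocks 0, 128, and 256 all map to the same set (as do 1, 129, and 257), so three blocks contend for two ways and keep evicting one another — 76 conflict misses over the 10 repetitions.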
The speedup (correct to two decimal places) achieved by EP over NP in executing 20 independent
instructions with no hazards is _______.
Answer: 1.52
Explanation: Upload Soon
If the target of the branch instruction is i, then the decimal value of the Offset is ________.
Answer: -16
Explanation: Upload Soon
Answer: I
Explanation: Upload Soon
The read access time of main memory is 90 nanoseconds. Assume that the caches use the referred-
word-first read policy and the write back policy. Assume that all the caches are direct mapped caches.
Assume that the dirty bit is always 0 for all the blocks in the caches. In execution of a program, 60% of
memory reads are for instruction fetch and 40% are for memory operand fetch. The average read
access time in nanoseconds (up to 2 decimal places) is _________.
Answer: 4.72
Explanation: Upload Soon
Answer: 18
Explanation: Upload Soon
Answer: 31
Explanation: Upload Soon
Answer: 456
Explanation: Upload Soon
Answer: 33 to 34
Explanation: Upload Soon
Answer: 16
Explanation: Upload Soon
Answer: 28
Explanation: Upload Soon
Answer: 500
Explanation: Upload Soon
Answer: 24
Explanation: Upload Soon
Answer: 4
Explanation: Upload Soon
ii ➥ Either S2 or S3
Answer: I
Explanation: Upload Soon
Answer: 14020
Explanation: Upload Soon
Answer: 3.2
Explanation: Upload Soon
Answer: 14
Explanation: Upload Soon
Answer: 13
Explanation: Upload Soon
i ➥ (016A)16
ii ➥ (016C)16
iii ➥ (0170)16
iv ➥ (0172)16
Answer: IV
Explanation: Upload Soon
Answer: 22
Explanation: Upload Soon
i ➥ E, 201
ii ➥ F, 201
iii ➥ E, E20
iv ➥ 2, 01F
Answer: I
Explanation: Upload Soon
S3: Within an instruction pipeline an anti-dependence always creates one or more stalls.
Which of the above statements is/are correct?
i ➥ Only S1 is true
ii ➥ Only S2 is true
Answer: I
Explanation: Upload Soon
Answer: 3
Explanation: Upload Soon
Answer: 16383
Explanation: Upload Soon
Answer: 4
Explanation: Upload Soon
i ➥ n/N
ii ➥ 1/N
iii ➥ 1/A
iv ➥ k/n
Answer: I
Explanation: Upload Soon
Answer: 1.6
Explanation: Upload Soon
Answer: 20
Explanation: Upload Soon
ii ➥ A smaller block size implies a smaller cache tag and hence lower cache tag overhead
iii ➥ A smaller block size implies a larger cache tag and hence lower cache hit time
Answer: IV
Explanation: Upload Soon
Answer: 10000
Explanation: Upload Soon
i ➥ p1
ii ➥ p2
iii ➥ p3
iv ➥ p4
Answer: III
Explanation: Upload Soon
Answer: 1.68
Explanation: Upload Soon
Answer: I
Explanation: Upload Soon
i ➥ Instruction fetch
ii ➥ Operand fetch
Answer: IV
Explanation: Upload Soon
i ➥ 1281
ii ➥ 1282
iii ➥ 1283
iv ➥ 1284
Answer: IV
Explanation: Upload Soon
i ➥ 132
ii ➥ 165
iii ➥ 176
iv ➥ 328
Answer: II
Explanation: Upload Soon
i ➥ 4
ii ➥ 5
iii ➥ 6
iv ➥ 7
Answer: II
Explanation: Upload Soon
Suppose the instruction set architecture of the processor has only two registers. The only allowed
compiler optimization is code motion, which moves statements from one place to another while
preserving correctness. What is the minimum number of spills to memory in the compiled code?
i ➥ 0
ii ➥ 1
iii ➥ 2
iv ➥ 3
Answer: II
Explanation: Upload Soon
What is the minimum number of registers needed in the instruction set architecture of the processor
to compile this code segment without any spill to memory? Do not apply any optimization other than
optimizing register allocation.
i ➥ 3
ii ➥ 4
iii ➥ 5
iv ➥ 6
Answer: II
Explanation: Upload Soon
Answer: III
Explanation: Upload Soon
i ➥ 11
ii ➥ 14
iii ➥ 16
iv ➥ 27
Answer: III
Explanation: Upload Soon
i ➥ 160 Kbits
ii ➥ 136 Kbits
iii ➥ 40 Kbits
iv ➥ 32 Kbits
Answer: I
Explanation: Upload Soon
Answer: IV
Explanation: Upload Soon
i ➥ Immediate Addressing
ii ➥ Register Addressing
Answer: IV
Explanation: Upload Soon
Assume that each statement in this program is equivalent to a machine instruction that takes one clock cycle to execute if it is a non-load/store instruction. Load/store instructions take two clock cycles to execute.
The designer of the system also has an alternate approach of using DMA controller to implement the
same transfer. The DMA controller requires 20 clock cycles for initialization and other overheads. Each
DMA transfer cycle takes two clock cycles to transfer one byte of data from the device to the memory.
What is the approximate speedup when the DMA controller based design is used in place of the
interrupt driven program based input-output?
i ➥ 3.4
ii ➥ 4.4
iii ➥ 5.1
iv ➥ 6.7
Answer: I
Explanation: Upload Soon
What is the approximate speedup of the pipeline in steady state under ideal conditions when compared to the corresponding non-pipelined implementation?
i ➥ 4.0
ii ➥ 2.5
iii ➥ 1.1
iv ➥ 3.0
Answer: II
Explanation: Upload Soon
What is the total size of memory needed at the cache controller to store meta-data (tags) for the
cache?
i ➥ 4864 bits
ii ➥ 6144 bits
iv ➥ 5376 bits
Answer: IV
Explanation: Upload Soon
i ➥ 0.50 s
ii ➥ 1.50 s
iii ➥ 1.25 s
iv ➥ 1.00 s
Answer: II
Explanation: Upload Soon
i ➥ 13
ii ➥ 15
iii ➥ 17
iv ➥ 19
Answer: II
Explanation: Upload Soon
i ➥ 2
ii ➥ 3
iii ➥ 4
iv ➥ 6
Answer:
Explanation: Upload Soon
When there is a miss in L1 cache and a hit in L2 cache, a block is transferred from L2 cache to L1 cache. What is the time taken for this transfer?
i ➥ 2 nanoseconds
ii ➥ 20 nanoseconds
iii ➥ 22 nanoseconds
iv ➥ 88 nanoseconds
Answer: III
Explanation: Upload Soon
When there is a miss in L1 cache and a hit in L2 cache, a block is transferred from L2 cache to L1 cache. What is the time taken for this transfer?
i ➥ 222 nanoseconds
ii ➥ 888 nanoseconds
iv ➥ 968 nanoseconds
Answer: III
Explanation: Upload Soon
Copyright © 2024 SamagraCS
Educational Technology