CO |Gate-2017| Previous Year Questions| Set-4



1. Consider the C struct defined below:

    struct data {
        int  marks[100];
        char grade;
        int  cnumber;
    };
    struct data student;

The base address of student is available in register R1. The field student.grade can be accessed efficiently using: [GATE – 2017]

a. Pre-decrement addressing mode, -(R1)
b. Post-increment addressing mode, (R1)+
c. Register direct addressing mode, R1
d. Index addressing mode, X(R1), where X is an offset represented in 2’s complement 16-bit representation

Answer : d)
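
A quick way to see why index addressing fits here is that the displacement of grade from the base of the struct is a compile-time constant. The C sketch below is not part of the original question; it assumes 4-byte ints and typical padding, and simply prints that fixed offset via offsetof. The hardware equivalent is X(R1) with X equal to this constant.

    #include <stdio.h>
    #include <stddef.h>

    /* Same layout as in the question: 100 ints followed by a char and an int. */
    struct data {
        int  marks[100];
        char grade;
        int  cnumber;
    };

    int main(void)
    {
        /* offsetof() gives the fixed, compile-time displacement of each field.
           With 4-byte ints, student.grade sits at offset 400 from the base, so
           it can be reached as X(R1) with X = 400, which fits comfortably in a
           signed 16-bit displacement.                                          */
        printf("offset of grade   = %zu\n", offsetof(struct data, grade));
        printf("offset of cnumber = %zu\n", offsetof(struct data, cnumber));
        return 0;
    }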


2. Consider a two-level cache hierarchy with L1 and L2 caches. An application incurs 1.4 memory accesses per instruction on average. For this application, the miss rate of L1 cache is 0.1; the L2 cache experiences, on average, 7 misses per 1000 instructions. The miss rate of L2 expressed correct to two decimal places is __________. [GATE – 2017]

a. 0.05
b. 0.06
c. 0.07
d. 0.08

Answer : a)
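
Worked calculation as a small C sketch (variable names are my own): L1 sees 1000 × 1.4 × 0.1 = 140 misses per 1000 instructions, every one of which reaches L2, so the local L2 miss rate is 7 / 140.

    #include <stdio.h>

    int main(void)
    {
        double accesses_per_instr = 1.4;   /* memory accesses per instruction   */
        double l1_miss_rate       = 0.1;   /* fraction of accesses that miss L1 */
        double l2_mpki            = 7.0;   /* L2 misses per 1000 instructions   */

        /* L1 misses per 1000 instructions = 1000 * 1.4 * 0.1 = 140.
           Every L1 miss goes to L2, so the local L2 miss rate is 7 / 140.      */
        double l1_mpki      = 1000.0 * accesses_per_instr * l1_miss_rate;
        double l2_miss_rate = l2_mpki / l1_mpki;

        printf("L2 miss rate = %.2f\n", l2_miss_rate);   /* prints 0.05 */
        return 0;
    }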


3. Consider a RISC machine where each instruction is exactly 4 bytes long. Conditional and unconditional branch instructions use PC-relative addressing mode with the Offset specified in bytes to the target location of the branch instruction. Further, the Offset is always with respect to the address of the next instruction in the program sequence. Consider the following instruction sequence:

Instr. No.        Instruction

I     :           add R2, R3, R4
I+1   :           sub R5, R6, R7
I+2   :           cmp R1, R9, R10
I+3   :           beq R1, Offset

If the target of the branch instruction is I, then the decimal value of the Offset is ___________. [GATE – 2017]

a. -15
b. -16
c. -17
d. -18

Answer : b)
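
The key step is that the offset is measured from the instruction after the branch, i.e. from I+4. A minimal C sketch of that arithmetic (variable names are mine):

    #include <stdio.h>

    int main(void)
    {
        int instr_size = 4;                 /* every instruction is 4 bytes */
        int branch_at  = 3;                 /* beq is instruction I+3       */
        int target     = 0;                 /* branch target is instruction I */

        /* PC-relative offset is measured from the instruction AFTER the branch,
           i.e. from I+4.  Offset = (target - (branch + 1)) * 4 bytes.           */
        int offset = (target - (branch_at + 1)) * instr_size;

        printf("Offset = %d bytes\n", offset);   /* prints -16 */
        return 0;
    }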


4. Instruction execution in a processor is divided into 5 stages: Instruction Fetch (IF), Instruction Decode (ID), Operand Fetch (OF), Execute (EX), and Write Back (WB). These stages take 5, 4, 20, 10 and 3 nanoseconds (ns), respectively. A pipelined implementation of the processor requires buffering between each pair of consecutive stages with a delay of 2 ns. Two pipelined implementations of the processor are contemplated:

(i) a naive pipeline implementation (NP) with 5 stages and
(ii) an efficient pipeline (EP) where the OF stage is divided into stages OF1 and OF2 with execution times of 12 ns and 8 ns respectively.

The speedup (correct to two decimal places) achieved by EP over NP in executing 20 independent instructions with no hazards is _________. [GATE – 2017]

a. 1.51
b. 1.52
c. 1.53
d. 1.54

Answer : a)
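
Each pipeline's cycle time is its slowest stage plus the 2 ns buffer, and n independent instructions need (k + n - 1) cycles on a k-stage pipeline with no hazards. A small C sketch of that calculation (the helper name is my own):

    #include <stdio.h>

    /* Time to run n independent instructions on a k-stage pipeline whose
       cycle time is the slowest stage plus the inter-stage buffer delay. */
    static double pipeline_time(const double *stage, int k, int n, double buffer)
    {
        double slowest = 0.0;
        for (int i = 0; i < k; i++)
            if (stage[i] > slowest)
                slowest = stage[i];
        double cycle = slowest + buffer;
        return (k + n - 1) * cycle;        /* k cycles to fill + (n-1) to drain */
    }

    int main(void)
    {
        double np[5] = {5, 4, 20, 10, 3};          /* naive pipeline              */
        double ep[6] = {5, 4, 12, 8, 10, 3};       /* OF split into OF1 and OF2   */

        double t_np = pipeline_time(np, 5, 20, 2); /* (5+19)*22 = 528 ns */
        double t_ep = pipeline_time(ep, 6, 20, 2); /* (6+19)*14 = 350 ns */

        printf("speedup = %.2f\n", t_np / t_ep);   /* prints 1.51 */
        return 0;
    }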


5. Consider a 2-way set associative cache with 256 blocks that uses LRU replacement. Initially the cache is empty. Conflict misses are those misses which occur due to contention of multiple blocks for the same cache set. Compulsory misses occur due to first-time access to the block. The following sequence of accesses to memory blocks

     (0, 128, 256, 128, 0, 128, 256, 128, 1, 129, 257, 129, 1, 129, 257, 129)

is repeated 10 times. The number of conflict misses experienced by the cache is __________. [GATE – 2017]

a. 79
b. 82
c. 76
d. 80

Answer : c)
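
One way to check the count is to simulate the 2-way LRU cache directly. The sketch below is my own code, not from the source; it counts every miss to a previously seen block as a conflict miss, which matches the question's definitions here because only six distinct blocks are touched and capacity is never an issue.

    #include <stdio.h>
    #include <string.h>

    #define SETS 128     /* 256 blocks, 2-way => 128 sets */
    #define WAYS 2

    int main(void)
    {
        int trace[16] = {0, 128, 256, 128, 0, 128, 256, 128,
                         1, 129, 257, 129, 1, 129, 257, 129};

        int line[SETS][WAYS];              /* block number stored in each way */
        int lru[SETS];                     /* index of the LRU way per set    */
        int seen[512] = {0};               /* marks first-time (compulsory)   */
        memset(line, -1, sizeof line);
        memset(lru, 0, sizeof lru);

        int conflict = 0, compulsory = 0;
        for (int rep = 0; rep < 10; rep++) {
            for (int i = 0; i < 16; i++) {
                int blk = trace[i], set = blk % SETS, hit_way = -1;
                for (int w = 0; w < WAYS; w++)
                    if (line[set][w] == blk)
                        hit_way = w;
                if (hit_way >= 0) {        /* hit: the other way becomes LRU  */
                    lru[set] = 1 - hit_way;
                } else {                   /* miss: classify, then fill LRU   */
                    if (seen[blk]) conflict++;
                    else { compulsory++; seen[blk] = 1; }
                    line[set][lru[set]] = blk;
                    lru[set] = 1 - lru[set];
                }
            }
        }
        printf("compulsory = %d, conflict = %d\n", compulsory, conflict);
        return 0;   /* prints compulsory = 6, conflict = 76 */
    }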


6. A cache memory unit with capacity of N words and block size of B words is to be designed. If it is designed as a direct mapped cache, the length of the TAG field is 10 bits. If the cache unit is now designed as a 16-way set-associative cache, the length of the TAG field is ___________ bits. [GATE – 2017]

a. 12
b. 13
c. 15
d. 14

Answer : d)
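
Since capacity N and block size B are unchanged, moving from direct mapped to 16-way divides the number of sets by 16, so log2(16) = 4 index bits migrate into the tag. A tiny C sketch of the arithmetic:

    #include <stdio.h>

    int main(void)
    {
        /* Same capacity and block size, only the associativity changes.
           sets = N / (B * ways), so multiplying 'ways' by 16 removes
           log2(16) = 4 index bits; those 4 bits move into the tag.     */
        int tag_direct = 10;
        int ways_log2  = 4;               /* log2(16) */
        int tag_16way  = tag_direct + ways_log2;

        printf("tag bits (16-way) = %d\n", tag_16way);   /* prints 14 */
        return 0;
    }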


7. In a two-level cache system, the access times of L1 and L2 caches are 1 and 8 clock cycles, respectively. The miss penalty from the L2 cache to main memory is 18 clock cycles. The miss rate of L1 cache is twice that of L2. The average memory access time (AMAT) of this cache system is 2 cycles. The miss rates of L1 and L2 respectively are: [GATE – 2017]

a. 0.056 and 0.111
b. 0.111 and 0.056
c. 0.0892 and 0.1784
d. 0.1784 and 0.0892

Answer : b)
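
Using the usual two-level AMAT model, 1 + m1*(8 + 18*m2) = 2 with m1 = 2*m2 gives the quadratic 36*m2^2 + 16*m2 - 1 = 0. A small C sketch (variable names are mine; link with -lm for sqrt) solves it:

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        /* AMAT = tL1 + m1*(tL2 + m2*penalty), with m1 = 2*m2 and AMAT = 2.
           Substituting gives 36*m2^2 + 16*m2 - 1 = 0; take the positive root. */
        double a = 36.0, b = 16.0, c = -1.0;
        double m2 = (-b + sqrt(b * b - 4.0 * a * c)) / (2.0 * a);
        double m1 = 2.0 * m2;

        printf("L1 miss rate = %.3f, L2 miss rate = %.3f\n", m1, m2);
        /* prints L1 miss rate = 0.111, L2 miss rate = 0.056 */
        return 0;
    }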


8. Consider a machine with a byte-addressable main memory of 2^32 bytes divided into blocks of size 32 bytes. Assume that a direct mapped cache having 512 cache lines is used with this machine. The size of the tag field in bits is _____________. [GATE – 2017]

a. 18
b. 19
c. 20
d. 21

Answer : a)
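
The 32-bit address splits into tag, index, and block offset; with 32-byte blocks and 512 lines that leaves 32 - 9 - 5 tag bits. A one-line check in C:

    #include <stdio.h>

    int main(void)
    {
        int address_bits = 32;   /* byte-addressable memory of 2^32 bytes */
        int offset_bits  = 5;    /* 32-byte blocks  -> log2(32)  = 5      */
        int index_bits   = 9;    /* 512 cache lines -> log2(512) = 9      */

        printf("tag bits = %d\n", address_bits - index_bits - offset_bits);
        /* prints 18 */
        return 0;
    }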


9. The read access times and the hit ratios for different caches in a memory hierarchy are as given below.

Cache        Read access time (in nanoseconds)    Hit ratio
I-cache      2                                     0.8
D-cache      2                                     0.9
L2-cache     8                                     0.9
The read access time of main memory is 90 nanoseconds. Assume that the caches use the referred-word-first read policy and the write-back policy. Assume that all the caches are direct mapped caches and that the dirty bit is always 0 for all the blocks in the caches. In execution of a program, 60% of memory reads are for instruction fetch and 40% are for memory operand fetch. The average read access time in nanoseconds (up to 2 decimal places) is ___________. [GATE – 2017]

a. 4.72
b. 4.73
c. 4.74
d. 4.75

Answer : a)
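
The average weighs instruction and data reads separately, with both L1 caches backed by the same L2 and main memory; the dirty bit being 0 means a miss simply falls through to the next level. A short C sketch of the computation (names are mine):

    #include <stdio.h>

    int main(void)
    {
        double t_l1 = 2.0, t_l2 = 8.0, t_mem = 90.0;
        double miss_l2 = 1.0 - 0.9;

        double l2_and_below = t_l2 + miss_l2 * t_mem;            /* 8 + 9 = 17   */
        double t_ifetch = t_l1 + (1.0 - 0.8) * l2_and_below;     /* 2 + 3.4 = 5.4 */
        double t_dfetch = t_l1 + (1.0 - 0.9) * l2_and_below;     /* 2 + 1.7 = 3.7 */

        double avg = 0.6 * t_ifetch + 0.4 * t_dfetch;
        printf("average read time = %.2f ns\n", avg);            /* prints 4.72 */
        return 0;
    }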

