Problem 2. Interrupt-driven I/O (20 pt.)
Consider a system employing interrupt-driven I/O for a particular device that transfers data at an average of 2000 bytes/sec on a continuous basis.
a. Assume that interrupt processing takes about 200 usec (i.e., the time to jump to the interrupt service routine (ISR), execute it, and return to the main program). Determine what fraction of processor time is consumed by this I/O device if it interrupts for every byte.
b. Now assume that the device has two 8-byte buffers and interrupts the processor when one of the buffers is full. Naturally, interrupt processing takes longer because the ISR must transfer 8 bytes. While executing the ISR, the processor takes about 10 usec for the transfer of each byte. Determine what fraction of processor time is consumed by this I/O device in this case.
Since the device transfers data at an average of 2000 bytes/sec and interrupts for every byte, the processor receives 2000 interrupts per second.
Converting to usec (1 usec = 10^-6 sec), the interval between interrupts is:
1/2000 sec = 500 usec
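As a quick sanity check, here is a minimal Python sketch of this interval calculation (the variable names are illustrative, not part of the problem):

    # One interrupt per byte at 2000 bytes/sec.
    byte_rate = 2000                       # bytes per second
    interval_usec = (1 / byte_rate) * 1e6  # seconds -> microseconds
    print(interval_usec)                   # 500.0 usec between interrupts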
a)
Interrupt processing takes about 200 usec, and the interval between interrupts is 500 usec. The fraction of processor time consumed by this device
= interrupt processing time / interval between interrupts
= 200/500
= 0.4 (i.e., 40%)
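The same arithmetic for part (a) in Python (again a sketch; the names are mine):

    # Part (a): fraction of processor time spent servicing interrupts.
    isr_time_usec = 200   # interrupt processing time per byte (given)
    interval_usec = 500   # interval between interrupts (computed above)
    fraction = isr_time_usec / interval_usec
    print(fraction)       # 0.4 -> 40% of processor time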
b) In this case, the device interrupts once per full 8-byte buffer, so the time interval between interrupts = 8 * 500 = 4000 usec.
Each interrupt now requires 200 usec, which covers the first byte, plus 10 usec for transferring each remaining byte (8 - 1 = 7 bytes):
200 * 1 + 10 * 7 = 270 usec
Therefore, the fraction of processor time
= time required for interrupt processing / interval between the interrupts
= 270/4000
= 0.0675 (about 6.75%)
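And the part (b) calculation in Python, under the same assumption used above that the 200 usec of base interrupt processing covers the first byte:

    # Part (b): one interrupt per full 8-byte buffer.
    buffer_size = 8
    interval_usec = buffer_size * 500             # 4000 usec between interrupts
    isr_time_usec = 200 + 10 * (buffer_size - 1)  # base ISR time + 10 usec per remaining byte = 270 usec
    fraction = isr_time_usec / interval_usec
    print(round(fraction, 4))                     # 0.0675 -> about 6.75%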