Question


OS (Operating Systems)

[3] (a) Define DMA (Direct Memory Access).

      (b) An input device can transmit 100 characters every 4 milliseconds, and the CPU needs 2 microseconds to service each interrupt. How much time is left for asynchronous I/O? What if the device can transmit 1000 characters every 4 milliseconds and the CPU still needs 2 microseconds for the service routine?

            What do you think? Explain your answer in detail.

[4]  (a) What are the interrupt types? Give examples.

       

        (b) What type of interrupt might the following pieces of C code generate? Explain your answer.

(i)

int n = 0;

while (n = 0)

    printf("Hello");

(ii)

int n = 1, m = 0, x = 10;

while (n / !x < 0.001)
{
    m += x;
    n++;
}

       [5] Draw the following, briefly explaining what is happening:

  1. Interrupt Vector
  2. Instruction Cycle with the instruction cache register.
  3. System Queues and Scheduling.

Solutions

Expert Solution

3. A) Direct memory access (DMA):

It is a method that allows an input/output (I/O) device to send or receive data directly to or from the main memory, bypassing the CPU to speed up memory operations.

The process is managed by a chip known as a DMA controller (DMAC).
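To make this concrete, here is a minimal sketch of how a driver might start such a transfer. The register layout, field names, and MMIO base address below are invented for illustration; real DMA controllers are device-specific:

    #include <stdint.h>

    /* Hypothetical memory-mapped DMA controller registers; the layout
       and base address are made up for this sketch. */
    typedef struct {
        volatile uint32_t src;     /* source address (e.g., device buffer)  */
        volatile uint32_t dst;     /* destination address in main memory    */
        volatile uint32_t count;   /* number of bytes to transfer           */
        volatile uint32_t control; /* bit 0 starts the transfer; the DMAC   */
                                   /* raises an interrupt when it completes */
    } dmac_regs_t;

    #define DMAC_BASE  ((dmac_regs_t *)0x40001000u)  /* made-up MMIO address */
    #define DMAC_START 0x1u

    void dma_copy_to_memory(uint32_t device_buf, uint32_t mem_buf, uint32_t nbytes)
    {
        dmac_regs_t *dmac = DMAC_BASE;
        dmac->src     = device_buf;  /* where the data currently is         */
        dmac->dst     = mem_buf;     /* where in main memory it should go   */
        dmac->count   = nbytes;
        dmac->control = DMAC_START;  /* kick off the transfer; the CPU is
                                        now free to run other code until
                                        the completion interrupt arrives    */
    }

Once the transfer is started, the CPU only gets involved again when the DMAC's completion interrupt fires, which is exactly why DMA frees the CPU for other work.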

B)

In computer science, asynchronous I/O (also non-sequential I/O) is a form of input/output processing that permits other processing to continue before the transmission has finished.

Input and output (I/O) operations on a computer can be extremely slow compared to the processing of data. An I/O device can incorporate mechanical devices that must physically move, such as a hard drive seeking a track to read or write; this is often orders of magnitude slower than the switching of electric current. For example, during a disk operation that takes ten milliseconds to perform, a processor that is clocked at one gigahertz could have performed ten million instruction-processing cycles.

A simple approach to I/O would be to start the access and then wait for it to complete. But such an approach (called synchronous I/O, or blocking I/O) would block the progress of a program while the communication is in progress, leaving system resources idle. When a program makes many I/O operations (such as a program mainly or largely dependent on user input), this means that the processor can spend almost all of its time idle waiting for I/O operations to complete.

Many operating system functions exist to implement asynchronous I/O at many levels. In fact, one of the main functions of all but the most rudimentary of operating systems is to perform at least some form of basic asynchronous I/O, though this may not be particularly apparent to the user or the programmer. In the simplest software solution, the hardware device status is polled at intervals to detect whether the device is ready for its next operation. (For example, the CP/M operating system was built this way. Its system call semantics did not require any more elaborate I/O structure than this, though most implementations were more complex, and thereby more efficient.) Direct memory access (DMA) can greatly increase the efficiency of a polling-based system, and hardware interrupts can eliminate the need for polling entirely.
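On one reading of the garbled numbers in part (b) ("mils" as milliseconds, "mics" as microseconds), the arithmetic works out as follows:

    100 characters / 4 ms   -> one character every 4000/100  = 40 us
    interrupt service: 2 us -> 38 of every 40 us free  (95% of the CPU)

    1000 characters / 4 ms  -> one character every 4000/1000 = 4 us
    interrupt service: 2 us -> 2 of every 4 us free    (50% of the CPU)

So at the lower rate, 95% of the CPU remains available for asynchronous I/O and other work; at the higher rate, only 50% does. Interrupt-per-character I/O becomes expensive for the faster device, which is precisely where DMA pays off.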

4. A) Interrupt:

An interrupt is a signal, from hardware or software, that has the highest priority and that the processor must service immediately.

Types of Interrupts:

Although interrupts have higher priority than other signals, there are many types of interrupts; the basic types are:

  1. Hardware Interrupts: If the signal to the processor comes from an external device or piece of hardware, it is called a hardware interrupt. Example: pressing a key on the keyboard generates a signal that is sent to the processor so it can act on it; such interrupts are hardware interrupts. Hardware interrupts can be classified into two types:
    • Maskable Interrupt: a hardware interrupt that can be delayed when a higher-priority interrupt occurs.
    • Non-Maskable Interrupt: a hardware interrupt that cannot be delayed and must be processed by the processor immediately.
  2. Software Interrupts: Software interrupts can also be divided into two types (a user-level sketch follows this list):
    • Normal Interrupts: interrupts caused by software instructions (such as the x86 INT instruction) are called normal or software interrupts.
    • Exception: an unplanned interrupt produced while a program is running, such as a divide-by-zero error.
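As a user-level illustration (a sketch only: POSIX signals are the operating system's software analogue of interrupts, not the hardware mechanism itself), SIGINT normally originates from hardware, namely the keyboard's Ctrl-C, while raise() generates the same signal purely in software:

    #include <signal.h>
    #include <unistd.h>

    /* Handler shared by both deliveries. write() is async-signal-safe,
       unlike printf(). */
    static void handler(int sig)
    {
        (void)sig;
        const char msg[] = "interrupt received\n";
        write(1, msg, sizeof msg - 1);
    }

    int main(void)
    {
        signal(SIGINT, handler);  /* hardware-originated: Ctrl-C from keyboard */
        raise(SIGINT);            /* software-originated: same signal, raised  */
                                  /* by an explicit instruction in the program */
        return 0;
    }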

B) 1.) A hardware interrupt is often created by an input device such as a mouse or keyboard. For example, if you are using a word processor and press a key, the program must process the input immediately. Typing "hello" creates five interrupt requests, which allow the program to display the letters you typed.

2.) Software interrupts are used to handle errors and exceptions that occur while a program is running. For example, if a program expects a variable to be a valid number, but the value is null, an interrupt may be generated to prevent the program from crashing. It allows the program to change course and handle the error before continuing. Similarly, an interrupt can be used to break an infinite loop, which could create a memory leak or cause a program to be unresponsive.
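Relating this back to code (ii): with x equal to 10, !x is 0, so n / !x divides by zero. Integer division by zero is formally undefined behavior in C, but on typical desktop systems it traps and is delivered to the process as SIGFPE, which a handler can intercept. A minimal sketch (volatile keeps the compiler from folding the division away):

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Intercept the divide-by-zero trap instead of crashing. */
    static void on_fpe(int sig)
    {
        (void)sig;
        const char msg[] = "caught SIGFPE: divide by zero\n";
        write(2, msg, sizeof msg - 1);
        _exit(1);  /* returning from a SIGFPE handler is undefined */
    }

    int main(void)
    {
        signal(SIGFPE, on_fpe);
        volatile int n = 1, x = 10;
        int bad = n / !x;     /* !x == 0, so this traps, as in code (ii) */
        printf("%d\n", bad);  /* never reached */
        return 0;
    }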

5. 1.) Interrupt vectors:

The interrupt vectors and vector table are crucial to the understanding of hardware and software interrupts. Interrupt vectors are addresses that inform the interrupt handler as to where to find the ISR (interrupt service routine, also called interrupt service procedure). All interrupts are assigned a number from 0 to 255, with each of these interrupts being associated with a specific interrupt vector.

The interrupt vector table is normally located in the first 1024 bytes of memory at addresses 000000H–0003FFH. It contains 256 different interrupt vectors. Each vector is 4 bytes long and contains the starting address of the ISR. This starting address consists of the segment and offset of the ISR.
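A small C model may help; the vector contents below are made up, but the address arithmetic (vector n lives at physical address 4 * n, and the ISR starts at segment * 16 + offset) follows the real-mode layout described above:

    #include <stdint.h>
    #include <stdio.h>

    /* One real-mode interrupt vector: a 16-bit offset followed by a
       16-bit segment (4 bytes in total). */
    typedef struct {
        uint16_t offset;   /* offset of the ISR within its segment */
        uint16_t segment;  /* code segment of the ISR              */
    } vector_t;

    /* Vector n is stored at physical address 4 * n. */
    static uint32_t vector_address(uint8_t n) { return 4u * n; }

    /* Real-mode physical start address of the ISR: segment * 16 + offset. */
    static uint32_t isr_address(vector_t v)
    {
        return ((uint32_t)v.segment << 4) + v.offset;
    }

    int main(void)
    {
        vector_t v = { 0x1234, 0xF000 };  /* made-up vector contents */
        printf("vector 0x21 is stored at 0x%05X\n",
               (unsigned)vector_address(0x21));
        printf("its ISR begins at       0x%05X\n",
               (unsigned)isr_address(v));
        return 0;
    }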

2.) The instruction cycle (also known as the fetch–decode–execute cycle, or simply the fetch-execute cycle) is the cycle that the central processing unit (CPU) follows from boot-up until the computer has shut down in order to process instructions. It is composed of three main stages: the fetch stage, the decode stage, and the execute stage.

[Diagram: the individual stages of the fetch-decode-execute cycle.]

In simpler CPUs, the instruction cycle is executed sequentially, each instruction being processed before the next one is started. In most modern CPUs, the instruction cycles are instead executed concurrently, and often in parallel, through an instruction pipeline: the next instruction starts being processed before the previous instruction has finished, which is possible because the cycle is broken up into separate steps.
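As an illustration, here is a minimal sketch of the sequential cycle as a software loop; the tiny accumulator ISA (opcodes and the program in memory) is invented purely for this example:

    #include <stdint.h>
    #include <stdio.h>

    /* A software model of the fetch-decode-execute cycle. Made-up ISA:
       0 = HALT, 1 = LOAD imm, 2 = ADD imm, 3 = PRINT. */
    int main(void)
    {
        uint8_t memory[] = { 1, 5,    /* LOAD 5  */
                             2, 7,    /* ADD 7   */
                             3,       /* PRINT   */
                             0 };     /* HALT    */
        int pc = 0, acc = 0, running = 1;

        while (running) {
            uint8_t opcode = memory[pc++];          /* FETCH              */
            switch (opcode) {                       /* DECODE             */
            case 1: acc  = memory[pc++]; break;     /* EXECUTE: load      */
            case 2: acc += memory[pc++]; break;     /* EXECUTE: add       */
            case 3: printf("acc = %d\n", acc); break;
            default: running = 0; break;            /* HALT ends the loop */
            }
        }
        return 0;
    }

Running it prints acc = 12; a pipelined CPU would overlap the fetch of one instruction with the decode and execute of earlier ones instead of running these steps strictly one after another.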

3.) The ready queue is divided into separate queues for each class of processes. For example, take three different types of processes: system processes, interactive processes, and batch processes. ... Each queue can have its own scheduling algorithm: for example, queues 1 and 2 use Round Robin, while queue 3 can use FCFS to schedule their processes.
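A minimal sketch of this idea follows; the process names, burst lengths, queue assignments, and 2-unit quantum are invented for illustration:

    #include <stdio.h>

    /* Toy multilevel queue scheduler: queues 0 (system) and 1
       (interactive) use round robin; queue 2 (batch) uses FCFS.
       The highest-priority non-empty queue always runs. */
    #define NQUEUES 3
    #define QUANTUM 2

    typedef struct { const char *name; int remaining; } proc_t;

    int main(void)
    {
        proc_t q0[] = { { "sys1", 3 } };                  /* system      */
        proc_t q1[] = { { "edit", 4 }, { "shell", 2 } };  /* interactive */
        proc_t q2[] = { { "batch1", 5 } };                /* batch       */
        proc_t *queues[NQUEUES] = { q0, q1, q2 };
        int counts[NQUEUES] = { 1, 2, 1 };
        int head[NQUEUES]   = { 0, 0, 0 };

        for (;;) {
            int q;
            for (q = 0; q < NQUEUES; q++)   /* highest non-empty queue */
                if (head[q] < counts[q])
                    break;
            if (q == NQUEUES)
                break;                      /* all queues drained */

            proc_t *p = &queues[q][head[q]];
            /* Batch (queue 2) runs to completion (FCFS); the others get
               at most one quantum (round robin). */
            int slice = (q == 2) ? p->remaining
                       : (p->remaining < QUANTUM ? p->remaining : QUANTUM);
            p->remaining -= slice;
            printf("queue %d ran %-6s for %d units (%d left)\n",
                   q, p->name, slice, p->remaining);

            if (p->remaining == 0) {
                head[q]++;                  /* finished: drop from queue */
            } else {
                /* Round robin: rotate the process to the back of its queue. */
                proc_t tmp = *p;
                for (int i = head[q]; i < counts[q] - 1; i++)
                    queues[q][i] = queues[q][i + 1];
                queues[q][counts[q] - 1] = tmp;
            }
        }
        return 0;
    }

Note how the batch process only runs once the system and interactive queues are empty, which is the defining behavior of fixed-priority multilevel queue scheduling.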

