
Parallel Systems



Name   : Rajvikram Singh

Course  : EECS 371 - Operating Systems

Due date : Sept 15, 2000



Q 1:  Explain the difference between a trap and an interrupt. Under what circumstances can each of them be generated? 


      An interrupt is caused by a hardware or software event, such as a device having input data ready or a device driver request. It is an asynchronous, exceptional event: the hardware automatically saves the current CPU state (accumulator, PSW, and all other registers) so the interrupted code can resume later. The CPU then refers to the system interrupt vector table to jump to the appropriate ISR (interrupt service routine), which is responsible for servicing the request.

   Interrupts can be broadly classified by the cause of their generation into the following types:

    • I/O-generated interrupts.

         - Incoming data ready to be read (e.g. from a network card or keyboard).

         - To indicate to the CPU that a job assigned to the device is finished.

         - To indicate the initialization of a device (or its readiness to accept commands).

    • Program interrupts.

         - Divide-by-zero error.

         - Memory protection fault.

         - CPU protection violation.

    • Other external interrupts.

         - Interval timer going off.

         - Operator's interrupt button.

         - CPU-to-CPU communication interrupt.

            Hardware may trigger an interrupt at any time by sending a signal to the CPU, usually by way of the system bus. Software may trigger an interrupt by executing a special operation called a system call.

            Software interrupts, or traps as they are called, are initiated as system calls. A system call differs from a hardware interrupt in that it is a function called by an application to invoke a kernel service. The system call checks the arguments given by the application, builds a data structure to convey the arguments to the kernel, and then executes the trap instruction. When the trap executes, the interrupt hardware saves the state of the user code, switches to supervisor mode, and dispatches to the kernel routine that implements the requested service. The trap is given a relatively low priority compared to the device interrupts, since executing a system call on behalf of an application is less urgent than servicing a device controller before its FIFO buffer overflows and loses data.

      Traps can be generated as a result of any of the following events:

        • A page fault in virtual memory, which raises an exception.
        • A call to open, close, read, or write a device.
        • A request to read a file from the file system.

      Q 2:  What is a system call? Who makes it and how is it processed?


         System calls are functions that provide an interface for user-level programs to invoke kernel services. They can be thought of as SAPs (service access points) for the kernel. Application programs can use kernel services only through the system calls provided by the OS. The various system calls can be broadly classified into five major categories: process control, file manipulation, device manipulation, information maintenance, and communication.

            Thus user programs communicate with the operating system, and request services from it, by making system calls. Corresponding to each system call is a library procedure that user programs can call. The procedure puts the parameters of the system call in a specified place, such as the machine registers, and then issues a trap instruction.

         The mechanism for a system call involves building a data structure to convey the arguments from the application to the kernel and then issuing a trap. When the trap executes, the OS saves the state of the user code, switches to supervisor mode, and dispatches to the kernel routine that implements the requested service. When the routine finishes, the OS puts the status code in a register and executes a return-from-trap instruction to pass control back to the library procedure. The library procedure in turn returns to the calling user program with the status as the function's return value.
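As a concrete illustration of the library-procedure layer, Python's `os.write` is such a wrapper: it marshals the file descriptor and buffer, issues the underlying write system call, and the kernel's result comes back as the function's return value.

```python
import os
import tempfile

# os.write is the library wrapper around the kernel's write service: it
# conveys the arguments to the kernel via a trap, and the kernel's status
# (here, the number of bytes written) becomes the function's return value.
fd, path = tempfile.mkstemp()
n = os.write(fd, b"hello")   # traps into the kernel
os.close(fd)
os.remove(path)
```

From the program's point of view the trap is invisible; it simply called a function and got a result.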

      Q 3:  What is a system program? What is the relationship/difference between a system call and a system program? 


            System programs sit on top of the OS and below the application programs in the logical hierarchy. They are utilities provided to ease the tasks of system administration/management and program development.

      Relationship: System calls are closely related to system programs in that system programs use system calls to provide their services. They use the services of the OS to provide the following classes of utilities:

      • File Manipulation
      • Status information
      • File modification
      • Programming language support
      • Program loading and execution
      • Communications
      • Application programs.

      Difference: System programs are generally provided as command-line utilities or services, whereas system calls are exposed to programmers as library calls.
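To make the relationship concrete, here is a toy version of a system program (a minimal `ls`): it is ordinary user-level code whose real work is done by system calls, here reached through `os.scandir`, which wraps the kernel's directory-reading service. The function name is invented for illustration.

```python
import os

def tiny_ls(path="."):
    """Toy `ls`: a user-level system program built entirely on the
    directory-reading system calls wrapped by os.scandir."""
    return sorted(entry.name for entry in os.scandir(path))

names = tiny_ls(".")   # the same names a real `ls` would print
```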

      Q 4: Explain the concept of a process image. Where does it exist and how is it structured?

      Ans: A process can be described as a program in execution. When a program has been loaded into memory from disk, it is no longer a passive string of bytes but has been given a defined space in the system. The process space is allocated resources such as virtual memory or I/O as required. Since the process runs concurrently with many other processes on the system, the OS keeps data structures in the process's space to track its condition.

             This space can be defined as the process image, and it exists in system memory. Depending on the state of the process, the image may be in physical memory or swapped out to disk. The process image is structured from four elements: the program code, the program data, the stack, and the process control block (PCB), described below.


      Program code: This is the part of the process containing the instructions to be executed.

          Program data: This is the data built into the program, e.g. initialized variables.

          Stack: This is the LIFO stack used by the process for passing parameters to functions and for storing return addresses while jumping in and out of routines.

          Process Control Block (PCB): This is a table used by the OS to store process-specific information such as the process state, program counter, and other CPU registers.
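The PCB described above can be sketched as a small record. The field names here are illustrative assumptions; real PCBs (e.g. Linux's `task_struct`) hold many more fields.

```python
from dataclasses import dataclass, field

# Illustrative PCB layout: just the fields named in the answer above.
@dataclass
class PCB:
    pid: int
    state: str = "ready"           # ready / running / blocked
    program_counter: int = 0
    registers: dict = field(default_factory=dict)

pcb = PCB(pid=42)
pcb.state = "running"              # the OS updates this on a dispatch
pcb.program_counter = 0x1000       # saved/restored on a context switch
```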

          Q 5: List at least four reasons why a process may get swapped out of main memory to disk. Explain each reason in a few sentences.

          Ans :

          Blocked process: The process is blocked on some resource and has been waiting for it to be freed. If it stays blocked beyond a certain time while other processes are ready to run, it will be swapped out to disk to make space for the ready processes.

          Demand paging: Under this mechanism, pages are brought in only when required. Whenever an active process requests pages not present in memory, other processes' pages (or entire queued processes) may be swapped out to disk to make space.

          Due to the scheduler: Schedulers in multi-process operating systems must be efficient, and so they try to ensure that only processes ready to run are present in main memory, while other processes are swapped out to disk. Thus the scheduler itself may swap a process out.

          Due to interrupts: An I/O interrupt or a trap may cause the current process to be switched out, and even swapped out, so that the service routine, a higher-priority task, can run on the CPU.

          Thrashing: Thrashing is a phenomenon that wastes a great deal of CPU time. It occurs when processes swap out each other's active pages, which in turn forces those processes to swap out yet more active pages to reload their own. The CPU then spends most of its time swapping pages to and from disk. In such a case one or more of the offending processes should be swapped out entirely to stop the thrashing.
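The first reason above, swapping out a long-blocked process, can be sketched as a toy victim-selection policy. The process records and the longest-wait rule are illustrative assumptions, not a real OS algorithm.

```python
# Toy swapper: when memory is full and a ready process needs space, choose
# the blocked process that has waited longest as the swap-out victim.

def pick_victim(processes):
    """Return the blocked process with the longest wait, or None."""
    blocked = [p for p in processes if p["state"] == "blocked"]
    return max(blocked, key=lambda p: p["wait_time"], default=None)

procs = [
    {"pid": 1, "state": "running", "wait_time": 0},
    {"pid": 2, "state": "blocked", "wait_time": 30},
    {"pid": 3, "state": "blocked", "wait_time": 5},
]
victim = pick_victim(procs)   # pid 2: blocked longest, swapped to disk
```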

              Q 6: Explain the differences between programmed I/O and interrupt-driven I/O.


          • In programmed (polled) I/O, the CPU itself initiates and manages the transfer: it repeatedly reads the device's status register and moves the data to or from the device once the device is ready. In interrupt-driven I/O, the CPU starts the operation and then continues with other work; the device signals the CPU when it needs attention.

              Therefore, in the case of interrupt-driven I/O, the CPU is free for other tasks while the I/O is in progress, whereas with programmed I/O it is tied up polling.

          • In programmed I/O, the CPU determines when to transfer the data by continuously checking the device's status. (With a DMA controller, by contrast, the CPU programs a special-purpose controller with how much data to transfer and where in main memory; the controller then takes over the memory bus and performs the transfer itself.)

              In interrupt-driven I/O, the main CPU is interrupted asynchronously and dispatches an ISR (interrupt service routine) to service the device.

          • In programmed I/O, the CPU knows which device it is polling and how to respond when the awaited event occurs.

              For interrupt-driven I/O, the system has to look up its interrupt vector table in order to select the appropriate ISR to call for the interrupting device.
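The contrast above can be sketched in Python: a polling loop that ties up the CPU versus a registered handler that runs only when the "interrupt" arrives. The device model and all names are simulated assumptions.

```python
def programmed_io(device):
    """Programmed (polled) I/O: the CPU repeatedly checks the status
    register and moves the data itself."""
    polls = 0
    while not device["ready"]:
        polls += 1                      # each iteration is wasted CPU time
        if polls >= device["ready_after"]:
            device["ready"] = True      # the device finally becomes ready
    return device["data"], polls

HANDLERS = {}

def request_interrupt_io(irq, handler):
    """Interrupt-driven I/O: register an ISR and return immediately;
    the CPU is free until the device raises the interrupt."""
    HANDLERS[irq] = handler

def raise_interrupt(irq, data):
    return HANDLERS[irq](data)          # hardware vectors to the ISR

dev = {"ready": False, "ready_after": 3, "data": b"pkt"}
data, polls = programmed_io(dev)        # the CPU spent `polls` checks waiting
request_interrupt_io(5, lambda d: d.upper())
result = raise_interrupt(5, b"pkt")     # the CPU runs only when data arrives
```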

              Q 7: When a system call requests I/O action, how are the device status blocks used and updated? When a hardware interrupt from an I/O device is received by the OS, how are the device status blocks used and updated?

              Ans : 

                   [a] The OS is responsible for maintaining a device status block for each device on the system. Since, in a multi-process system, many processes may be trying to access the same device, the device status block maintains the following information about the device:

              • Device type
              • Its address and other reserved resources (DMA channel, IRQ, etc.)
              • State (busy, idle, not functioning, etc.)

                   Thus when an I/O action is requested, the kernel checks the device status block for the device's status. If the device is busy, the OS may provide a spool in which to queue the request. The OS may also check the type of the device (block or character) and behave accordingly. If the device is not busy, the OS changes its state from idle to busy.

              [b] An I/O device may interrupt the CPU when it needs service. In such a case the OS jumps to the appropriate service routine for that device. It indexes into the device status table to determine the status of the device (so as to service it accordingly) and modifies the device status block to reflect the occurrence of the interrupt. Since the interrupt will often indicate the completion of an operation, the OS may then change the status of the device from busy to idle.
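The status-block transitions described in [a] and [b] can be sketched as follows; the table layout and field names are illustrative assumptions.

```python
# Sketch of a kernel device table: how the status block is consulted on an
# I/O request and updated on the completion interrupt.

device_table = {
    "printer0": {"type": "char", "state": "idle", "queue": []},
}

def request_io(name, job):
    dsb = device_table[name]
    if dsb["state"] == "busy":
        dsb["queue"].append(job)     # device busy: spool the request
    else:
        dsb["state"] = "busy"        # idle -> busy, start the transfer
    return dsb["state"]

def on_interrupt(name):
    dsb = device_table[name]
    if dsb["queue"]:
        dsb["queue"].pop(0)          # completion: start the next queued job
    else:
        dsb["state"] = "idle"        # nothing pending: busy -> idle
    return dsb["state"]

request_io("printer0", "job1")   # idle device starts job1, state "busy"
request_io("printer0", "job2")   # device busy, job2 is queued
on_interrupt("printer0")         # job1 done, job2 starts, still "busy"
on_interrupt("printer0")         # job2 done, back to "idle"
```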

              Q 8: What is a parallel system? What is a distributed system? How does the role of an operating system for such a computer system differ from that for a traditional von Neumann architecture machine?


              [a] Parallel Systems: -

                    Parallel systems are computer systems comprising multiple processors designed to work in tandem to solve complex problems. The advantage of a multiprocessor system is apparent: N processes can run on M processors simultaneously, whereas on a uniprocessor system the same N processes share one CPU on a time-sharing basis. Although using M processors does not yield a speedup by a factor of M, since there is significant overhead in distributing the load and synchronizing the parts of the problem running on different processors, there is still an impressive increase in throughput.

                    Apart from the improvement in throughput, a secondary reason for using multiprocessor systems is reliability. Multiprocessor architectures can be designed so that each processor has its own memory, bus, I/O, etc. The failure of a single processor in a cluster of 100 processors then degrades the machine's performance by only about 1 percent. Such systems thus continue to function even in the event of a failure.

              [b] Distributed Systems:

                    Distributed systems are a kind of parallel processing system, often termed loosely coupled parallel systems. The idea is to configure a group of independent computers, networked together, as a single parallel computing machine. It arose from the observation that a typical organization has many networked computers that sit idle most of the time. Forming a distributed computing cluster from such a setup, in which each machine on the network is given a part of the problem to solve, puts that idle CPU capacity to use.

                    Unlike tightly coupled parallel systems such as SMP machines, a distributed system requires no specific hardware support. Each node in the cluster can be a completely different computer (a PC, a minicomputer, or a mainframe).

              [c] These systems differ considerably from a von Neumann architecture machine, which runs only one process at a time on a single processor; if many processes are to be run, they run sequentially. The operating system is therefore much simpler.

                    In the case of a multiprocessor system, since many processes run simultaneously on many processors, the operating system is far more complicated and is expected to provide the following features:

              • Robust memory protection – since multiple processes will be using the same memory space.
              • Features like virtual memory and demand paging – to create the illusion of a private virtual machine for each process.
              • Strong CPU protection – to ensure that no process hogs the CPU unfairly.
              • Reliable file and I/O management – to enable multiple users/processes to use the shared storage media and other devices without contention.
              • Efficient process management and effective scheduling.
              • Process synchronization and IPC.
              • Load-sharing algorithms to distribute the load efficiently across the various CPUs.
              • Networking – most modern machines are of little use in isolation, so the OS should provide built-in support for communication protocols.

              Q 9:  Explain how memory protection is achieved by the operating system, possibly using the hardware support provided.

              Now explain how CPU protection is achieved by the Operating System, possibly using the hardware support provided? 


              [a] Memory protection is required to prevent processes from accessing the memory space of other processes and privileged OS areas such as the interrupt vector table. For this purpose a mechanism must be incorporated to prevent a user program from writing to an arbitrary address in memory. It can be implemented in the following ways:

                    A general solution, implemented with CPU support, is to let the operating system define a memory space for each user process. For example, on Intel 80386-class architectures, the CPU provides registers and tables for storing the base of a process's memory space and its limit. For every memory access the process makes, the hardware can thus determine whether the process attempted to access unauthorized memory. If the user attempts to read or write outside the allowed segment, a segmentation fault is generated and control returns to the OS. Since this check is wired into the hardware of the computer system, it cannot be switched off.

                    Secondly, CPUs provide a privileged (kernel) and an unprivileged (user) mode of operation, and user processes execute only in user mode. In this mode they do not have permission to change the CPU registers that define the process memory space. Protection against malfunctioning processes is thereby ensured.

                    Finally, the OS is designed to provide a limited and well-defined set of system calls, and processes can use OS services only through them. A monitor routine can verify that the user program passes valid parameters to the kernel, ensuring disciplined program flow.
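The base-and-limit check described above can be sketched as an address-translation function. The addresses, and the `MemoryError` standing in for the hardware trap, are illustrative assumptions.

```python
# Base-and-limit protection: every logical address the process generates is
# checked against its segment limit before being relocated by the base.

def translate(logical_addr, base, limit):
    """Return the physical address, or raise on a protection violation."""
    if not (0 <= logical_addr < limit):
        raise MemoryError("segmentation fault")   # trap back to the OS
    return base + logical_addr

phys = translate(0x10, base=0x4000, limit=0x1000)   # inside the segment
try:
    translate(0x2000, base=0x4000, limit=0x1000)    # outside the segment
    fault = False
except MemoryError:
    fault = True                                    # the OS regains control
```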

              [b] CPU protection: CPU protection, on the other hand, is achieved by incorporating a timer interrupt into the design. It ensures that a user program cannot get stuck in an infinite loop and never return control to the operating system.

                   The OS makes sure that the timer is set to interrupt before relinquishing control to the process. When the count in the timer reaches zero, it interrupts the current process and jumps to an OS routine.
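The timer mechanism can be modeled as a countdown that preempts the running process when it expires. The quantum value and step-based model are illustrative assumptions.

```python
# Toy model of the timer interrupt: the OS loads a count before dispatching
# the process; each executed step decrements it, and at zero control returns
# to the OS no matter what the process is doing.

def run_with_timer(process_steps, quantum):
    timer = quantum
    executed = 0
    for _ in process_steps:
        if timer == 0:
            return executed, "preempted"   # the timer interrupt fires
        executed += 1
        timer -= 1
    return executed, "finished"

# Even an infinite loop is cut off once its quantum expires:
steps, outcome = run_with_timer(iter(int, 1), quantum=5)
```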

              Q 10: Explain the term "spooling" and how it improves the efficiency of computer usage.

              Ans: Spooling (Simultaneous Peripheral Operations On-Line) is the storage of jobs in a buffer, usually on disk, until the I/O device is ready to process them.

                    The CPU can generate and submit jobs to a device at a much higher rate than the device can process them. For example, even a fast laser printer takes several seconds to process a page. It would clearly be inefficient for the CPU to wait for the printer to become free before submitting the next job. Spooling avoids this: jobs submitted by various processes are spooled (queued) in a buffer, usually a disk file managed by the operating system, and the printer reads the next job from the disk when it is ready. Faster devices like the CPU thus do not sit idle waiting for slower devices to finish.

                    Spooling also lets the system overlap jobs, so several proceed simultaneously: while one job is being read from the spool by the device, another can be submitted for printing and a third can be computing on the CPU. Efficiency increases because the CPU is free to run processes rather than wait on I/O.

                    Further, spooling systems provide services that let users and system administrators view the queue, delete unwanted jobs before they print, or suspend jobs while the printer is being serviced. The OS can also prioritize jobs in the queue to improve turnaround for critical tasks.
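A minimal spooler sketch, assuming an in-memory queue standing in for the disk buffer; the class and method names are invented for illustration.

```python
from collections import deque

# Toy print spooler: processes enqueue jobs and continue immediately; the
# printer drains the queue at its own pace; queued jobs can be cancelled.

class Spool:
    def __init__(self):
        self.jobs = deque()

    def submit(self, job):
        self.jobs.append(job)       # fast: the CPU never waits on the printer

    def printer_next(self):
        return self.jobs.popleft() if self.jobs else None   # device pulls

    def cancel(self, job):
        self.jobs.remove(job)       # delete an unwanted job before it prints

s = Spool()
s.submit("report.pdf")
s.submit("draft.txt")
s.cancel("draft.txt")
first = s.printer_next()
```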
