What is an Operating System?

  • Post last modified: 14 November 2021

What is an Operating System?

An operating system is system software that runs on top of the computer hardware. This definition needs to be viewed from different perspectives of the computer system, namely from the application software and from user–hardware interaction. An operating system can be defined as a system program that monitors the work of application programs.

It is responsible for the loading and execution of application programs. It has to ensure that the required hardware and software resources are available before executing an application program. An operating system can also be defined as system software that acts as a bridge between the hardware and its users.

The operating system, according to this definition, has the responsibility of hiding the complexities of the underlying hardware from the end user. The end user is not supposed to know the details of hardware components like the CPU, memory, disk drives, etc. From the definitions of the operating system discussed so far, we can conclude that the operating system sits between the user, the application software, and the physical components of the computer system.

It facilitates and monitors the running of application programs and functions as a middleman between the hardware and its users.

Evolution of Operating Systems

Computers have gone through different generations. The generations are classified based on a variety of criteria. One of the major criteria used to classify computer generations is the type of operating system used. The evolution of operating systems has a close tie with the generations of computers. What basic changes were made to operating systems over these generations?

Studying the evolution of operating systems helps us understand the key requirements of an operating system. Moreover, it helps us understand the significance of the major features of modern operating systems. Operating systems and computer architecture are historically tied.

The combination of computer architecture with an operating system is known as a computer platform. Architectural changes affect the structure and performance of operating systems, so while studying the evolution of operating systems, consideration of the architecture is required.

The evolution of operating systems over the past decades shows a number of advances from several perspectives, starting from machines with no operating system at all and arriving at distributed processing capability. The foundation of the modern operating system was built over several such stages.

The following are the major stages in the evolution of operating systems:

Wiring–up Plug–boards

A low-level programming language called machine language was used to write computer instructions by wiring up plug–boards. Plug–boards controlled the basic functions of the computer. No operating system software was introduced, and there were no programming languages to develop applications. No operators were required. Programming was complex and the machine was underutilized.

Serial Processing

  • The developer creates his/her program and punches it on cards
  • He/she submits the card deck to an operator in the input room
  • He/she reserves machine time on a sign–up sheet
  • The operator sets up the job as scheduled
  • The operator carries the returned output to the output room
  • The developer collects the output later
  • No operating system software was introduced, and there were no programming languages to develop applications.


Advantages

  • Users get access in series
  • Program writing was improved

Disadvantages

  • Wasted time due to scheduling and setup
  • Wasted time while operators walk around the machine room
  • Large machine/processor idle time

Batch Processing

The users of a batch operating system do not interact with the computer directly. Each user prepares his job on an offline device like punch cards and submits it to the computer operator. To speed up processing, jobs with similar needs are batched together and run as a group. Thus, the programmers leave their programs with the operator, and the operator then sorts the programs into batches with similar requirements. The technique is to run a batch of jobs.
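As a rough sketch, the operator's sorting step amounts to grouping jobs by a shared requirement label (the job names and requirement labels below are made up for illustration):

```python
from collections import defaultdict

def sort_into_batches(jobs):
    """Group submitted jobs by their requirement label, as a batch
    operator would, so each batch can run without repeated setup."""
    batches = defaultdict(list)
    for name, requirement in jobs:
        batches[requirement].append(name)
    return dict(batches)

# Hypothetical job deck: (job name, required environment)
deck = [("payroll", "COBOL"), ("matrix", "FORTRAN"),
        ("billing", "COBOL"), ("fluid-sim", "FORTRAN")]
print(sort_into_batches(deck))
# {'COBOL': ['payroll', 'billing'], 'FORTRAN': ['matrix', 'fluid-sim']}
```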


Advantages

  • Once processing has started, the computer can be left running without supervision.
  • Batch processing allows an organization to increase efficiency, because a large number of transactions can be combined into a batch rather than processed individually.


Disadvantages

  • It is very difficult to maintain priority between batches.
  • There is no direct interaction between the user and the computer, and hence no interaction between the user and the job.
  • The CPU is often idle, because the mechanical I/O devices are slower than the CPU.
  • It is difficult to provide the desired priority.
  • With batch processing there is a time delay before the work is processed and returned.

Spooling Batch Processing Technique

  • Adds the spooling technique to simple batch processing
  • Spooling – Simultaneous Peripheral Operations On-Line
  • It is the ability to read jobs/print job outputs while the processor executes other jobs
  • Read jobs – from cards to disk
  • Print job outputs – from disk to printer
  • Used in input and output operations

With spooling, every time the currently executing task completes, a new task is loaded from disk into the now-empty area and executed by the OS. Spooling was the first attempt at multiprogramming.
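A minimal sketch of the idea, with a simple in-memory queue standing in for the disk (job names are hypothetical, and a real spooler runs the reader and CPU sides concurrently):

```python
from collections import deque

# Card images are spooled to a disk queue, and the CPU always takes
# its next job from disk instead of waiting on the slow card reader.
disk_queue = deque()

def spool_input(card_jobs):
    """Card reader -> disk: proceeds independently of the CPU."""
    for job in card_jobs:
        disk_queue.append(job)

def run_next_job():
    """CPU side: fetch the next spooled job from disk, if any."""
    return disk_queue.popleft() if disk_queue else None

spool_input(["job1", "job2"])   # reader fills the disk while the CPU is busy
print(run_next_job())           # job1
```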


Advantages

  • Avoids CPU idle time between batches of jobs
  • Improves turn–around time: the output of a job is available as soon as the job completes, rather than only after all jobs in the current cycle have finished.


Disadvantages

  • Large turn–around time
  • Large CPU idle time on heavily I/O–bound jobs


Multi Programming

In a multiprogramming system there are one or more programs loaded in main memory that are ready to execute. Only one program at a time can have the CPU to execute its instructions (i.e., there is at most one process running on the system) while all the others wait their turn.

The main idea of multiprogramming is to maximize the use of CPU time. Indeed, suppose the currently running process is performing an I/O task (which, by definition, does not need the CPU to be accomplished). Then, the OS may interrupt that process and give control to one of the other in-memory programs that are ready to execute (i.e., a process context switch).

In this way, no CPU time is wasted by the system waiting for the I/O task to complete, and a running process keeps executing until either it voluntarily releases the CPU or it blocks for an I/O operation. Therefore, the ultimate goal of multiprogramming is to keep the CPU busy as long as there are processes ready to execute. Note that in order for such a system to function properly, the OS must be able to load multiple programs into separate areas of main memory and provide the required protection to avoid one process being modified by another.
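The switch-on-I/O behaviour described above can be sketched as follows (the process names are hypothetical, and a real dispatcher is far more involved):

```python
from collections import deque

# Ready queue of in-memory programs and a list of processes blocked on I/O.
ready = deque(["editor", "compiler", "printer-driver"])
blocked = []

def on_io_request(running):
    """The running process starts I/O: block it and dispatch the next
    ready program (a context switch), so the CPU is never left idle."""
    blocked.append(running)
    return ready.popleft() if ready else None

def on_io_complete(proc):
    """The I/O finished: the process becomes ready to run again."""
    blocked.remove(proc)
    ready.append(proc)

current = ready.popleft()          # 'editor' gets the CPU first
current = on_io_request(current)   # editor blocks; compiler runs instead
print(current, list(blocked))      # compiler ['editor']
```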

Other problems that need to be addressed when having multiple programs in memory are fragmentation, as programs enter or leave main memory, and the fact that large programs may not fit in memory all at once, which can be solved by using paging and virtual memory.
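Paging works by splitting each virtual address into a page number and an offset, and mapping each page to a physical frame. A minimal translation sketch, assuming a 4 KiB page size and a made-up page table:

```python
PAGE_SIZE = 4096                   # bytes per page (assumed 4 KiB)
page_table = {0: 7, 1: 3, 2: 9}    # hypothetical page -> frame mapping

def translate(virtual_addr):
    """Split a virtual address into page number and offset, then
    map the page to its physical frame via the page table."""
    page = virtual_addr // PAGE_SIZE
    offset = virtual_addr % PAGE_SIZE
    frame = page_table[page]       # a missing entry would be a page fault
    return frame * PAGE_SIZE + offset

print(translate(5000))   # page 1, offset 904 -> frame 3 -> 13192
```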


Advantages

  • Increases CPU utilization and reduces CPU idle time.
  • Decreases the total time needed to execute a job, as the jobs are already in main memory.

Disadvantages


  • Long response time: the elapsed time to return a result for a given job was often several hours.
  • Poor interactivity between the programmer and his program.

Time Sharing

Multiple jobs are executed by the CPU by switching between them, and the switches occur so frequently that the user receives an immediate response. For example, in transaction processing, the processor executes each user program in a short burst, or quantum, of computation.

That is, if n users are present, each user gets a time quantum. When a user submits a command, the response time is a few seconds at most. The operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of time. Computer systems that were designed primarily as batch systems have been modified into time-sharing systems.
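The quantum-by-quantum switching can be illustrated with a toy round-robin scheduler (the job names and burst times below are invented):

```python
from collections import deque

def round_robin(jobs, quantum):
    """Run each job for at most `quantum` time units per turn, cycling
    through the ready queue until every job finishes.
    `jobs` is a list of (name, remaining_time) pairs."""
    queue = deque(jobs)
    order = []                                  # finishing order
    while queue:
        name, remaining = queue.popleft()
        remaining -= min(quantum, remaining)    # job runs for one quantum
        if remaining > 0:
            queue.append((name, remaining))     # back of the queue
        else:
            order.append(name)
    return order

print(round_robin([("A", 3), ("B", 5), ("C", 2)], quantum=2))
# ['C', 'A', 'B']
```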


Advantages

  • More than one user can execute tasks simultaneously.
  • Avoids duplication of software.
  • CPU idle time is reduced, giving better utilization of resources.
  • Provides the advantage of quick response.


Disadvantages

  • Ensuring the security and integrity of users’ data and programs.
  • Since multiple processes are managed simultaneously, adequate management of main memory is required.
  • Problems of reliability.

Real–time processing

A real-time system is defined as a data processing system in which the time interval required to process and respond to inputs is so small that the system controls its environment. Real-time processing is always online, whereas an online system need not be real time. The time taken by the system to respond to an input and display the required updated information is termed the response time. In this method the response time is very small compared to online processing.

Real–time systems are used when there are rigid time requirements on the operation of a processor or the flow of data, and they can be used as control devices in dedicated applications. A real–time operating system has well–defined, fixed time constraints; otherwise the system will fail.

Examples include scientific experiments, medical imaging systems, industrial control systems, weapon systems, robots, home–appliance controllers, air traffic control systems, etc. There are two types of real–time operating systems.

Hard real–time systems

Hard real–time systems guarantee that critical tasks complete on time. In hard real–time systems, secondary storage is limited or missing, with data stored in ROM instead. Virtual memory is almost never found in these systems.

Soft real–time systems

Soft real-time systems are less restrictive. A critical real-time task gets priority over other tasks and retains that priority until it completes. Soft real-time systems have more limited utility than hard real-time systems. Examples include multimedia, virtual reality, and advanced scientific projects such as undersea exploration and planetary rovers.
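One common deadline-driven policy, earliest-deadline-first, simply runs whichever ready task has the closest deadline. A minimal sketch (the task names, deadlines, and runtimes are made-up values):

```python
def pick_next(tasks, now):
    """Earliest-deadline-first: choose the ready task whose deadline is
    closest, and report whether it can still meet that deadline.
    Each task is a (name, absolute_deadline, remaining_runtime) tuple."""
    name, deadline, runtime = min(tasks, key=lambda t: t[1])
    meets = now + runtime <= deadline
    return name, meets

# Hypothetical task set for a soft real-time controller
tasks = [("brake-control", 10, 2), ("logging", 50, 5)]
print(pick_next(tasks, now=7))    # ('brake-control', True): 7 + 2 <= 10
```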


Advantages

  • Better task scheduling compared to manual processes; meeting time deadlines is guaranteed in most cases.
  • Accelerates processes by automatically managing system resources, as in autopilot airplanes and railway e–ticket booking systems.


Disadvantages

  • If time deadlines are missed, the result may be a severe, disastrous situation.
  • Complex; an additional kernel, memory, and other resources are required.
  • More vulnerable to security breaches such as viruses and unauthorized access.

Networked Processing

A networked system comprises several computers that are interconnected to one another. Each of these networked devices has its own local users and executes its own local OS, which is not fundamentally different from a single-computer OS. Users are aware of the presence of the several computers. Additional features needed by network operating systems are a network interface controller (NIC), low-level software to drive it, and software for remote login and remote file access.

Distributed Processing

A distributed system is a collection of physically separate, possibly heteroge­neous computer systems that are networked to provide the users with access to the various resources that the system maintains. Access to a shared resource increases computation speed, functionality, data availability, and reliability.

Some operating systems generalize network access as a form of file access, with the details of networking contained in the network interface’s device driver. Others make users specifically invoke network functions. Generally, systems contain a mix of the two modes, for example FTP and NFS. The protocols that create a distributed system can greatly affect that system’s utility and popularity.

Runs on a multi–computer system

A multi–computer system is a set of computers, each having its own memory, storage devices, and other I/O modules. This is useful for distributing tasks between the different computers. The existence of multiple computers is transparent to the user: the system appears as a uniprocessor system, yet it differs in critical ways from uniprocessor OSs. Examples of distributed operating systems include LOCUS, MICROS, IRIX, Solaris, Mac/OS, and OSF/1.


Advantages

  • With resource sharing, a user at one site may be able to use the resources available at another.
  • Speeds up the exchange of data via electronic mail.
  • If one site fails in a distributed system, the remaining sites can potentially continue operating.
  • Better service to customers.
  • Reduced load on the host computer.
  • Reduced delays in data processing.


Disadvantages

  • Security problems due to sharing.
  • Some messages can be lost in the network.
  • Bandwidth is another problem: handling larger volumes of data may require upgrading the network links, which tends to become expensive.
  • If a database on a local system is accessed by many users remotely, performance becomes slow.
  • Databases in a networked system are more difficult to administer than in a single-user system.

General Categories of Operating System

In the sections above, we introduced the general structure of a typical computer system. We have also seen how peripheral devices are attached to the processor and how I/O moves between external and internal storage. There are different categories of operating system according to their use and the utilities provided by the architecture.

Desktop system

The program that controls the machine itself and provides services only to the user of that machine is called a desktop or laptop operating system. This operating system takes control of the hardware and runs the environment to provide services such as memory management, process management, device access, and data handling. Many systems in this category also provide security and data protection. Examples are Windows 10, Ubuntu 18.04 LTS, and macOS 10.14 (Mojave).

Multiprocessor system

A system with two or more processors is known as a multiprocessor system. Such a system shares a common bus, clock, memory, and devices. According to Flynn’s classification, MISD (Multiple Instruction stream and Single Data stream) and MIMD (Multiple Instruction stream and Multiple Data stream) computers are of this category.

Distributed systems and clustered systems are also part of the multiprocessor category. Increasing the data and instruction streams affects the following parameters:


Increased throughput

More work gets done in less time. With more data and instruction streams, work can be done simultaneously, so performance increases. The speed–up ratio grows as N processors work together, but in theory the increase is not proportional to the number of processors; there are many reasons for that.
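One classic reason for the less-than-linear speed-up is Amdahl's law: if a fraction s of the work is inherently serial, N processors yield a speed-up of 1 / (s + (1 - s)/N) rather than N. A quick sketch:

```python
def speedup(serial_fraction, n_processors):
    """Amdahl's law: theoretical speed-up with N processors when a
    fixed fraction of the work cannot be parallelized."""
    s = serial_fraction
    return 1.0 / (s + (1.0 - s) / n_processors)

# Even with only 10% serial work, 8 processors give well under 8x:
print(round(speedup(0.10, 8), 2))   # 4.71
```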

Economy of scale

MISD and MIMD architectures can use clusters of computers and processors that share external devices. This can be a cost–effective solution in many cases.

Increased reliability

Failure of one processor will not affect the other processing going on in the system; the load of the failed processor can be divided among the other processors. This mechanism is transparent to the user, so users do not have to bother about the failure, and the system is more reliable.



Distributed system

A distributed system is a collection of physically separate, possibly heterogeneous systems that are networked to provide services. Sharable resources increase computation speed, data availability, and reliability.

Clustered system

A cluster is usually used to provide high–availability services: when one of the nodes in the cluster fails, another node takes charge of its work and resumes execution from where it failed. There are mainly two types of clustered systems: asymmetric and symmetric. In an asymmetric cluster, one node is in standby mode so it can take control of a failed node, while in a symmetric cluster all nodes run simultaneously and monitor each other.
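A minimal sketch of the standby node's failover decision in an asymmetric cluster (the node names, timestamps, and timeout are invented values):

```python
# The standby node watches the active node's heartbeat timestamps and
# takes over when they stop arriving within the timeout window.
HEARTBEAT_TIMEOUT = 5.0   # seconds without a heartbeat before failover

def should_fail_over(last_heartbeat, now):
    """Standby node's check: has the active node gone silent?"""
    return (now - last_heartbeat) > HEARTBEAT_TIMEOUT

active, standby = "node-a", "node-b"
last_beat = 100.0                       # timestamp of node-a's last heartbeat
if should_fail_over(last_beat, now=106.5):
    active, standby = standby, active   # node-b resumes node-a's work
print(active)   # node-b
```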

Real time system

Real-time systems are special systems with special requirements. The unique feature of such a system is that it must complete tasks within given deadlines and provide instant service to requests generated in the environment. If a task is not finished or handled within the given time frame, it can cause a disaster. A major issue in real-time systems is developing a routine for the proper scheduling of processes.

Handheld system

This category includes PDAs (personal digital assistants), tablets, cellular telephones, etc. The main issue for these systems is managing data and applications with a limited amount of resources: the device is small, the processor is slow, and the amount of memory available for managing data is small.
