
Preemptive and Non-Preemptive Scheduling Algorithms in Operating Systems

Updated on April 15, 2026

Preemptive Scheduling Algorithm

Preemptive scheduling is a CPU scheduling technique where the operating system can interrupt a currently running process to assign the CPU to another process with higher priority or urgency. This approach enables efficient CPU sharing among processes, improves responsiveness, and is commonly used in multitasking and real-time operating systems.

In preemptive systems, the OS can take the processor away from a running process.

A scheduling algorithm is preemptive if a scheduling decision can be made under either of the following circumstances –

  • When a process transitions from the running state to the ready state.
  • When a process transitions from the wait state to the ready state.

Some preemptive scheduling algorithms are Round Robin, Shortest Remaining Time First, and preemptive Priority Scheduling. Preemptive scheduling carries a cost in the form of context-switching overhead, and low-priority processes may suffer from starvation.


How the Preemptive Scheduling Algorithm Works

  • A process running on the CPU can be interrupted and moved back to the ready queue.
  • A new or higher-priority process can preempt the currently executing one.
  • This leads to context switching.
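The preempt-and-switch cycle described above can be sketched as a minimal preemptive priority simulation. This is an illustrative sketch, not an OS implementation: process names, bursts, and priorities are made up, and lower priority numbers are assumed to mean higher priority.

```python
from dataclasses import dataclass

@dataclass
class Process:
    name: str
    arrival: int
    burst: int
    priority: int  # assumption for this sketch: lower number = higher priority

def preemptive_priority(processes):
    """At every time unit, run the highest-priority arrived process.
    A newly arrived higher-priority process preempts the running one."""
    remaining = {p.name: p.burst for p in processes}
    time, timeline = 0, []
    while any(remaining.values()):
        ready = [p for p in processes if p.arrival <= time and remaining[p.name] > 0]
        if not ready:
            time += 1                     # CPU idles until the next arrival
            continue
        current = min(ready, key=lambda p: p.priority)   # pick highest priority
        timeline.append(current.name)     # each switch of name = a context switch
        remaining[current.name] -= 1
        time += 1
    return timeline                       # one entry per time unit (a Gantt chart)

# B arrives at t=1 with higher priority and preempts A mid-execution
procs = [Process("A", 0, 4, 2), Process("B", 1, 2, 1)]
print(preemptive_priority(procs))  # → ['A', 'B', 'B', 'A', 'A', 'A']
```

Note how A is moved back to the ready queue at t=1 and resumes only after B completes, which is exactly the behaviour the bullets above describe.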

Real-World Example:

Operating systems like Windows, Linux, and macOS use preemptive scheduling for multitasking.

Advantages of Preemptive Scheduling Algorithm

  • Responsiveness: Preemptive scheduling ensures that higher-priority tasks get the CPU when they need it, leading to faster response times, particularly important for real-time and interactive systems like modern operating systems (e.g., Windows, macOS, Linux).
  • Fairness: It prevents any single process from monopolizing the CPU, ensuring that all processes receive a fair share of CPU time and avoiding scenarios where low-priority tasks are perpetually delayed.
  • CPU Utilization: Can improve CPU utilization by ensuring that the CPU is not left idle while higher-priority or shorter tasks are waiting.
  • Flexibility: Allows the operating system to reconsider scheduling decisions and dynamically allocate CPU access to critical processes as they enter the ready queue.
  • Deadlock Prevention: Preemption can help prevent deadlocks by allowing the operating system to break resource dependencies that could lead to deadlocks. 

Disadvantages of Preemptive Scheduling Algorithm

  • Overhead: Frequent context switching (saving the state of the current process and loading the state of the next) introduces overhead, potentially reducing overall system efficiency.
  • Complexity: More complex to implement, requiring the operating system to manage process states and handle interrupts effectively.
  • Starvation: Lower-priority processes may suffer from starvation if high-priority processes repeatedly enter the ready queue and keep preempting them.
  • Concurrency Problems: Can cause issues if processes are preempted while accessing shared resources or variables, requiring careful handling to maintain data integrity.

Applications of Preemptive Scheduling

  • Real-Time Systems
    Used in systems like air traffic control where tasks must be handled immediately.
  • Time-Sharing Systems
    Multiple users share CPU efficiently (e.g., multitasking OS).
  • Interactive Applications
    Used in apps where quick response is needed (e.g., online gaming, web browsing).
  • Operating Systems like Windows OS and Linux
    These systems use preemptive scheduling for smooth multitasking.

Common Preemptive Algorithms

  • Round Robin (RR)
  • Shortest Remaining Time First (SRTF)
  • Preemptive Priority Scheduling
  • Multilevel Feedback Queue
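The first of these, Round Robin, can be sketched in a few lines: each process gets a fixed time quantum, and if it is not finished when the quantum expires it is preempted and sent to the back of the ready queue. The process names, bursts, and quantum below are illustrative, and all processes are assumed to arrive at time 0.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Round Robin: each process runs for at most `quantum` time units,
    then is preempted and requeued. `bursts` maps name -> CPU burst."""
    queue = deque(bursts.items())          # ready queue, FIFO order
    order = []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)      # run a full quantum or finish early
        order.append((name, run))          # (process, time slice actually used)
        if remaining > run:                # not finished: preempt and requeue
            queue.append((name, remaining - run))
    return order

print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))
# → [('P1', 2), ('P2', 2), ('P3', 1), ('P1', 2), ('P2', 1), ('P1', 1)]
```

Each tuple is one stretch of CPU time, so every boundary between tuples with different names corresponds to a context switch, which is where Round Robin's overhead comes from.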

Non-preemptive Scheduling Algorithm

Non-preemptive scheduling is a CPU scheduling method where once a process starts execution, it runs to completion or until it voluntarily releases the CPU. The operating system does not interrupt the process. This approach is simple, predictable, and suitable for batch systems but can lead to longer waiting times for other processes.

Once a process enters the running state, it keeps the CPU until it completes its CPU burst or blocks. In a non-preemptive OS, the process itself decides when to leave the processor: the CPU can only be taken away when the process terminates or blocks, so a process can occupy the CPU as long as it wants.

If scheduling takes place only under the following circumstances, we say that the scheduling scheme is non-preemptive or cooperative –

  • When a process transitions from the running state to the waiting state.
  • When a process exits.

Algorithms based on non-preemptive scheduling are Shortest Job First and Priority Scheduling (in its non-preemptive form). This approach incurs little scheduling overhead, but processes with short burst times may suffer long waits behind long-running processes.


How the Non-preemptive Scheduling Algorithm Works

  • The process retains CPU control until it finishes or voluntarily yields the CPU.
  • Other processes must wait in the ready queue.
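This run-to-completion behaviour is easiest to see in First Come First Serve, the simplest non-preemptive policy. The sketch below (with made-up arrival times and bursts) computes per-process waiting and completion times; note how later arrivals simply queue behind whatever is running.

```python
def fcfs(processes):
    """First Come First Serve: processes run to completion in arrival order;
    once running, a process keeps the CPU until it finishes (non-preemptive).
    `processes` is a list of (name, arrival, burst) tuples."""
    time = 0
    stats = {}
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival)      # CPU may sit idle until the job arrives
        waiting = time - arrival       # time spent in the ready queue
        time += burst                  # run to completion, uninterrupted
        stats[name] = {"waiting": waiting, "completion": time}
    return stats

# A long first job (the "convoy effect") makes the short jobs wait
print(fcfs([("P1", 0, 7), ("P2", 1, 2), ("P3", 2, 1)]))
```

Here P3 needs only 1 time unit of CPU but still waits 7 units behind P1 and P2, which is the responsiveness cost discussed below.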

Real-World Example

Used in embedded systems or simpler real-time systems where predictability is more important than responsiveness.

Advantages of Non-preemptive Scheduling Algorithm

  • Simplicity: Simpler to design, implement, and manage due to the absence of preemption and context switching.
  • Lower Overhead: Fewer context switches result in lower overhead compared to preemptive scheduling.
  • Predictable Execution: Processes run uninterrupted until they finish or voluntarily release the CPU, making execution times more predictable.
  • No Starvation (for the process currently running): Once a process starts, it will complete without being interrupted, preventing starvation for that specific process.
  • Deterministic: Non-preemptive scheduling is deterministic in nature, making the scheduling outputs easily predictable. 

Disadvantages of Non-preemptive Scheduling Algorithm

  • Poor Responsiveness: Higher-priority or shorter tasks may have to wait for a long time if a lengthy process is currently executing, leading to poor system responsiveness.
  • Inefficient Resource Utilization: Long-running processes can block shorter or urgent tasks, leading to underutilization of the CPU.
  • Less Fairness: A single process can potentially monopolize the CPU, delaying others and making the system seem less fair.
  • Difficulty with Priority and Real-time Tasks: Can make it challenging to handle priority scheduling and meet deadlines for real-time tasks, as a running process cannot be interrupted.
  • Deadlock Potential: If processes hold resources needed by others, and preemption isn’t possible, non-preemptive scheduling can increase the likelihood of deadlocks.
  • System Instability: A process caught in an infinite loop could hang the system, as other processes cannot interrupt its execution.

Applications of Non-Preemptive Scheduling

  • Batch Processing Systems
    Used where jobs are processed in groups without user interaction.
  • Simple Embedded Systems
    Suitable for systems with fixed tasks and low complexity.
  • Offline Data Processing
    Example: payroll systems, report generation.
  • Manufacturing Systems
    Used in systems where tasks must complete once started.

Common Non-Preemptive Algorithms

  • First Come First Serve (FCFS)
  • Shortest Job First (SJF)
  • Non-Preemptive Priority Scheduling
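Of these, non-preemptive SJF can be sketched as follows: whenever the CPU becomes free, the scheduler picks the arrived process with the shortest burst, and that process then runs to completion. The workload below is illustrative.

```python
def sjf_nonpreemptive(processes):
    """Non-preemptive SJF: when the CPU is free, pick the arrived process
    with the shortest burst; it then runs to completion.
    `processes` is a list of (name, arrival, burst) tuples."""
    pending = sorted(processes, key=lambda p: p[1])  # sort by arrival time
    time, order = 0, []
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:
            time = pending[0][1]        # CPU idles until the next arrival
            continue
        job = min(ready, key=lambda p: p[2])   # shortest burst among arrived
        pending.remove(job)
        time += job[2]                  # runs uninterrupted to completion
        order.append((job[0], time))    # (process, completion time)
    return order

print(sjf_nonpreemptive([("P1", 0, 6), ("P2", 1, 2), ("P3", 2, 4)]))
# → [('P1', 6), ('P2', 8), ('P3', 12)]
```

Note that P1 finishes first even though P2 is shorter: P2 arrived while P1 was already running, and under a non-preemptive policy the scheduler cannot take the CPU back.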

Difference between Preemptive Scheduling and Non-Preemptive Scheduling

| Parameter | Preemptive Scheduling | Non-Preemptive Scheduling |
| --- | --- | --- |
| Definition | The OS can interrupt a running process and switch the CPU to another, even if the current process hasn’t finished its task. | Once a process takes control of the CPU, it runs until it finishes or voluntarily releases the CPU. |
| Interruption | Running processes can be interrupted at any time. | Running processes cannot be interrupted; they complete their execution or voluntarily yield. |
| Flexibility | Highly flexible; allows high-priority tasks to take control immediately. | Less flexible; new processes must wait for the current one to finish. |
| Overhead | Higher due to frequent context switching. | Lower, as context switches are less frequent. |
| Cost | More resource-intensive due to the need to maintain the integrity of shared data. | Less resource-intensive; does not require constant monitoring for data integrity during process execution. |
| CPU Utilization | Generally higher, as the CPU can be dynamically allocated to different tasks. | Can be lower if a long process ties up the CPU while other tasks are waiting. |
| Starvation | Possible, especially for low-priority processes if high-priority processes repeatedly enter the ready queue. | A long-running process can cause other processes, even those with higher priority, to wait for extended periods. |
| Responsiveness | Better suited for real-time and interactive systems due to immediate response to high-priority tasks. | Can be less responsive, as urgent tasks may be delayed by the current process. |
| Examples | Round Robin (RR), Shortest Remaining Time First (SRTF). | First-Come, First-Served (FCFS), Shortest Job First (SJF). |
| Applicability | Widely used in modern multitasking operating systems like Windows, macOS, and Linux. | Less common in interactive systems; suitable for batch processing where tasks run sequentially without user intervention. |
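The responsiveness and CPU-utilization trade-off in the comparison can be made concrete by running the same (made-up) workload under non-preemptive FCFS and preemptive SRTF and comparing average waiting times. This is a simplified single-CPU sketch that ignores context-switch cost.

```python
def fcfs_wait(procs):
    """Average waiting time under FCFS. `procs` is a list of (arrival, burst)."""
    time, total = 0, 0
    for arrival, burst in sorted(procs):
        time = max(time, arrival)
        total += time - arrival        # waiting = start time - arrival
        time += burst                  # runs to completion
    return total / len(procs)

def srtf_wait(procs):
    """Average waiting time under SRTF, simulated one time unit at a time."""
    remaining = [b for _, b in procs]
    completion = [0] * len(procs)
    time = 0
    while any(remaining):
        ready = [i for i, (a, _) in enumerate(procs)
                 if a <= time and remaining[i] > 0]
        if not ready:
            time += 1
            continue
        i = min(ready, key=lambda i: remaining[i])  # shortest remaining time
        remaining[i] -= 1
        time += 1
        if remaining[i] == 0:
            completion[i] = time
    waits = [completion[i] - procs[i][0] - procs[i][1] for i in range(len(procs))]
    return sum(waits) / len(waits)

procs = [(0, 8), (1, 4), (2, 2)]       # a long job arrives first
print(fcfs_wait(procs))  # → 5.666...  short jobs stuck behind the long one
print(srtf_wait(procs))  # → 2.666...  preemption lets short jobs finish early
```

On this workload SRTF roughly halves the average wait, because it preempts the long job as soon as shorter ones arrive; in a real system some of that gain would be eaten by context-switch overhead.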

Conclusion

Preemptive and non-preemptive scheduling algorithms are both important CPU scheduling techniques used in operating systems. Preemptive scheduling provides better responsiveness and multitasking by allowing processes to be interrupted, making it suitable for real-time and interactive systems. On the other hand, non-preemptive scheduling is simple and efficient for batch processing, where tasks run without interruption. Therefore, the choice between the two depends on system requirements such as speed, complexity, and type of workload.

FAQs

Q1. Which is better: Preemptive or Non-Preemptive Scheduling?

It depends on the use case. Preemptive scheduling is better for interactive, multitasking environments. Non-preemptive is ideal for simple, predictable systems like batch or embedded applications.

Q2. Does preemptive scheduling increase system overhead?

Yes, it involves frequent context switching, which adds overhead. However, the trade-off is better responsiveness and fairness among processes.

Q3. Can a non-preemptive system cause delays for shorter processes?

Yes. In non-preemptive scheduling, long processes may block shorter ones, leading to poor turnaround time and responsiveness.

Q4. Is starvation possible in preemptive scheduling?

Yes. In priority-based preemptive scheduling, low-priority processes can suffer from starvation if high-priority ones keep arriving. Aging is used to prevent this.

Q5. What are real-world examples of each type?

Preemptive scheduling is used in modern OSes like Windows and Linux.
Non-preemptive scheduling is common in simple embedded systems and older batch processing systems.
