Operating System - Multiprocessor Scheduling
What is Multiprocessor Scheduling?
Multiprocessor Scheduling is a CPU scheduling method used in systems with more than one processor (CPU core). The goal is to efficiently assign processes to multiple processors so that overall throughput and responsiveness improve.
In single-processor systems, only one process runs at a time.
In multiprocessor systems, multiple processes can run in parallel, so we need a strategy to:
- Distribute workload fairly across processors
- Maximize CPU utilization
- Minimize process waiting time and turnaround time
Types of Multiprocessor Systems
1. Symmetric Multiprocessing (SMP)
- All processors are identical.
- They share memory and run the same OS.
- Each processor can schedule any process from the common ready queue.
Most common in modern systems (e.g., multicore CPUs).
2. Asymmetric Multiprocessing (AMP)
- One master processor controls all scheduling and I/O.
- The other processors (slaves) only execute processes as directed.
Simpler but less efficient; used in some embedded systems.
Scheduling Approaches
1. Centralized Scheduling
- A single scheduler manages all processors.
- Processes wait in one shared ready queue.
- The OS assigns the next process to whichever processor is free.
Pros: Simple, fair load distribution
Cons: The shared ready queue can become a bottleneck when there are many processors
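The shared-queue idea can be illustrated with a small C sketch (not taken from any real OS): two worker threads stand in for processors and each pulls the next process from one mutex-protected ready queue. The process IDs and burst times are invented for illustration.

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NPROCS 5      /* number of simulated processes */
#define NCPUS  2      /* number of worker threads acting as CPUs */

typedef struct { int id; int burst_ms; } process_t;

static process_t ready_queue[NPROCS] = {
    {1, 40}, {2, 60}, {3, 30}, {4, 50}, {5, 20}
};
static int next_index = 0;                       /* head of the shared queue */
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

static void *cpu_worker(void *arg) {
    long cpu_id = (long)arg;
    for (;;) {
        process_t *p = NULL;

        /* The single shared queue is the one point of contention. */
        pthread_mutex_lock(&queue_lock);
        if (next_index < NPROCS)
            p = &ready_queue[next_index++];
        pthread_mutex_unlock(&queue_lock);

        if (p == NULL)
            break;                               /* queue drained: CPU idles */

        printf("CPU%ld runs P%d (burst %d ms)\n", cpu_id, p->id, p->burst_ms);
        usleep(p->burst_ms * 1000);              /* pretend to execute the burst */
    }
    return NULL;
}

int main(void) {
    pthread_t cpus[NCPUS];
    for (long i = 0; i < NCPUS; i++)
        pthread_create(&cpus[i], NULL, cpu_worker, (void *)(i + 1));
    for (int i = 0; i < NCPUS; i++)
        pthread_join(cpus[i], NULL);
    return 0;
}
```

The single queue_lock is exactly the bottleneck noted above: every CPU must acquire it before it can dispatch anything.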
2. Distributed Scheduling
- Each processor has its own scheduler and ready queue.
- Each processor schedules only from its own queue.
Pros: Scalable
Cons: Load imbalance is possible
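A minimal sketch of the per-queue idea, assuming new processes are simply admitted to whichever private queue is currently shortest; the process IDs and the FIFO dispatch order are illustrative only.

```c
#include <stdio.h>

#define NCPUS     2
#define QUEUE_CAP 8

typedef struct {
    int procs[QUEUE_CAP];   /* process IDs queued on this CPU */
    int count;
} run_queue_t;

static run_queue_t queues[NCPUS];

/* Enqueue a new process on the least-loaded CPU's private queue. */
static void admit(int pid) {
    int target = 0;
    for (int c = 1; c < NCPUS; c++)
        if (queues[c].count < queues[target].count)
            target = c;
    queues[target].procs[queues[target].count++] = pid;
    printf("P%d admitted to CPU%d's queue\n", pid, target + 1);
}

/* Each CPU dispatches only from its own queue (FIFO order here). */
static int dispatch(int cpu) {
    run_queue_t *q = &queues[cpu];
    if (q->count == 0)
        return -1;                  /* this CPU idles; it never touches another queue */
    int pid = q->procs[0];
    for (int i = 1; i < q->count; i++)
        q->procs[i - 1] = q->procs[i];
    q->count--;
    return pid;
}

int main(void) {
    for (int pid = 1; pid <= 5; pid++)
        admit(pid);
    for (int cpu = 0; cpu < NCPUS; cpu++) {
        int pid;
        while ((pid = dispatch(cpu)) != -1)
            printf("CPU%d runs P%d\n", cpu + 1, pid);
    }
    return 0;
}
```

Because dispatch only touches its own queue there is no shared lock, but nothing stops one queue from draining while another stays full, which is the load-imbalance risk listed above.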
3. Load Balancing
- Ensures that all processors have roughly equal work.
- Can be push-based (a central scheduler pushes tasks to lightly loaded CPUs) or pull-based (an idle CPU pulls tasks from a busy one).
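A rough sketch of the pull-based variant, reusing the per-CPU queue idea from the previous sketch: an idle CPU scans for the most loaded queue and migrates one waiting process from it. The queue contents and the "take from the tail" choice are assumptions made for illustration.

```c
#include <stdio.h>

#define NCPUS     3
#define QUEUE_CAP 8

typedef struct {
    int procs[QUEUE_CAP];   /* process IDs waiting on this CPU */
    int count;              /* how many are queued */
} run_queue_t;

/* Move one process from the busiest queue to the idle CPU's queue. */
static int pull_task(run_queue_t q[], int idle_cpu) {
    int busiest = -1;
    for (int c = 0; c < NCPUS; c++)
        if (c != idle_cpu && (busiest < 0 || q[c].count > q[busiest].count))
            busiest = c;

    if (busiest < 0 || q[busiest].count <= 1)
        return -1;                                    /* nothing worth migrating */

    int pid = q[busiest].procs[--q[busiest].count];   /* take from the tail */
    q[idle_cpu].procs[q[idle_cpu].count++] = pid;
    printf("CPU%d pulled P%d from CPU%d\n", idle_cpu + 1, pid, busiest + 1);
    return pid;
}

int main(void) {
    /* CPU1 is overloaded, CPU3 is idle. */
    run_queue_t q[NCPUS] = {
        { {1, 2, 3, 4}, 4 },
        { {5, 6},       2 },
        { {0},          0 },
    };
    while (q[2].count < q[0].count)    /* keep pulling until roughly balanced */
        if (pull_task(q, 2) < 0) break;
    for (int c = 0; c < NCPUS; c++)
        printf("CPU%d queue length: %d\n", c + 1, q[c].count);
    return 0;
}
```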
4. Processor Affinity (CPU Affinity)
- A process prefers to run on the same CPU it ran on previously (to use cache memory efficiently).
- Types:
  - Soft affinity: the OS tries to keep the process on the same CPU, but does not guarantee it
  - Hard affinity: the process is bound to a specific CPU
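On Linux, hard affinity can be requested through the sched_setaffinity system call; the short, Linux-specific sketch below pins the calling process to CPU 0 (the CPU number is arbitrary here).

```c
#define _GNU_SOURCE            /* needed for CPU_ZERO/CPU_SET and sched_setaffinity */
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(0, &mask);                         /* allow execution on CPU 0 only */

    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(mask), &mask) == -1) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("Process %d is now bound to CPU 0\n", (int)getpid());
    /* ... cache-sensitive work would run here ... */
    return 0;
}
```

Soft affinity, by contrast, needs no call at all: the scheduler simply prefers the previous CPU when it can.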
Example Scenario
Let’s say we have:
- 3 processes: P1 (burst=4), P2 (burst=6), P3 (burst=3)
- 2 processors: CPU1 and CPU2
Centralized Scheduling (Shared Queue)
- Time 0: P1 → CPU1, P2 → CPU2
- When P1 finishes (at time 4), assign P3 to CPU1; P3 finishes at time 7
Result:
- CPU1: P1 (0–4) → P3 (4–7)
- CPU2: P2 (0–6)
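The timeline above can be reproduced with a tiny greedy simulation: each process in queue order goes to whichever CPU becomes free first (the rule that CPU1 wins ties is only an assumption made to match the example).

```c
#include <stdio.h>

int main(void) {
    const char *names[] = {"P1", "P2", "P3"};
    int burst[] = {4, 6, 3};
    int n = 3;
    int cpu_free_at[2] = {0, 0};     /* time at which each CPU next becomes idle */

    for (int i = 0; i < n; i++) {
        /* pick the CPU that is free earliest; CPU1 wins ties */
        int cpu = (cpu_free_at[0] <= cpu_free_at[1]) ? 0 : 1;
        int start  = cpu_free_at[cpu];
        int finish = start + burst[i];
        cpu_free_at[cpu] = finish;
        printf("%s runs on CPU%d from t=%d to t=%d\n", names[i], cpu + 1, start, finish);
    }
    return 0;
}
```

Running it prints P1 on CPU1 from 0 to 4, P2 on CPU2 from 0 to 6, and P3 on CPU1 from 4 to 7.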
Advantages of Multiprocessor Scheduling
- Faster execution through parallelism
- Better CPU utilization
- Improved responsiveness for users
- Can run more processes simultaneously
Challenges
- Load imbalance
- Complex scheduling logic
- Contention for shared resources (e.g., memory)
- Cache coherence and processor affinity management