- Quescol 1-Minute Read
- Key Points about Activity Selection Problem
- Let’s Understand in Depth
- What is Activity Selection Problem?
- Greedy Approach in Activity Selection Problem
- Algorithm Steps of Activity Selection Problem
- Step 1: Sort activities by their finish times in ascending order
- Step 2: Select the first activity from the sorted list
- Step 3: For each of the next activities, check its start time
- Step 4: If an activity’s start time is greater than or equal to the finish time of the last selected activity, select it
- Step 5: If the start time is less than the last selected activity’s finish time, skip it
- Step 6: Repeat this process until all activities have been considered
- Pseudocode of Activity Selection Problem
- Example of Activity Selection Problem
- Step 1: Sort Activities by Finish Time
- Step 2: Select the First Activity
- Step 3: Check the Next Activities One by One
- Final Selected Activities
- Greedy Approach Vs Other Approach
- 1. Dynamic Programming Approach
- 2. Backtracking / Brute Force
- 3. Divide and Conquer (Not common)
- Why Greedy is Best?
- Greedy vs Backtracking vs Divide & Conquer vs DP
- Why Greedy Algorithm Works for the Activity Selection Problem?
- 1. Greedy Choice Property
- Why this works?
- 2. Optimal Substructure Property
- In this problem:
- Therefore — Why It’s Correct?
- Advantages of Greedy Approach
- Limitations of Greedy Approach
- Conclusion
The Activity Selection Problem is one of the most famous examples of the Greedy Algorithm technique. It deals with a situation where we have a single person (or resource) who can perform only one activity at a time, and we are given a list of activities with their start and finish times.
The main aim is to select the maximum number of activities that do not overlap in time.
Quescol 1-Minute Read
The Activity Selection Problem is a classic example of the Greedy Algorithm technique. It focuses on selecting the maximum number of activities that can be performed by a single person or machine, given that no two activities overlap in time. Each activity has a start time and a finish time, and the main goal is to schedule as many non-overlapping activities as possible. The greedy choice here is to always pick the activity that finishes earliest, leaving maximum room for the remaining ones. This approach ensures efficiency and guarantees an optimal result.
Key Points about Activity Selection Problem
- Type: Greedy Algorithm problem.
- Goal: Select the maximum number of activities that do not overlap.
- Main Idea: Always pick the activity that finishes earliest and doesn’t overlap.
- Inputs: A list of activities with their start and finish times.
- Core Properties:
- Greedy Choice Property: Picking the earliest finishing activity leads to an optimal solution.
- Optimal Substructure: Remaining activities form a smaller version of the same problem.
- Steps Involved:
- Sort activities by finish time (ascending order).
- Select the first activity (earliest finish).
- Check each next activity’s start time against the last selected activity’s finish time.
- Select non-overlapping activities; skip overlapping ones.
- Final Result: A set of activities that can be done without time conflicts.
Let’s Understand in Depth
What is Activity Selection Problem?
The Activity Selection Problem is a greedy algorithm problem that helps us choose the maximum number of activities that can be done by a single person or machine, without overlapping in time.
In this problem, you are given a list of activities. Each activity has two things:
- A start time → when it begins
- A finish time → when it ends
The goal is to select as many activities as possible so that no two activities happen at the same time.
Greedy Approach in Activity Selection Problem
The greedy approach works by making the best local choice at each step — hoping it leads to the best global solution.
For this problem, the best local choice is to:
Always pick the activity that finishes earliest (among the remaining ones) and doesn’t overlap with previously selected activities.
This ensures that we leave as much remaining time as possible for the rest of the activities.
Algorithm Steps of Activity Selection Problem
Step 1: Sort activities by their finish times in ascending order
Arrange all activities so the one that ends earliest comes first, and the one that ends latest comes last.
Sorting by finish time allows us to always pick the activity that leaves the most room for upcoming activities.
Step 2: Select the first activity from the sorted list
The activity that finishes earliest is always a safe choice since it maximizes the remaining time available for other activities.
Step 3: For each of the next activities, check its start time
Go through the rest of the activities one by one and examine when each one begins.
Step 4: If an activity’s start time is greater than or equal to the finish time of the last selected activity, select it
This condition ensures no overlap between activities. If the activity fits after the previously selected one, include it.
Step 5: If the start time is less than the last selected activity’s finish time, skip it
This means the activity overlaps with the one already chosen, so it cannot be selected.
Step 6: Repeat this process until all activities have been considered
Once all activities are checked, the selected list represents the maximum number of non-overlapping activities possible.
Pseudocode of Activity Selection Problem
ActivitySelection(s[], f[], n)
1. Sort activities according to their finishing time
2. Select the first activity and print it
3. Set 'last' = 0 (index of last selected activity)
4. For i = 1 to n-1:
       if s[i] ≥ f[last]:
           Select activity i
           last = i
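The pseudocode above can be sketched directly in Python. The function name and the choice to return a list of selected indices are illustrative, not part of the original pseudocode:

```python
def activity_selection(start, finish):
    """Return indices of a maximum set of non-overlapping activities."""
    n = len(start)
    if n == 0:
        return []
    # Step 1: sort activity indices by finish time (ascending)
    order = sorted(range(n), key=lambda i: finish[i])

    # Step 2: the earliest-finishing activity is always a safe first pick
    selected = [order[0]]
    last = order[0]  # index of the last selected activity

    # Steps 3-6: keep each activity that starts at or after the last finish
    for i in order[1:]:
        if start[i] >= finish[last]:  # no overlap with the last selection
            selected.append(i)
            last = i
    return selected

# Activities A1..A6 from the example below
start = [1, 2, 3, 0, 5, 8]
finish = [3, 4, 5, 6, 7, 9]
print([f"A{i + 1}" for i in activity_selection(start, finish)])
# Prints ['A1', 'A3', 'A5', 'A6']
```

The single pass after sorting gives the O(n log n) running time discussed later.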
Example of Activity Selection Problem
We are given a list of activities with their start and finish times:
| Activity | Start | Finish |
|---|---|---|
| A1 | 1 | 3 |
| A2 | 2 | 4 |
| A3 | 3 | 5 |
| A4 | 0 | 6 |
| A5 | 5 | 7 |
| A6 | 8 | 9 |
Our goal is to select the maximum number of activities that do not overlap in time; that is, no two chosen activities should happen at the same time.
Step 1: Sort Activities by Finish Time
We arrange all activities in ascending order of their finish times (the one that finishes first comes first).
After sorting:
| Activity | Start | Finish |
|---|---|---|
| A1 | 1 | 3 |
| A2 | 2 | 4 |
| A3 | 3 | 5 |
| A4 | 0 | 6 |
| A5 | 5 | 7 |
| A6 | 8 | 9 |
(In this case, they are already sorted by their finish times.)
Step 2: Select the First Activity
We start by selecting the first activity (A1) because it finishes earliest (at time 3).
- Selected: A1 (1–3)
Step 3: Check the Next Activities One by One
Now we move to the next activity and check if its start time is greater than or equal to the finish time of the last selected activity.
- A2 (2–4): Start time 2 < finish time of A1 (3) → Overlaps → Skip it
- A3 (3–5): Start time 3 ≥ finish time of A1 (3) → No overlap → Select it
  - Selected: A1 (1–3), A3 (3–5)
- A4 (0–6): Start time 0 < finish time of A3 (5) → Overlaps → Skip it
- A5 (5–7): Start time 5 ≥ finish time of A3 (5) → No overlap → Select it
  - Selected: A1 (1–3), A3 (3–5), A5 (5–7)
- A6 (8–9): Start time 8 ≥ finish time of A5 (7) → No overlap → Select it
  - Selected: A1 (1–3), A3 (3–5), A5 (5–7), A6 (8–9)
Final Selected Activities
The maximum number of non-overlapping activities that can be performed is four:
- A1, A3, A5, A6
Greedy Approach Vs Other Approach
Below is a detailed comparison showing why the Greedy Approach is the best choice for solving the Activity Selection Problem.
1. Dynamic Programming Approach
Dynamic Programming (DP) can also solve the Activity Selection Problem.
How it works:
- Consider every pair of compatible activities.
- Try all possible ways of selecting activities between them.
- Build a DP table where `dp[i][j]` stores the maximum number of activities that can be completed between activity `i` and activity `j`.
Complexity:
- Time: O(n²)
- Space: O(n²)
Why is DP not preferred?
- Greedy gives the optimal answer in O(n log n) (due to sorting) and sometimes O(n) if already sorted.
- DP is slower and unnecessary because the greedy-choice property holds perfectly.
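As a rough sketch of the DP idea, here is a simpler one-dimensional O(n²) formulation than the `dp[i][j]` table described above: after sorting by finish time, `dp[i]` holds the size of the largest compatible subset whose last activity is `i`. The function and variable names are illustrative:

```python
def activity_selection_dp(start, finish):
    """O(n^2) dynamic-programming count of non-overlapping activities."""
    n = len(start)
    order = sorted(range(n), key=lambda i: finish[i])
    s = [start[i] for i in order]
    f = [finish[i] for i in order]

    dp = [1] * n  # each activity alone forms a subset of size 1
    for i in range(n):
        for j in range(i):
            if f[j] <= s[i]:  # activity j ends before activity i starts
                dp[i] = max(dp[i], dp[j] + 1)
    return max(dp, default=0)
```

On the example activities above this returns 4, matching the greedy answer, but after O(n²) work instead of a single linear scan.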
2. Backtracking / Brute Force
This method tries every possible subset of activities.
How it works:
- Generate all subsets.
- Check which subset has no overlaps.
- Return the size of the largest valid subset.
Complexity:
- Time: O(2ⁿ)
- Very slow for large `n`; suitable only for understanding or for very small inputs.
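A minimal sketch of this brute-force idea, checking subsets from largest to smallest so the first valid one found is the answer (the function name is an illustrative choice):

```python
from itertools import combinations

def activity_selection_brute(start, finish):
    """Try every subset of activities; return the size of the largest
    subset with no time overlaps. O(2^n) -- for small inputs only."""
    n = len(start)
    for r in range(n, 0, -1):  # largest subsets first
        for subset in combinations(range(n), r):
            # A subset is valid if, ordered by start time, each activity
            # begins no earlier than the previous one finishes.
            acts = sorted(subset, key=lambda i: start[i])
            if all(start[acts[k + 1]] >= finish[acts[k]]
                   for k in range(len(acts) - 1)):
                return r
    return 0
```

Even for the six-activity example this inspects up to 2⁶ subsets, which illustrates why it is kept only as a teaching baseline.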
3. Divide and Conquer (Not common)
Some versions try splitting activities into smaller groups and solving them recursively.
But:
This still ends up similar to DP and is not efficient compared to greedy.
Why Greedy is Best?
The Activity Selection Problem satisfies two key greedy principles:
- Greedy-choice property: Choosing the earliest finishing activity never blocks an optimal solution.
- Optimal substructure: The remaining activities form the same problem again.
Because of these, greedy works perfectly and gives the optimal solution.
Greedy vs Backtracking vs Divide & Conquer vs DP
| Approach | Optimal? | Time Complexity | Usefulness |
|---|---|---|---|
| Greedy | Yes | O(n log n) | Best & fastest |
| Dynamic Programming | Yes | O(n²) | Slow, unnecessary |
| Backtracking | Yes | O(2ⁿ) | Only for small cases |
| Divide & Conquer | Yes | Similar to DP | Rarely used |
Why Greedy Algorithm Works for the Activity Selection Problem?
To prove that the greedy approach works correctly for the Activity Selection Problem, we need to understand why choosing the activity that finishes earliest always gives the best possible result.
This can be explained using two main properties of greedy algorithms:
- Greedy Choice Property
- Optimal Substructure Property
Let’s understand each in detail:
1. Greedy Choice Property
The greedy choice property means that making the best local choice at every step (without thinking about the entire problem) will still lead to the global best solution.
In this problem, the local best choice is to always pick the activity that finishes earliest among the remaining ones.
Why this works?
- When we pick the activity that finishes first, it leaves maximum remaining time for all the other activities that come after it.
- So, by selecting the earliest-finishing activity, we don’t block any possible future activities.
- Even if another activity starts early but finishes late, it reduces our chance to include more activities later — hence not an optimal choice.
2. Optimal Substructure Property
The optimal substructure property means that if we solve a problem optimally, then the remaining smaller part of the problem can also be solved optimally in the same way.
In this problem:
- Once we pick one activity (say it finishes at time t), the remaining problem is to select the maximum number of activities that start at or after time t.
- This remaining part is exactly the same type of problem as the original one, just with a smaller set of activities.
- So, we can apply the same greedy strategy again to the remaining activities.
This proves that the problem can be divided into smaller, similar subproblems, each solvable using the same greedy rule.
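The two properties can be seen directly in a recursive sketch of the greedy algorithm, where each call makes one greedy choice and then recurses on the smaller subproblem. The function name and the `(start, finish)` pair representation are illustrative, and the sketch assumes each activity's start is before its finish:

```python
def select_recursive(activities):
    """Recursive greedy: pick the earliest-finishing activity, then solve
    the same problem on the activities that start at or after its finish.

    `activities` is a list of (start, finish) pairs with start < finish.
    """
    if not activities:
        return []
    # Greedy choice property: the earliest-finishing activity is safe
    first = min(activities, key=lambda a: a[1])
    # Optimal substructure: the rest is the same problem on a smaller set
    remaining = [a for a in activities if a[0] >= first[1]]
    return [first] + select_recursive(remaining)

print(select_recursive([(1, 3), (2, 4), (3, 5), (0, 6), (5, 7), (8, 9)]))
# Prints [(1, 3), (3, 5), (5, 7), (8, 9)]
```

Each recursive call shrinks the activity set, mirroring how the proof reduces the original problem to a smaller copy of itself.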
Therefore — Why It’s Correct?
Since the Activity Selection Problem satisfies both properties —
- Greedy Choice Property: picking the earliest finishing activity is always safe,
- Optimal Substructure: the remaining activities form a smaller version of the same problem,
the greedy algorithm always produces the optimal solution — that is, it selects the maximum number of non-overlapping activities possible.
Advantages of Greedy Approach
- Simple and Easy to Understand
- The Greedy approach is based on a simple idea — make the best choice at every step. There’s no need for complex recursion or dynamic programming. This makes it much easier for beginners to understand and apply.
- Fast Execution
- Since the greedy algorithm only makes one pass through the list of activities after sorting, it runs very quickly. For example, after sorting the activities by finish time, it just checks each activity once — giving a time complexity of O(n log n) (because of sorting). This makes it very efficient for large input sizes.
- Requires Less Memory
- Unlike dynamic programming, greedy algorithms don’t store results of previous subproblems. They only keep track of the current solution, which means low space complexity (O(1) or O(n) depending on implementation). This makes them suitable for devices with limited memory.
- Produces Optimal Results (for Certain Problems)
- For problems like Activity Selection, Huffman Coding, or Fractional Knapsack, the greedy approach guarantees an optimal solution. This is because in these cases, the “local best” choice leads to the “global best” outcome.
- Easy to Implement and Debug
- The logic is straightforward — pick the best available option and move forward. Because of this, greedy algorithms require fewer lines of code and are less prone to logical errors. Debugging them is also simpler since each step is independent and easy to trace.
- Useful for Real-World Applications
- Many real-world scheduling and optimization problems use greedy strategies — such as job scheduling, resource allocation, or bandwidth management. Their speed and simplicity make them practical for quick decision-making systems.
Limitations of Greedy Approach
- Does Not Always Give the Optimal Solution
- The biggest drawback is that the greedy method only looks for the best local choice at each step without considering the overall picture. Because of this, it can sometimes end up with a solution that’s not globally optimal. For example, in the 0/1 Knapsack Problem, the greedy approach fails to give the correct answer.
- Problem-Specific Nature
- The greedy strategy doesn’t work for every problem. It works only when the problem satisfies Greedy Choice Property and Optimal Substructure. If these two properties aren’t present, the greedy method may give wrong or incomplete results.
- No Backtracking or Re-evaluation
- Once a choice is made, the greedy algorithm never goes back to change it. This means if an early decision was wrong, the algorithm cannot correct it later, leading to suboptimal results.
- Difficult to Identify Applicability
- It’s not always clear whether a problem can be solved using a greedy approach. Determining if a problem satisfies the required properties (like optimal substructure) can be tricky and requires deep analysis.
- Fails for Complex Problems
- For problems that have multiple constraints or dependencies between decisions (like graph coloring or the traveling salesman problem), the greedy approach often fails because it cannot handle complex trade-offs between steps.
- Depends on Sorting or Specific Order
- Many greedy algorithms require sorting the input first (for example, by finish time or value-to-weight ratio). This extra step increases preprocessing time and makes it less efficient in some cases compared to dynamic programming.
Conclusion
The Greedy Approach is a fast and simple method that makes the best choice at each step to find an overall optimal solution. It works well for problems like Activity Selection where local choices lead to the best global result. However, it may fail for problems that don’t have the Greedy Choice Property or Optimal Substructure.