Problem Statement
What are Greedy Algorithms? Explain when they work and provide examples of problems that can and cannot be solved greedily.
Explanation
Greedy algorithms solve optimization problems by making the locally optimal choice at each step in the hope of reaching a global optimum. Unlike dynamic programming, which considers all possibilities, greedy algorithms commit to each choice immediately and never reconsider it.
The greedy approach works when a problem exhibits two properties. First, the greedy choice property means that a globally optimal solution can be reached by making locally optimal choices: at each step we can pick whatever looks best at the moment without worrying about future consequences. Second, optimal substructure means that an optimal solution to the problem contains optimal solutions to its subproblems.
The greedy algorithm strategy follows this pattern. Sort or organize the input, if needed, so that the locally optimal choice is easy to identify at each step. Make the greedy choice based on some criterion such as maximum value, minimum cost, earliest finish time, or highest ratio. Update the problem state after making the choice. Repeat until the solution is complete or no more choices are possible, then return the solution.
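As a rough illustration of this pattern, here is a minimal Python sketch; the names `candidates`, `key`, and `is_feasible` are hypothetical placeholders for the problem-specific parts, not part of any standard library.

```python
def greedy(candidates, key, is_feasible):
    """Generic greedy skeleton. `candidates`, `key`, and `is_feasible`
    are hypothetical placeholders for the problem-specific details."""
    solution = []
    # Step 1: organize the input so the locally optimal choice comes first.
    for candidate in sorted(candidates, key=key):
        # Step 2: make the greedy choice only if it keeps the solution valid.
        if is_feasible(solution, candidate):
            solution.append(candidate)  # Step 3: update the problem state.
    # Step 4: no further choices remain; return the accumulated solution.
    return solution
```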
Advantages of greedy algorithms include simplicity, since they are usually easy to understand and implement; efficiency, with time complexity that is often linear or O(n log n); minimal space usage, typically O(1) extra space; and intuitiveness, since the logic mirrors natural human decision-making.
Disadvantages include that they do not guarantee optimal solutions for all problems. They may get stuck in local optima, missing the global optimum. They require a proof of correctness, which can be difficult to construct. They cannot backtrack if a choice leads to suboptimal results.
Problems that CAN be solved optimally with greedy include Activity Selection, where we select the maximum number of non-overlapping activities by always choosing the earliest finish time. Huffman Coding creates optimal prefix codes by repeatedly merging the lowest-frequency nodes into a tree. Dijkstra's Shortest Path finds shortest paths in graphs with non-negative edge weights by always exploring the nearest unvisited vertex. Fractional Knapsack maximizes value by taking items with the highest value-to-weight ratio. Minimum Spanning Tree, via Kruskal's or Prim's algorithm, adds minimum-weight edges while avoiding cycles. Coin Change also works greedily for certain coin systems, such as US coin denominations, where taking the largest coin first gives the optimal answer.
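As one illustration from this list, here is a short Python sketch of the Fractional Knapsack greedy strategy; the item values, weights, and capacity are made-up example numbers.

```python
def fractional_knapsack(items, capacity):
    """Greedy fractional knapsack: take items in order of value-to-weight
    ratio, splitting the last item if it does not fully fit."""
    # Sort by value-to-weight ratio, highest first (the greedy criterion).
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total_value = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)           # whole item or a fraction of it
        total_value += value * (take / weight)
        capacity -= take
    return total_value

# Example: items as (value, weight) pairs, knapsack capacity 50.
print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 240.0
```

Because items can be split, taking the best ratio first can never be beaten by a different ordering, which is exactly the greedy choice property at work.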
Problems that CANNOT be solved optimally with greedy include 0/1 Knapsack, where items cannot be taken fractionally and dynamic programming is required. Coin Change fails for arbitrary coin systems, for example coins of 1, 3, and 4, where greedy pays amount 6 with three coins (4 + 1 + 1) while the optimum is two (3 + 3). The Traveling Salesman Problem, where repeatedly choosing the nearest city does not give an optimal tour. Longest Path in a graph, where greedy choices lead to local optima. Subset Sum, where selecting the largest elements first may miss the optimal combination.
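The coin-change counterexample above is easy to verify in code. This sketch compares a largest-coin-first greedy count against a simple dynamic-programming count for the 1, 3, 4 system.

```python
def greedy_coin_count(coins, amount):
    """Repeatedly take the largest coin that still fits (may be suboptimal)."""
    count = 0
    for coin in sorted(coins, reverse=True):
        count += amount // coin
        amount %= coin
    return count if amount == 0 else None

def dp_coin_count(coins, amount):
    """Minimum number of coins via dynamic programming (always optimal)."""
    best = [0] + [float("inf")] * amount
    for value in range(1, amount + 1):
        for coin in coins:
            if coin <= value:
                best[value] = min(best[value], best[value - coin] + 1)
    return best[amount]

# Coins {1, 3, 4}, amount 6: greedy uses 4 + 1 + 1 = 3 coins, DP finds 3 + 3 = 2.
print(greedy_coin_count([1, 3, 4], 6))  # 3
print(dp_coin_count([1, 3, 4], 6))      # 2
```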
To determine whether greedy works for a problem, try the greedy approach and test it with examples. Check whether locally optimal choices actually lead to a global optimum. Look for counterexamples where greedy fails. Prove the greedy choice property mathematically if possible. If greedy fails, consider dynamic programming or other approaches.
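One practical way to look for counterexamples is a small brute-force search. The sketch below, assuming coin change is the problem being tested, compares greedy against an exhaustive optimum over small coin systems and amounts and reports the first mismatch it finds, such as the 1, 3, 4 system above.

```python
import itertools

def greedy_coins(coins, amount):
    """Largest-coin-first count; assumes 1 is a coin so greedy never gets stuck."""
    count = 0
    for coin in sorted(coins, reverse=True):
        count += amount // coin
        amount %= coin
    return count

def optimal_coins(coins, amount):
    """Brute-force minimum coin count, trying every multiset size in turn."""
    for count in range(amount + 1):
        for combo in itertools.combinations_with_replacement(coins, count):
            if sum(combo) == amount:
                return count

# Search small coin systems containing 1 for an amount where greedy is suboptimal.
for a, b in itertools.combinations(range(2, 11), 2):
    coins = [1, a, b]
    mismatch = next((m for m in range(1, 21)
                     if greedy_coins(coins, m) != optimal_coins(coins, m)), None)
    if mismatch is not None:
        print(f"greedy fails for coins {coins} at amount {mismatch}")
        break
```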
The Activity Selection problem is a classic greedy example. Given activities with start and finish times, select the maximum number of non-overlapping activities. The greedy choice is to always pick the activity with the earliest finish time, because finishing early leaves the most room for future activities. Sort the activities by finish time, select the first one, then repeatedly select the next activity whose start time is at or after the previous finish time.
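A compact Python sketch of this procedure, using made-up (start, finish) pairs as example input:

```python
def select_activities(activities):
    """Greedy activity selection: sort by finish time, then repeatedly take
    the next activity that starts at or after the previous finish time."""
    selected = []
    last_finish = float("-inf")
    for start, finish in sorted(activities, key=lambda sf: sf[1]):
        if start >= last_finish:        # compatible with everything chosen so far
            selected.append((start, finish))
            last_finish = finish
    return selected

# Example: six (start, finish) pairs; the greedy choice keeps three of them.
activities = [(1, 4), (3, 5), (0, 6), (5, 7), (8, 9), (5, 9)]
print(select_activities(activities))  # [(1, 4), (5, 7), (8, 9)]
```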