Algorithm Design Techniques
What are Algorithm Design Techniques?
Algorithm design techniques are general strategies or paradigms used to construct efficient algorithms. Rather than solving each problem from scratch, these techniques provide proven blueprints that can be adapted to a wide range of problems.
| Technique | Core Idea | Examples |
|---|---|---|
| Divide and Conquer | Split → Solve → Combine | Merge Sort, Quick Sort, Binary Search |
| Greedy | Always pick the locally optimal choice | Dijkstra, Kruskal, Huffman Coding |
| Dynamic Programming | Store solutions to overlapping subproblems | Fibonacci, Knapsack, LCS, Floyd-Warshall |
| Backtracking | Try all possibilities, undo bad choices | N-Queens, Sudoku, Maze solving |
| Branch and Bound | Backtracking + pruning with bounds | TSP, 0/1 Knapsack |
| Randomized | Use randomness to improve average performance | Randomized Quick Sort, Monte Carlo |
1. Divide and Conquer
Break the problem into smaller subproblems of the same type, solve them recursively, then combine the results. The recurrence is typically T(n) = aT(n/b) + f(n).
- Divide — split the problem into 2 or more subproblems
- Conquer — solve each subproblem recursively (base case: solve directly)
- Combine — merge the subproblem solutions into the final answer
When to use: When the problem can be split into independent subproblems of the same type and the combination step is efficient.
public class BinarySearch {
    /*
     * DIVIDE: Split array in half by computing mid
     * CONQUER: Search only the relevant half
     * COMBINE: No combine step needed — answer comes directly
     *
     * Recurrence: T(n) = T(n/2) + O(1) → O(log n)
     */
    static int binarySearch(int[] arr, int lo, int hi, int target) {
        if (lo > hi) return -1;            // base case: not found
        int mid = lo + (hi - lo) / 2;      // DIVIDE: find midpoint
        if (arr[mid] == target) return mid;
        if (arr[mid] < target)             // CONQUER: search right half
            return binarySearch(arr, mid + 1, hi, target);
        return binarySearch(arr, lo, mid - 1, target); // search left half
    }

    public static void main(String[] args) {
        int[] arr = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91};
        System.out.println("Search 23: index " + binarySearch(arr, 0, arr.length - 1, 23)); // 5
        System.out.println("Search 10: index " + binarySearch(arr, 0, arr.length - 1, 10)); // -1
    }
}
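Binary search has no real combine step, so it only partially illustrates the paradigm. Merge Sort, also named in the table above, exercises all three phases; here is a minimal sketch (the sample array values are illustrative):

```java
import java.util.Arrays;

public class MergeSort {
    // DIVIDE: split in half; CONQUER: sort each half; COMBINE: merge
    // Recurrence: T(n) = 2T(n/2) + O(n) → O(n log n)
    static void mergeSort(int[] arr, int lo, int hi) {   // sorts arr[lo..hi)
        if (hi - lo <= 1) return;        // base case: 0 or 1 elements
        int mid = lo + (hi - lo) / 2;    // DIVIDE
        mergeSort(arr, lo, mid);         // CONQUER left half
        mergeSort(arr, mid, hi);         // CONQUER right half
        merge(arr, lo, mid, hi);         // COMBINE
    }

    // Merge two adjacent sorted runs arr[lo..mid) and arr[mid..hi)
    static void merge(int[] arr, int lo, int mid, int hi) {
        int[] tmp = new int[hi - lo];
        int i = lo, j = mid, k = 0;
        while (i < mid && j < hi)
            tmp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
        while (i < mid) tmp[k++] = arr[i++];
        while (j < hi)  tmp[k++] = arr[j++];
        System.arraycopy(tmp, 0, arr, lo, tmp.length);
    }

    public static void main(String[] args) {
        int[] arr = {38, 27, 43, 3, 9, 82, 10};
        mergeSort(arr, 0, arr.length);
        System.out.println(Arrays.toString(arr)); // [3, 9, 10, 27, 38, 43, 82]
    }
}
```

Note how the real work happens in the COMBINE step: each merge is O(n), and there are O(log n) levels of recursion.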
2. Greedy Method
At each step, make the choice that looks best right now (locally optimal) without reconsidering past choices. Greedy works when the problem has the greedy choice property and optimal substructure.
When to use: Activity scheduling, minimum spanning trees, shortest paths (non-negative weights), Huffman coding.
import java.util.Arrays;

public class CoinChange {
    // Greedy coin change — works for standard denominations {1, 5, 10, 25}
    // FAILS for arbitrary denominations like {1, 3, 4}
    static void greedyCoinChange(int[] coins, int amount) {
        Arrays.sort(coins); // sort ascending
        System.out.println("Making change for " + amount + ":");
        int total = 0;
        for (int i = coins.length - 1; i >= 0 && amount > 0; i--) {
            while (amount >= coins[i]) {
                System.out.println("  Use coin: " + coins[i]);
                amount -= coins[i];
                total++;
            }
        }
        System.out.println("Total coins used: " + total);
    }

    public static void main(String[] args) {
        int[] coins = {1, 5, 10, 25};
        greedyCoinChange(coins, 41);
        // Uses 25, 10, 5, 1 → 4 coins (optimal for standard denominations)

        // Greedy FAILS here:
        int[] badCoins = {1, 3, 4};
        greedyCoinChange(badCoins, 6);
        // Greedy: 4 + 1 + 1 = 3 coins
        // Optimal: 3 + 3 = 2 coins → greedy misses this!
    }
}
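Coin change shows where greedy can fail; activity scheduling, listed under "When to use" above, is a case where it provably succeeds: sort activities by finish time, then repeatedly take the earliest-finishing activity compatible with the last one chosen. A sketch (the interval data is illustrative):

```java
import java.util.Arrays;
import java.util.Comparator;

public class ActivitySelection {
    // Greedy choice: among activities that start after the last chosen one
    // finishes, pick the one that finishes earliest. Finishing early leaves
    // the most room for the rest, which is why this greedy is optimal.
    static int selectMax(int[][] activities) {  // each entry: {start, finish}
        Arrays.sort(activities, Comparator.comparingInt(a -> a[1])); // by finish time
        int count = 0, lastFinish = Integer.MIN_VALUE;
        for (int[] a : activities) {
            if (a[0] >= lastFinish) { // compatible with the last chosen activity
                count++;
                lastFinish = a[1];
            }
        }
        return count;
    }

    public static void main(String[] args) {
        int[][] acts = {{1, 4}, {3, 5}, {0, 6}, {5, 7}, {3, 9}, {5, 9}, {6, 10}, {8, 11}};
        System.out.println("Max activities: " + selectMax(acts)); // 3: (1,4), (5,7), (8,11)
    }
}
```

Unlike the coin example, this greedy choice never needs revisiting: exchanging any optimal schedule's first activity for the earliest-finishing one can never reduce the total count.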
3. Dynamic Programming
Dynamic Programming (DP) solves problems by breaking them into overlapping subproblems and storing results to avoid recomputation. It requires two properties:
- Optimal substructure — optimal solution contains optimal solutions to subproblems
- Overlapping subproblems — same subproblems are solved multiple times
When to use: Optimization problems where brute force is exponential but subproblems repeat — Knapsack, LCS, shortest paths, edit distance.
import java.util.Arrays;

public class Fibonacci {
    // Naive recursion — O(2^n) — recomputes same values
    // fib(5) calls fib(3) twice, fib(2) three times, etc.
    static long fibNaive(int n) {
        if (n <= 1) return n;
        return fibNaive(n - 1) + fibNaive(n - 2);
    }

    // DP Memoization (Top-Down) — O(n) time, O(n) space
    // Store computed values to avoid recomputation
    static long[] memo = new long[100];
    static long fibMemo(int n) {
        if (n <= 1) return n;
        if (memo[n] != 0) return memo[n]; // return cached result
        memo[n] = fibMemo(n - 1) + fibMemo(n - 2);
        return memo[n];
    }

    // DP Tabulation (Bottom-Up) — O(n) time, O(n) space
    // Build solution from base cases upward
    static long fibDP(int n) {
        if (n <= 1) return n;
        long[] dp = new long[n + 1];
        dp[0] = 0; dp[1] = 1;
        for (int i = 2; i <= n; i++) dp[i] = dp[i - 1] + dp[i - 2];
        return dp[n];
    }

    // Space-optimized DP — O(n) time, O(1) space
    static long fibOpt(int n) {
        if (n <= 1) return n;
        long a = 0, b = 1;
        for (int i = 2; i <= n; i++) { long c = a + b; a = b; b = c; }
        return b;
    }

    public static void main(String[] args) {
        Arrays.fill(memo, 0);
        System.out.println("fib(10) naive: " + fibNaive(10)); // 55
        System.out.println("fib(10) memo:  " + fibMemo(10));  // 55
        System.out.println("fib(10) dp:    " + fibDP(10));    // 55
        System.out.println("fib(50) opt:   " + fibOpt(50));   // 12586269025
    }
}
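Fibonacci illustrates the mechanics, but the 0/1 Knapsack mentioned above is the more typical DP optimization problem: brute force tries all 2^n subsets, while tabulation runs in O(n × W). A minimal sketch using the space-optimized one-dimensional table (the item weights and values are illustrative):

```java
public class Knapsack {
    // dp[w] = best value achievable with capacity w using the items seen so far.
    // Iterating w downward means each item is counted at most once per capacity
    // (0/1 knapsack); iterating upward would allow unlimited copies instead.
    // Time O(n * W), space O(W).
    static int knapsack(int[] weight, int[] value, int capacity) {
        int[] dp = new int[capacity + 1];
        for (int i = 0; i < weight.length; i++) {
            for (int w = capacity; w >= weight[i]; w--) {
                // either skip item i, or take it and add its value
                dp[w] = Math.max(dp[w], dp[w - weight[i]] + value[i]);
            }
        }
        return dp[capacity];
    }

    public static void main(String[] args) {
        int[] weight = {1, 3, 4, 5};
        int[] value  = {1, 4, 5, 7};
        System.out.println("Best value: " + knapsack(weight, value, 7)); // 9 (items of weight 3 and 4)
    }
}
```

Both DP properties are visible here: optimal substructure (the best packing of capacity w extends a best packing of capacity w − weight[i]) and overlapping subproblems (the same capacities recur for every item).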
4. Backtracking
Backtracking explores the solution space by building candidates incrementally and abandoning (backtracking from) a partial candidate as soon as it can no longer lead to a valid solution. It is essentially a depth-first search with pruning.
When to use: Constraint satisfaction problems — N-Queens, Sudoku, maze solving, generating permutations/combinations.
import java.util.ArrayList;
import java.util.List;

public class Permutations {
    static void permute(int[] arr, int start, List<String> result) {
        if (start == arr.length) {
            // Base case: all positions filled → record this permutation
            StringBuilder sb = new StringBuilder("[");
            for (int i = 0; i < arr.length; i++) {
                sb.append(arr[i]);
                if (i < arr.length - 1) sb.append(", ");
            }
            result.add(sb.append("]").toString());
            return;
        }
        for (int i = start; i < arr.length; i++) {
            // CHOOSE: swap arr[start] with arr[i]
            int temp = arr[start]; arr[start] = arr[i]; arr[i] = temp;
            // EXPLORE: recurse with next position
            permute(arr, start + 1, result);
            // UNCHOOSE: swap back (backtrack)
            temp = arr[start]; arr[start] = arr[i]; arr[i] = temp;
        }
    }

    public static void main(String[] args) {
        int[] arr = {1, 2, 3};
        List<String> result = new ArrayList<>();
        permute(arr, 0, result);
        System.out.println("All permutations of {1,2,3}:");
        result.forEach(System.out::println);
        System.out.println("Total: " + result.size() + " (= 3! = 6)");
    }
}
/*
* Output:
* [1, 2, 3]
* [1, 3, 2]
* [2, 1, 3]
* [2, 3, 1]
* [3, 2, 1]
* [3, 1, 2]
* Total: 6 (= 3! = 6)
*/
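Generating permutations explores every branch; N-Queens, listed under "When to use", shows the pruning side of backtracking. Placing one queen per row, an attacked column or diagonal cuts off the whole subtree immediately. A sketch that counts solutions:

```java
public class NQueens {
    // One queen per row; cols/d1/d2 mark attacked columns and diagonals.
    // row + col is constant on one diagonal direction, row - col on the other.
    static int solve(int row, int n, boolean[] cols, boolean[] d1, boolean[] d2) {
        if (row == n) return 1;                        // all rows filled: one solution
        int count = 0;
        for (int col = 0; col < n; col++) {
            int a = row + col, b = row - col + n - 1;  // diagonal indices
            if (cols[col] || d1[a] || d2[b]) continue; // PRUNE: square is attacked
            cols[col] = d1[a] = d2[b] = true;          // CHOOSE
            count += solve(row + 1, n, cols, d1, d2);  // EXPLORE
            cols[col] = d1[a] = d2[b] = false;         // UNCHOOSE (backtrack)
        }
        return count;
    }

    static int countSolutions(int n) {
        return solve(0, n, new boolean[n], new boolean[2 * n - 1], new boolean[2 * n - 1]);
    }

    public static void main(String[] args) {
        System.out.println("4-Queens solutions: " + countSolutions(4)); // 2
        System.out.println("8-Queens solutions: " + countSolutions(8)); // 92
    }
}
```

The same CHOOSE / EXPLORE / UNCHOOSE skeleton as the permutation example, but here the `continue` prunes invalid branches before recursing, which is what makes backtracking practical on an 8×8 board with 16 billion raw placements.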
5. Branch and Bound
Branch and Bound is an extension of backtracking that uses bounding functions to prune branches more aggressively. At each node in the search tree, it computes a bound on the best solution reachable in that subtree (an upper bound for maximization problems, a lower bound for minimization). If the bound cannot beat the best solution found so far, the entire subtree is pruned.
- Branch — split the problem into subproblems (like backtracking)
- Bound — compute a bound on the optimal solution in each subproblem
- Prune — discard subproblems whose bound cannot improve the current best
Used for: Travelling Salesman Problem (TSP), 0/1 Knapsack, Job Scheduling.
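The three steps above can be sketched on 0/1 Knapsack. One standard bounding function (an assumption of this sketch, not the only choice) is the fractional-knapsack relaxation: with items sorted by value/weight ratio, pretend the remaining capacity could be filled fractionally; since the fractional optimum is always at least the 0/1 optimum, it is a valid upper bound.

```java
import java.util.Arrays;
import java.util.Comparator;

public class KnapsackBB {
    static int n, capacity, best;
    static int[] weight, value; // sorted by descending value/weight ratio

    // BOUND: value reachable from item i onward if the leftover capacity
    // could be filled with a fraction of the next item (upper bound).
    static double bound(int i, int curWeight, int curValue) {
        double b = curValue;
        int w = curWeight;
        for (; i < n && w + weight[i] <= capacity; i++) {
            w += weight[i];
            b += value[i];
        }
        if (i < n) b += (capacity - w) * (double) value[i] / weight[i]; // fractional fill
        return b;
    }

    static void branch(int i, int curWeight, int curValue) {
        if (curWeight > capacity) return;                  // infeasible
        best = Math.max(best, curValue);
        if (i == n) return;
        if (bound(i, curWeight, curValue) <= best) return; // PRUNE: cannot beat best
        branch(i + 1, curWeight + weight[i], curValue + value[i]); // BRANCH: take item i
        branch(i + 1, curWeight, curValue);                        // BRANCH: skip item i
    }

    static int solve(int[] w, int[] v, int cap) {
        n = w.length; capacity = cap;
        Integer[] idx = new Integer[n];
        for (int k = 0; k < n; k++) idx[k] = k;
        Arrays.sort(idx, Comparator.comparingDouble(k -> -(double) v[k] / w[k]));
        weight = new int[n]; value = new int[n];
        for (int k = 0; k < n; k++) { weight[k] = w[idx[k]]; value[k] = v[idx[k]]; }
        best = 0;
        branch(0, 0, 0);
        return best;
    }

    public static void main(String[] args) {
        int[] w = {2, 3, 4, 5};
        int[] v = {3, 4, 5, 6};
        System.out.println("Best value: " + solve(w, v, 5)); // 7 (items of weight 2 and 3)
    }
}
```

Without the bound check this is plain backtracking over all 2^n take/skip decisions; the pruning line is the only difference, yet on larger inputs it typically eliminates most of the tree.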
Choosing the Right Technique
| Problem Type | Best Technique | Why |
|---|---|---|
| Sorting / Searching | Divide and Conquer | Problem splits naturally into independent halves |
| Shortest path, MST | Greedy | Greedy choice property holds |
| Optimization with overlapping subproblems | Dynamic Programming | Avoids exponential recomputation |
| Constraint satisfaction (N-Queens, Sudoku) | Backtracking | Must explore all possibilities with pruning |
| Combinatorial optimization (TSP) | Branch and Bound | Better pruning than pure backtracking |
| Average-case performance | Randomized | Avoids worst-case inputs |