Core Algorithms and Patterns Part 3

July 26, 2025

Core Algorithmic Techniques Part 3

Greedy Algorithms

Definition:

A Greedy Algorithm is a problem-solving approach that makes the locally optimal choice at each stage of a problem. The core idea is to build up a solution piece by piece, and at each step, make the choice that seems best at that moment without considering future consequences or backtracking. This strategy does not work for all problems, but for those where it does (problems with "optimal substructure" and the "greedy choice property"), it often leads to a very simple and efficient solution. A key skill is being able to identify when a greedy strategy is guaranteed to yield a globally optimal result.

Time Complexity:
  • Varies by problem, but often very efficient. For array-based problems that require a single pass, the complexity is O(N). If a sorting step is required first (as in many interval-based problems), the complexity becomes O(N log N).
Space Complexity:
  • Typically O(1). Greedy algorithms often operate in-place and only need a few variables to keep track of the current state, making them very memory-efficient.
Strengths:
  • Simplicity and Speed: Greedy solutions are often much simpler to conceptualize and implement than more complex approaches like Dynamic Programming. They are usually very fast.
  • Efficiency: Because they don't explore multiple solution paths, they are computationally inexpensive and have low space requirements.
Weaknesses:
  • Not Always Optimal: The main drawback is that a greedy strategy is short-sighted and does not always produce the correct global optimum. For example, in a classic change-making problem, greedily picking the largest coin denomination doesn't always work for arbitrary coin systems (see the sketch after this list).
  • Proof of Correctness: The most challenging part is proving that the greedy choice at each step will always lead to a globally optimal solution. This can be non-trivial and is often the focus of an interview discussion for a greedy problem.
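
To make the change-making pitfall concrete, here is a minimal sketch, assuming a hypothetical coin system of {1, 3, 4} and a target of 6 (the greedy_coin_count helper is illustrative, not a standard API):

def greedy_coin_count(coins: list[int], amount: int) -> int:
    """Greedily takes the largest coin that still fits; not always optimal."""
    count = 0
    for coin in sorted(coins, reverse=True):
        count += amount // coin  # take as many of this coin as possible
        amount %= coin
    return count

# Greedy picks 4 + 1 + 1 = 3 coins, but the optimum is 3 + 3 = 2 coins.
print(greedy_coin_count([1, 3, 4], 6))  # 3
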
Step-by-Step Process (Jump Game):

Problem: Jump Game. You are given an integer array nums. You are initially positioned at the array's first index, and each element in the array represents your maximum jump length at that position. Return true if you can reach the last index, or false otherwise.

This process uses a greedy approach to determine if the last index is reachable.

  1. Initialize: Create a variable, max_reach, and initialize it to 0. This variable will track the farthest index we can possibly reach at any point.
  2. Loop: Iterate through the array nums with an index i from 0 to the end.
  3. Check Reachability: At each index i, first check if we can even get to this index. This is only possible if i <= max_reach. If i is greater than max_reach, it means we got stuck at a previous step and can never reach i. Return false.
  4. Make Greedy Choice: The greedy choice is to always update our potential to its maximum. Update max_reach to be the maximum of its current value and the farthest we can jump from the current index i. The formula is: max_reach = max(max_reach, i + nums[i]).
  5. Check for Completion: If at any point max_reach becomes greater than or equal to the last index (len(nums) - 1), we know the end is reachable. We can return true immediately.
  6. Terminate: In practice, the loop always returns before completing: either true once max_reach covers the last index, or false once an index is unreachable. Any return statement placed after the loop is never executed and exists only to satisfy the language's control-flow rules.
Python Example:

This code solves "Jump Game" (#55).

def can_jump(nums: list[int]) -> bool:
    """
    Determines if you can reach the last index of the array.
    """
    max_reach = 0
    n = len(nums)
    
    for i in range(n):
        # If the current index is beyond our max reach, we can't get here.
        if i > max_reach:
            return False
        
        # Greedy choice: update the farthest we can reach.
        max_reach = max(max_reach, i + nums[i])
        
        # Optimization: if we can already reach the end, we're done.
        if max_reach >= n - 1:
            return True
            
    return False  # Unreachable in practice; the loop always returns first

# Example Usage
nums = [2, 3, 1, 1, 4]
# Path: index 0 (jump 2) -> index 2 (jump 1) -> index 3 (jump 4) -> end
print(f"Can jump for {nums}? {can_jump(nums)}") # Expected Output: True

nums2 = [3, 2, 1, 0, 4]
# From index 0, we can get to 1, 2, or 3.
# From any of those, we land on index 3, which has a jump of 0. We're stuck.
print(f"Can jump for {nums2}? {can_jump(nums2)}") # Expected Output: False
Java Example:
public class GreedyExample {
    public boolean canJump(int[] nums) {
        int maxReach = 0;
        int n = nums.length;

        for (int i = 0; i < n; i++) {
            // If the current index is not reachable, we can't proceed.
            if (i > maxReach) {
                return false;
            }
            
            // Greedy choice: update the maximum reach from the current position.
            maxReach = Math.max(maxReach, i + nums[i]);
            
            // If the max reach is already at or beyond the end, we are done.
            if (maxReach >= n - 1) {
                return true;
            }
        }
        
        // Unreachable in practice (the loop always returns first),
        // but the compiler requires a return statement here.
        return true;
    }

    public static void main(String[] args) {
        GreedyExample solver = new GreedyExample();
        
        int[] nums1 = {2, 3, 1, 1, 4};
        System.out.println("Can jump for [2,3,1,1,4]? " + solver.canJump(nums1)); // true
        
        int[] nums2 = {3, 2, 1, 0, 4};
        System.out.println("Can jump for [3,2,1,0,4]? " + solver.canJump(nums2)); // false
    }
}
Visual Example:

Processing nums = [2,3,1,1,4]:

i=0, num=2:
 - max_reach = max(0, 0+2) = 2
 - We can reach index 2.

i=1, num=3:
 - max_reach = max(2, 1+3) = 4
 - We can reach index 4.
 - max_reach (4) >= last_index (4). Return True.

The algorithm finishes early. Let's trace [3,2,1,0,4]:
i=0, num=3: max_reach = max(0, 0+3) = 3
i=1, num=2: max_reach = max(3, 1+2) = 3
i=2, num=1: max_reach = max(3, 2+1) = 3
i=3, num=0: max_reach = max(3, 3+0) = 3
i=4, num=4:
 - Check reachability: Is i (4) <= max_reach (3)? No.
 - We cannot reach index 4. Return False.
Example Problems:
  • Jump Game (#55, Medium): The canonical greedy problem of determining reachability.
  • Best Time to Buy and Sell Stock (#121, Easy): A simple greedy approach where you track the minimum price seen so far and calculate the maximum profit possible (a sketch follows this list).
  • Gas Station (#134, Medium): A more complex problem where the greedy insight is that if a full circuit is possible, it can start at the station immediately following the segment with the largest net fuel deficit.
  • Partition Labels (#763, Medium): A greedy problem where you iterate through the string, extending the end of the current partition to the farthest last occurrence of any character seen within that partition.
  • Merge Intervals (#56, Medium): While it requires sorting first, the merging step is a greedy process where you decide to merge with the last interval or start a new one.
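
As a further illustration of the pattern, here is a minimal sketch of the single-pass greedy for Best Time to Buy and Sell Stock (#121) from the list above:

def max_profit(prices: list[int]) -> int:
    min_price = float("inf")
    best_profit = 0
    for price in prices:
        min_price = min(min_price, price)  # greedy: cheapest buy seen so far
        best_profit = max(best_profit, price - min_price)  # best sell at today's price
    return best_profit

print(max_profit([7, 1, 5, 3, 6, 4]))  # 5 (buy at 1, sell at 6)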

Backtracking Algorithms

Definition:

Backtracking is a general algorithmic technique for solving problems by recursively exploring all possible paths to a solution. It incrementally builds candidates for the solution and abandons a candidate ("backtracks") as soon as it determines that the candidate cannot possibly be completed to a valid solution. This pruning of the solution space is what distinguishes it from a simple brute-force search. It is effectively a form of Depth-First Search (DFS) applied to the state-space tree of a problem, where each path from the root represents a sequence of choices. The core template is often described as: choose, explore, unchoose.

Time Complexity:
  • Highly dependent on the problem, but typically exponential. The complexity is roughly the number of possible valid solutions multiplied by the time to construct each one.
  • For N items, generating subsets is O(N * 2^N). Generating permutations is O(N * N!). This makes backtracking suitable for problems with relatively small input constraints.
Space Complexity:
  • O(N) in most cases. The space is primarily determined by the maximum depth of the recursion call stack, which is often proportional to the size of the input N. This does not include the space required to store the final list of all solutions.
Strengths:
  • Exhaustive and Correct: It is a template for finding all possible solutions to a problem that involves satisfying a set of constraints.
  • Versatile: It can be applied to a wide range of problems, including combinatorial generation (subsets, permutations), pathfinding (mazes), and constraint satisfaction puzzles (Sudoku, N-Queens).
Weaknesses:
  • Inefficient: The primary drawback is its often-exponential time complexity, which makes it impractical for problems with large inputs.
  • Redundant Computations: Without optimizations like memoization (which would transition the approach towards Dynamic Programming), backtracking can end up solving the same subproblem multiple times.
Step-by-Step Process (Generating Subsets):

Problem: Subsets. Given an integer array nums of unique elements, return all possible subsets (the power set).

This process uses backtracking to generate every possible subset.

  1. Initialize: Create a main result list to store all the generated subsets, and a current_subset list to build a subset during recursion.
  2. Define Helper Function: Create a recursive helper function, backtrack(start_index, current_subset). This function will explore all subsets that can be formed starting from start_index.
  3. Add the Current Solution: The first step inside the helper function is to add a copy of the current_subset to the result list. This captures the state at the current step as a valid subset (including the empty subset on the first call).
  4. Explore Choices: Start a for loop that iterates through the input nums from start_index to the end of the array. The start_index prevents us from reusing elements and generating duplicate subsets.
  5. Choose: Inside the loop, make a choice. Add the current element nums[i] to current_subset.
  6. Explore: Make a recursive call to backtrack to explore all further possibilities with this choice included. Crucially, the next call starts from i + 1 to ensure that each element is only considered once per subset.
  7. Unchoose (Backtrack): After the recursive call returns, undo the choice by removing the element that was just added (current_subset.pop()). This is the most critical step, as it allows the for loop to proceed to its next iteration, exploring the path where nums[i] was not included.
  8. Initial Call: Begin the entire process by calling backtrack(0, []) from the main function.
Python Example:

This code solves "Subsets" (#78).

def subsets(nums: list[int]) -> list[list[int]]:
    """
    Generates all possible subsets from a list of unique integers.
    """
    result = []
    current_subset = []

    def backtrack(start_index):
        # 1. Add the current combination as a valid subset
        result.append(list(current_subset))
        
        # 2. Explore further choices
        for i in range(start_index, len(nums)):
            # Choose
            current_subset.append(nums[i])
            # Explore
            backtrack(i + 1)
            # Unchoose (backtrack)
            current_subset.pop()

    backtrack(0)
    return result

# Example Usage
nums = [1, 2, 3]
# Expected Output: [[], [1], [1, 2], [1, 2, 3], [1, 3], [2], [2, 3], [3]]
# The order of subsets may vary.
print(f"Subsets of {nums}: {subsets(nums)}")
Java Example:
import java.util.ArrayList;
import java.util.List;

public class BacktrackingExample {

    public List<List<Integer>> subsets(int[] nums) {
        List<List<Integer>> result = new ArrayList<>();
        backtrack(result, new ArrayList<>(), nums, 0);
        return result;
    }

    private void backtrack(List<List<Integer>> result, List<Integer> currentSubset, int[] nums, int start) {
        // 1. Add a copy of the current combination
        result.add(new ArrayList<>(currentSubset));
        
        // 2. Explore further choices
        for (int i = start; i < nums.length; i++) {
            // Choose
            currentSubset.add(nums[i]);
            // Explore
            backtrack(result, currentSubset, nums, i + 1);
            // Unchoose (backtrack)
            currentSubset.remove(currentSubset.size() - 1);
        }
    }

    public static void main(String[] args) {
        BacktrackingExample solver = new BacktrackingExample();
        int[] nums = {1, 2, 3};
        System.out.println("Subsets of [1, 2, 3]: " + solver.subsets(nums));
    }
}
Visual Example:

Visualizing the recursion for nums = [1, 2]:

backtrack(start=0, current=[])
  add [] to result
  i=0: choose 1 -> current=[1]
    backtrack(start=1, current=[1])
      add [1] to result
      i=1: choose 2 -> current=[1, 2]
        backtrack(start=2, current=[1, 2])
          add [1, 2] to result (the loop range is empty, so it returns)
      unchoose 2 -> current=[1]
  unchoose 1 -> current=[]
  i=1: choose 2 -> current=[2]
    backtrack(start=2, current=[2])
      add [2] to result
  unchoose 2 -> current=[]

Final result: [[], [1], [1, 2], [2]]
Example Problems:
  • Subsets (#78, Medium): The canonical choose/explore/unchoose template, implemented above.
  • Permutations (#46, Medium): Generate every ordering by choosing each unused element at each level of the recursion.
  • Combination Sum (#39, Medium): Explore combinations that sum to a target; elements may be reused by not advancing the start index.
  • Word Search (#79, Medium): Depth-first search through a grid, marking cells as visited and unmarking them on backtrack.
  • N-Queens (#51, Hard): The classic constraint-satisfaction puzzle, pruning placements that share a column or diagonal.

Dynamic Programming

Definition:

Dynamic Programming (DP) is an algorithmic optimization technique for solving complex problems by breaking them down into a collection of simpler, overlapping subproblems. The key idea is to solve each subproblem only once and store its result in a cache or table. When the same subproblem is encountered again, its result is retrieved from the cache instead of being recomputed. This avoids redundant work and can drastically improve performance from exponential to polynomial time.

For a problem to be solvable with DP, it must exhibit two properties:

  1. Overlapping Subproblems: The problem can be broken down into smaller subproblems that are reused multiple times.
  2. Optimal Substructure: The optimal solution to the overall problem can be constructed from the optimal solutions of its subproblems.

There are two main DP strategies:

  • Top-Down (Memoization): A recursive approach where the function is written naturally, but results are stored in a cache (e.g., a hash map or array). The function first checks the cache before computing the result. (A sketch follows this list.)
  • Bottom-Up (Tabulation): An iterative approach where you build a DP table (usually an array) from the ground up, starting with the base cases and filling the table towards the final answer.
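
Because the worked examples below are written bottom-up, here is a minimal top-down sketch of the same Climbing Stairs recurrence (introduced in detail below), using functools.lru_cache as the memoization cache:

from functools import lru_cache

@lru_cache(maxsize=None)
def climb_stairs_memo(n: int) -> int:
    # Base cases: one way to stand at the start or to reach step 1
    if n <= 1:
        return 1
    # Each distinct n is computed once; repeated calls hit the cache
    return climb_stairs_memo(n - 1) + climb_stairs_memo(n - 2)

print(climb_stairs_memo(5))  # 8
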
Time Complexity:
  • Varies by problem, but is typically polynomial (e.g., O(N), O(N^2)). For a problem with N states, where each state takes a constant number of transitions to solve, the complexity would be O(N).
Space Complexity:
  • Typically O(N) or O(N^2) to store the DP table or the cache for memoization. In some 1D problems, this can be optimized down to O(1) if only a few previous results are needed to compute the next one.
Strengths:
  • Efficiency: Drastically improves time complexity compared to naive recursive or backtracking solutions by eliminating redundant computations.
  • Structured Approach: Provides a systematic way to solve a wide variety of optimization and counting problems.
Weaknesses:
  • Can be Unintuitive: Identifying the states and the recurrence relation (the DP formula) is often the most challenging part and requires practice.
  • Space Usage: The table or cache can require significant memory, which might be a constraint. While space optimization is sometimes possible, it can make the code harder to read.
Step-by-Step Process (Climbing Stairs):

Problem: Climbing Stairs. You are climbing a staircase. It takes n steps to reach the top. Each time you can either climb 1 or 2 steps. In how many distinct ways can you climb to the top?

This process uses bottom-up DP (tabulation) to solve the problem.

  1. Identify Subproblems & State: The problem of finding the number of ways to reach step n, let's call it dp[n], depends on the number of ways to reach the preceding steps. The "state" is the step number i.
  2. Define Recurrence Relation: To get to step i, you must have come from either step i-1 (by taking a 1-step) or step i-2 (by taking a 2-step). Therefore, the number of ways to reach step i is the sum of the ways to reach i-1 and i-2. The relation is: dp[i] = dp[i-1] + dp[i-2].
  3. Identify Base Cases:
    • dp[0] = 1 (There is one way to be at the start: take zero steps).
    • dp[1] = 1 (There is one way to get to the first step: take one 1-step).
  4. Build DP Table: Create an array dp of size n+1.
  5. Iterate and Fill: Initialize the base cases. Then, loop from i = 2 up to n, filling each entry dp[i] using the recurrence relation.
  6. Final Answer: The solution to the problem is the value at dp[n].
Python Example:

This code solves "Climbing Stairs" (#70), showing both bottom-up and space-optimized solutions.

# --- Bottom-Up (Tabulation) with O(N) space ---
def climb_stairs_tabulation(n: int) -> int:
    if n <= 1:
        return 1
    
    dp = [0] * (n + 1)
    dp[0] = 1
    dp[1] = 1
    
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
        
    return dp[n]

# --- Bottom-Up with O(1) space optimization ---
def climb_stairs_optimized(n: int) -> int:
    if n <= 1:
        return 1
    
    # We only need the previous two values, not the whole array
    prev2, prev1 = 1, 1 # Corresponds to dp[i-2] and dp[i-1]
    
    for i in range(2, n + 1):
        current = prev1 + prev2
        prev2 = prev1
        prev1 = current
        
    return prev1

# Example Usage
n = 5
print(f"Ways to climb {n} stairs (O(N) space): {climb_stairs_tabulation(n)}") # Expected: 8
print(f"Ways to climb {n} stairs (O(1) space): {climb_stairs_optimized(n)}") # Expected: 8
Java Example:
public class DpExample {

    // --- Bottom-Up (Tabulation) with O(N) space ---
    public int climbStairsTabulation(int n) {
        if (n <= 1) {
            return 1;
        }
        int[] dp = new int[n + 1];
        dp[0] = 1;
        dp[1] = 1;

        for (int i = 2; i <= n; i++) {
            dp[i] = dp[i - 1] + dp[i - 2];
        }
        return dp[n];
    }

    // --- Bottom-Up with O(1) space optimization ---
    public int climbStairsOptimized(int n) {
        if (n <= 1) {
            return 1;
        }
        int prev2 = 1; // ways(i-2)
        int prev1 = 1; // ways(i-1)

        for (int i = 2; i <= n; i++) {
            int current = prev1 + prev2;
            prev2 = prev1;
            prev1 = current;
        }
        return prev1;
    }

    public static void main(String[] args) {
        DpExample solver = new DpExample();
        int n = 5;
        System.out.println("Ways to climb " + n + " stairs (O(N) space): " + solver.climbStairsTabulation(n));
        System.out.println("Ways to climb " + n + " stairs (O(1) space): " + solver.climbStairsOptimized(n));
    }
}
Visual Example:

Filling the DP table for n = 5:

dp array of size 6: [?, ?, ?, ?, ?, ?]

1. Base Cases:
   - dp[0] = 1
   - dp[1] = 1
   - dp table: [1, 1, ?, ?, ?, ?]

2. i = 2:
   - dp[2] = dp[1] + dp[0] = 1 + 1 = 2
   - dp table: [1, 1, 2, ?, ?, ?]

3. i = 3:
   - dp[3] = dp[2] + dp[1] = 2 + 1 = 3
   - dp table: [1, 1, 2, 3, ?, ?]

4. i = 4:
   - dp[4] = dp[3] + dp[2] = 3 + 2 = 5
   - dp table: [1, 1, 2, 3, 5, ?]

5. i = 5:
   - dp[5] = dp[4] + dp[3] = 5 + 3 = 8
   - dp table: [1, 1, 2, 3, 5, 8]

Final Answer: dp[5] = 8.
Example Problems:
  • Climbing Stairs (#70, Easy): The canonical 1D DP problem with a Fibonacci-style recurrence, solved above.
  • House Robber (#198, Medium): A 1D DP where each state chooses between robbing the current house or skipping it.
  • Coin Change (#322, Medium): Compute the minimum number of coins to reach an amount; DP handles coin systems where greedy fails.
  • Longest Increasing Subsequence (#300, Medium): A classic DP over prefixes, with an O(N log N) refinement.
  • Unique Paths (#62, Medium): A 2D grid DP where each cell's count is the sum of the counts from above and from the left.

Bit Manipulation

Definition:

Bit Manipulation is a technique for solving problems by directly operating on the binary representation of numbers. Instead of treating integers as abstract numerical values, this approach considers them as a sequence of bits (0s and 1s) and uses bitwise operators to perform efficient computations. This allows for clever, low-level optimizations that can significantly improve time and space complexity.

The primary bitwise operators are:

  • & (AND): Sets a bit to 1 only if both corresponding bits are 1.
  • | (OR): Sets a bit to 1 if at least one of the corresponding bits is 1.
  • ^ (XOR): Sets a bit to 1 only if the corresponding bits are different.
  • ~ (NOT): Inverts all the bits.
  • << (Left Shift): Shifts bits to the left, effectively multiplying by 2 for each shift.
  • >> (Right Shift): Shifts bits to the right, effectively dividing by 2 for each shift. (A quick demo of these operators follows this list.)
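
A quick numeric demo of these operators (the values are arbitrary, chosen for illustration):

a, b = 0b1100, 0b1010   # 12 and 10
print(bin(a & b))   # 0b1000   (AND)
print(bin(a | b))   # 0b1110   (OR)
print(bin(a ^ b))   # 0b110    (XOR)
print(~a)           # -13      (NOT; Python ints are unbounded, so ~a == -a - 1)
print(bin(a << 1))  # 0b11000  (multiply by 2)
print(bin(a >> 2))  # 0b11     (divide by 4)
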
Time Complexity:
  • Typically O(N) for problems requiring a single pass through an array of numbers.
  • For operations on a single integer, the complexity is effectively O(1), as the number of bits is fixed (e.g., 32 or 64).
Space Complexity:
  • Almost always O(1). This is a major advantage, as bit manipulation techniques operate directly on the numbers and do not require auxiliary data structures like hash maps or arrays.
Strengths:
  • Extreme Efficiency: Bitwise operations are processed very quickly by a CPU's ALU, often leading to the most performant solutions possible in terms of both time and space.
  • Memory Savings: It allows for encoding complex state information compactly. For example, a bitmask can represent a set of boolean flags using a single integer (sketched after this list).
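
A small bitmask sketch, assuming hypothetical permission flags:

READ, WRITE, EXECUTE = 1 << 0, 1 << 1, 1 << 2  # hypothetical flag names

perms = READ | WRITE             # set two flags  -> 0b011
can_write = bool(perms & WRITE)  # test a flag    -> True
perms &= ~WRITE                  # clear a flag   -> 0b001
perms ^= EXECUTE                 # toggle a flag  -> 0b101
print(bin(perms))                # 0b101
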
Weaknesses:
  • Readability: Code using bit manipulation can be difficult to read and understand for developers unfamiliar with the techniques. The logic is often non-obvious and requires thinking in binary.
  • Limited Applicability: It is a specialized technique that is only suitable for problems where the underlying data can be meaningfully manipulated at the bit level.
Step-by-Step Process (Single Number):

Problem: Single Number. Given a non-empty array of integers nums, every element appears twice except for one. Find that single one.

This process uses the XOR (^) operator to find the unique number.

  1. Understand XOR Properties: The solution relies on three key properties of XOR:
    • Any number XORed with itself is zero (x ^ x = 0).
    • Any number XORed with zero is itself (x ^ 0 = x).
    • XOR is associative and commutative, so the order of operations does not matter.
  2. Initialize Accumulator: Create an integer variable, unique_number, and initialize it to 0.
  3. Loop and Accumulate: Iterate through every num in the input array nums.
  4. Apply XOR: In each iteration, update the accumulator by XORing it with the current number: unique_number = unique_number ^ num.
  5. Result: As the loop progresses, pairs of identical numbers will be XORed together, canceling each other out and becoming 0. For example, (a ^ b ^ a) becomes (a ^ a) ^ b, which is 0 ^ b, resulting in b.
  6. Terminate: After the loop completes, all paired numbers will have canceled out to 0. The unique_number variable will hold the value of the one element that did not have a pair. Return unique_number.
Python Example:

This code solves "Single Number" (#136).

def single_number(nums: list[int]) -> int:
    """
    Finds the single element that appears only once in an array,
    where every other element appears twice.
    """
    unique_number = 0
    for num in nums:
        unique_number ^= num
    return unique_number

# Example Usage
nums = [4, 1, 2, 1, 2]
# The pairs (1,1) and (2,2) will cancel out, leaving 4.
# (4^1^2^1^2) = 4^(1^1)^(2^2) = 4^0^0 = 4
print(f"The single number is: {single_number(nums)}") # Expected Output: 4

nums2 = [2, 2, 1]
# (2^2^1) = 0^1 = 1
print(f"The single number is: {single_number(nums2)}") # Expected Output: 1
Java Example:
public class BitManipulationExample {

    public int singleNumber(int[] nums) {
        int uniqueNumber = 0;
        for (int num : nums) {
            uniqueNumber ^= num;
        }
        return uniqueNumber;
    }

    public static void main(String[] args) {
        BitManipulationExample solver = new BitManipulationExample();
        
        int[] nums1 = {4, 1, 2, 1, 2};
        // Expected Output: The single number is: 4
        System.out.println("The single number is: " + solver.singleNumber(nums1));

        int[] nums2 = {2, 2, 1};
        // Expected Output: The single number is: 1
        System.out.println("The single number is: " + solver.singleNumber(nums2));
    }
}
Visual Example:

Finding the single number in [4, 1, 2, 1, 2]:

Initialize: result = 0 (binary 000)

1. num = 4 (binary 100)
   - result = 0 ^ 4 = 4
   - result is now: 100

2. num = 1 (binary 001)
   - result = 4 ^ 1 = 5
   - result is now: 100 ^ 001 = 101

3. num = 2 (binary 010)
   - result = 5 ^ 2 = 7
   - result is now: 101 ^ 010 = 111

4. num = 1 (binary 001)
   - result = 7 ^ 1 = 6
   - result is now: 111 ^ 001 = 110

5. num = 2 (binary 010)
   - result = 6 ^ 2 = 4
   - result is now: 110 ^ 010 = 100

Loop ends. Final result is 4.
Example Problems:
  • Single Number (#136, Easy): The canonical XOR-cancellation problem, solved above.
  • Number of 1 Bits (#191, Easy): Count set bits, for example by repeatedly clearing the lowest set bit with n & (n - 1).
  • Counting Bits (#338, Easy): Build a table of bit counts using the relation dp[i] = dp[i >> 1] + (i & 1).
  • Missing Number (#268, Easy): XOR all indices with all values; the missing number is what remains.
  • Reverse Bits (#190, Easy): Rebuild an integer by shifting bits out of the input and into the result.

Designing a Data Structure

Designing a Data Structure is a problem category where the primary goal is not to implement a single, known algorithm, but to create a new data structure or augment an existing one to support a specific set of operations with given time and space complexity constraints. These problems test a deep understanding of the trade-offs of fundamental data structures like arrays, stacks, queues, hash maps, and linked lists. The challenge lies in composing these building blocks in a creative way to meet all the required performance guarantees.

Design Min Stack

Definition:

A Min Stack is a stack that supports push(val), pop(), top(), and getMin(), all in O(1) time.

Time Complexity:
  • For the Min Stack problem, the required time complexity for all core operations (push, pop, top, and getMin) is O(1) (constant time).
Space Complexity:
  • For the Min Stack problem, the space complexity is O(N), where N is the number of elements pushed onto the stack. This is because, in the worst case, we may need to store an auxiliary entry for each element to keep track of the minimum.
Strengths:
  • Practical Relevance: These problems are very similar to real-world software design, where engineers build custom classes and abstractions to manage data efficiently.
  • Tests Foundational Knowledge: They effectively test a candidate's understanding of how basic data structures work and how their performance characteristics (time vs. space) influence design choices.
Weaknesses:
  • Requires Creativity: Unlike standard algorithm patterns, there isn't always a single, obvious template. The solution often requires an "aha" moment or insight into how to combine different structures.
  • Edge Case Prone: Designing a robust data structure requires careful handling of edge cases, such as operations on an empty structure, duplicates, and the order of internal state updates.
Step-by-Step Process (Min Stack):

Problem: Min Stack. Design a stack that supports push(val), pop(), top(), and getMin(), all in O(1) time.

This process uses two stacks to achieve the required performance.

  1. Analyze the Challenge: The standard stack operations (push, pop, top) are already O(1). The core challenge is implementing getMin() in O(1). A naive approach of iterating through the stack to find the minimum would be O(N), which is too slow.
  2. Core Insight (Auxiliary Stack): To retrieve the minimum in O(1), it must always be available at the top of some data structure. We can use an auxiliary second stack, let's call it min_stack, to track the history of minimums.
  3. Design the Methods:
    • Initialize: Create two stacks: data_stack for the actual elements and min_stack to track minimums.
    • push(val):
      1. Always push val onto the data_stack.
      2. Check the min_stack: If min_stack is empty OR if val is less than or equal to the current minimum (min_stack.peek()), push val onto the min_stack as well. This ensures the min_stack's top is always the current minimum.
    • pop():
      1. Pop the value from the data_stack.
      2. Check if this value is the current minimum (i.e., value == min_stack.peek()). If it is, we must also pop from the min_stack to expose the previous minimum.
    • top():
      • Return the value at the top of the data_stack.
    • getMin():
      • Return the value at the top of the min_stack.
Python Example:

This code solves "Min Stack" (#155).

class MinStack:

    def __init__(self):
        self.data_stack = []
        self.min_stack = []

    def push(self, val: int) -> None:
        self.data_stack.append(val)
        # If the min_stack is empty or the new value is a new minimum
        if not self.min_stack or val <= self.min_stack[-1]:
            self.min_stack.append(val)

    def pop(self) -> None:
        if not self.data_stack:
            return
        
        popped_value = self.data_stack.pop()
        # If the popped value was the minimum, remove it from min_stack too
        if popped_value == self.min_stack[-1]:
            self.min_stack.pop()

    def top(self) -> int:
        if not self.data_stack:
            return None
        return self.data_stack[-1]

    def getMin(self) -> int:
        if not self.min_stack:
            return None
        return self.min_stack[-1]

# Example Usage
minStack = MinStack()
minStack.push(-2)
minStack.push(0)
minStack.push(-3)
print(minStack.getMin())  # returns -3
minStack.pop()
print(minStack.top())     # returns 0
print(minStack.getMin())  # returns -2
Java Example:
import java.util.Stack;

class MinStack {
    private Stack<Integer> dataStack;
    private Stack<Integer> minStack;

    public MinStack() {
        dataStack = new Stack<>();
        minStack = new Stack<>();
    }
    
    public void push(int val) {
        dataStack.push(val);
        // If minStack is empty or the new value is a new minimum
        if (minStack.isEmpty() || val <= minStack.peek()) {
            minStack.push(val);
        }
    }
    
    public void pop() {
        if (dataStack.isEmpty()) {
            return;
        }
        // Keep the popped value boxed so .equals() can be used for comparison;
        // calling a method on a primitive int would not compile
        Integer poppedValue = dataStack.pop();
        // If the popped value was the minimum, remove it from minStack too
        if (poppedValue.equals(minStack.peek())) {
            minStack.pop();
        }
    }
    
    public int top() {
        return dataStack.peek();
    }
    
    public int getMin() {
        return minStack.peek();
    }
}
Visual Example:

Tracing a sequence of operations: push(-2), push(0), push(-3), pop()

1. push(-2):
   - data_stack: [-2]
   - min_stack:  [-2]  (-2 is a new min)

2. push(0):
   - data_stack: [-2, 0]
   - min_stack:  [-2]  (0 is not <= -2)

3. push(-3):
   - data_stack: [-2, 0, -3]
   - min_stack:  [-2, -3] (-3 is a new min)
   - getMin() -> -3

4. pop():
   - popped_value = -3.
   - -3 is the top of min_stack. Pop from both.
   - data_stack: [-2, 0]
   - min_stack:  [-2]
   - getMin() -> -2
Example Problems:
  • Min Stack (#155, Medium): The auxiliary-stack design implemented above.
  • Implement Queue using Stacks (#232, Easy): Compose two stacks to support queue operations in amortized O(1) time.
  • Design Circular Queue (#622, Medium): A fixed-capacity ring buffer managed with head and tail indices.
  • Design HashMap (#706, Easy): Build a hash map from an array of buckets that each hold key-value pairs.

Design Trie (Prefix Tree)

Definition:

A Trie, also known as a Prefix Tree or Digital Tree, is a specialized tree-like data structure used to store and retrieve a dynamic set of strings. Each node in the Trie represents a single character of a string. A path from the root to any given node represents a prefix. A boolean flag in the node is often used to mark if that path also represents the end of a complete word. Tries are highly efficient for prefix-based operations, such as autocomplete suggestions and spell checkers.

Time Complexity:

For a Trie containing N words with an average length of L:

  • Insert Word: O(L). The time is proportional to the length of the word being inserted.
  • Search for Word: O(L). The time is proportional to the length of the word being searched.
  • Search for Prefix: O(L). The time is proportional to the length of the prefix.

Note that these operations are independent of the number of words N stored in the Trie, which is its main advantage.

Space Complexity:
  • O(C * L_total), where C is the size of the character set (e.g., 26 for lowercase English letters) and L_total is the sum of the lengths of all words. The space is determined by the total number of nodes, which depends on the number of unique prefixes.
Strengths:
  • Fast Prefix Operations: It is significantly faster than hash-table-based or linear search approaches for prefix-based queries like startsWith.
  • Efficient Storage of Common Prefixes: Words that share a common prefix also share the same nodes in the Trie, which can lead to efficient storage.
  • Alphabetical Ordering: Tries can be traversed to retrieve all stored words in alphabetical order (a sketch follows the Python example below).
Weaknesses:
  • High Memory Usage: Tries can be very memory-intensive. Each node may store an array or map of pointers to its children, which can consume a large amount of space, especially for a large character set or for words with few shared prefixes.
  • Specialized: It is a highly specialized data structure, optimized almost exclusively for prefix-based string operations.
Step-by-Step Process (Implement Trie):

This process describes the implementation of the core methods for a Trie.

  1. Define the TrieNode:
    • Create a TrieNode class. Each node must contain:
      • A data structure for its children, typically a hash map (char -> TrieNode) or a fixed-size array (TrieNode[26]).
      • A boolean flag, isEndOfWord, initialized to false, to indicate if a word ends at this node.
  2. Initialize the Trie:
    • The main Trie class should have one attribute: a root, which is a new, empty TrieNode.
  3. Implement insert(word):
    • Start a pointer current at the root.
    • For each character ch in the word:
      • Check if ch exists in the children of the current node. If not, create a new TrieNode for it.
      • Move the current pointer to that child node.
    • After the loop, set current.isEndOfWord = true.
  4. Implement search(word):
    • Start a pointer current at the root.
    • Traverse the Trie according to the characters in word. If at any point a character's corresponding node does not exist, the word is not in the Trie. Return false.
    • If the loop completes, the word is present only if the final node's isEndOfWord flag is true.
  5. Implement startsWith(prefix):
    • This is nearly identical to search. Traverse the Trie according to the characters in prefix.
    • If the traversal completes without falling off the tree, the prefix exists. Return true. (The isEndOfWord flag does not matter).
Python Example:

This code solves "Implement Trie (Prefix Tree)" (#208).

class TrieNode:
    def __init__(self):
        self.children = {}  # Map character to TrieNode
        self.is_end_of_word = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word: str) -> None:
        current = self.root
        for char in word:
            if char not in current.children:
                current.children[char] = TrieNode()
            current = current.children[char]
        current.is_end_of_word = True

    def search(self, word: str) -> bool:
        current = self.root
        for char in word:
            if char not in current.children:
                return False
            current = current.children[char]
        return current.is_end_of_word

    def startsWith(self, prefix: str) -> bool:
        current = self.root
        for char in prefix:
            if char not in current.children:
                return False
            current = current.children[char]
        return True

# Example Usage
trie = Trie()
trie.insert("apple")
print(trie.search("apple"))    # returns True
print(trie.search("app"))      # returns False
print(trie.startsWith("app")) # returns True
trie.insert("app")
print(trie.search("app"))      # returns True
Java Example:
class TrieNode {
    public TrieNode[] children;
    public boolean isEndOfWord;

    public TrieNode() {
        children = new TrieNode[26]; // For 'a' through 'z'
        isEndOfWord = false;
    }
}

class Trie {
    private TrieNode root;

    public Trie() {
        root = new TrieNode();
    }
    
    public void insert(String word) {
        TrieNode current = root;
        for (char ch : word.toCharArray()) {
            int index = ch - 'a';
            if (current.children[index] == null) {
                current.children[index] = new TrieNode();
            }
            current = current.children[index];
        }
        current.isEndOfWord = true;
    }
    
    public boolean search(String word) {
        TrieNode current = root;
        for (char ch : word.toCharArray()) {
            int index = ch - 'a';
            if (current.children[index] == null) {
                return false;
            }
            current = current.children[index];
        }
        return current.isEndOfWord;
    }
    
    public boolean startsWith(String prefix) {
        TrieNode current = root;
        for (char ch : prefix.toCharArray()) {
            int index = ch - 'a';
            if (current.children[index] == null) {
                return false;
            }
            current = current.children[index];
        }
        return true;
    }
}
Visual Example:

Building a Trie with words: "app", "apple", "api"

      (root)
       | 'a'
       V
      (a)
       | 'p'
       V
      (p) --- 'i' --> (i)  isEndOfWord=true (for "api")
       | 'p'
       V
      (p)  isEndOfWord=true (for "app")
       | 'l'
       V
      (l)
       | 'e'
       V
      (e)  isEndOfWord=true (for "apple")
Example Problems:
  • Implement Trie (Prefix Tree) (#208, Medium): The core structure implemented above.
  • Design Add and Search Words Data Structure (#211, Medium): Extends Trie search with '.' wildcards handled by DFS.
  • Word Search II (#212, Hard): Combines a Trie of target words with backtracking over a character grid.
  • Longest Common Prefix (#14, Easy): Solvable with a Trie by walking down while each node has exactly one child and does not end a word.

Design LRU Cache

Definition:

An LRU (Least Recently Used) Cache is a custom data structure that stores a fixed number of key-value pairs (capacity). Its defining feature is its eviction policy: when the cache is full and a new item needs to be added, the least recently used item is removed to make space. An item is considered "used" if it is accessed via a get operation or updated/inserted via a put operation.

The primary challenge in designing an LRU Cache is to perform both the get(key) and put(key, value) operations in O(1) average time complexity. This requires a combination of data structures to achieve both fast lookups and fast reordering/eviction.

Time Complexity:
  • get(key): O(1). The operation must retrieve the value in constant time.
  • put(key, value): O(1). The operation must insert or update a value in constant time, which may include evicting an old entry.
Space Complexity:
  • O(capacity), as the data structure stores up to the specified capacity of key-value pairs.
Strengths:
  • Optimal Performance: The combination of a hash map and a doubly linked list provides a solution that meets the strict O(1) time complexity requirements for all operations.
  • Practical Application: This is a classic and practical computer science problem that mirrors real-world caching systems used to improve performance in databases, web servers, and operating systems.
Weaknesses:
  • Implementation Complexity: The solution is non-trivial. It requires the careful synchronization of two different data structures. Pointer manipulation in the doubly linked list, especially around the sentinel nodes, must be handled correctly to avoid errors.
  • Boilerplate Code: Implementing the doubly linked list node class and the helper methods for adding/removing nodes can be verbose.
Step-by-Step Process (Design LRU Cache):

The key insight is to combine a Hash Map and a Doubly Linked List.

  • The Hash Map provides O(1) lookup. It will store key -> Node pairs, allowing us to instantly access any node in our linked list.
  • The Doubly Linked List maintains the usage order. Nodes can be added or removed in O(1) time. We will maintain the list such that the most recently used (MRU) item is at one end (e.g., the tail) and the least recently used (LRU) item is at the other (e.g., the head).
  1. Initialize:
    • Create a HashMap (cache) to store key -> Node.
    • Create a Doubly Linked List with two sentinel (dummy) nodes, head and tail, to handle edge cases smoothly. The list will store Node objects, where each Node contains a key, a value, and prev/next pointers.
    • Store the capacity.
  2. Define get(key):
    • Check if the key exists in the cache. If not, return -1.
    • If the key exists, retrieve the node from the cache.
    • Mark this node as most recently used by moving it to the tail of the linked list. This involves removing it from its current position and adding it back at the end.
    • Return node.value.
  3. Define put(key, value):
    • Check if the key already exists in the cache. If it does, update the value in the corresponding node and move that node to the tail of the list to mark it as most recently used.
    • If the key does not exist:
      • Check if the cache is at full capacity (cache.size() == capacity).
      • If it is full, evict the LRU item. This is the node located right after the head sentinel. Remove this LRU node from both the linked list and the cache.
      • Create a new Node with the given key and value.
      • Add the new node to the cache and to the tail of the linked list.
Python Example:

This code solves "LRU Cache" (#146).

class DLinkedNode:
    def __init__(self, key=0, val=0):
        self.key = key
        self.val = val
        self.prev = None
        self.next = None

class LRUCache:
    def __init__(self, capacity: int):
        self.cache = {}
        self.capacity = capacity
        # Dummy head and tail nodes
        self.head = DLinkedNode()
        self.tail = DLinkedNode()
        self.head.next = self.tail
        self.tail.prev = self.head

    def _remove_node(self, node):
        prev, nxt = node.prev, node.next
        prev.next = nxt
        nxt.prev = prev

    def _add_to_tail(self, node):
        prev = self.tail.prev
        prev.next = node
        node.prev = prev
        node.next = self.tail
        self.tail.prev = node

    def get(self, key: int) -> int:
        if key not in self.cache:
            return -1
        
        node = self.cache[key]
        self._remove_node(node)
        self._add_to_tail(node)
        return node.val

    def put(self, key: int, value: int) -> None:
        if key in self.cache:
            node = self.cache[key]
            node.val = value
            self._remove_node(node)
            self._add_to_tail(node)
        else:
            if len(self.cache) == self.capacity:
                lru_node = self.head.next
                self._remove_node(lru_node)
                del self.cache[lru_node.key]
            
            new_node = DLinkedNode(key, value)
            self.cache[key] = new_node
            self._add_to_tail(new_node)
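
A short usage sketch in the style of the other examples, matching the capacity-2 trace in the visual example below:

# Example Usage
cache = LRUCache(2)
cache.put(1, 1)
cache.put(2, 2)
print(cache.get(1))  # returns 1
cache.put(3, 3)      # capacity exceeded: evicts key 2, the LRU entry
print(cache.get(2))  # returns -1 (evicted)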
Java Example:
import java.util.HashMap;
import java.util.Map;

class DLinkedNode {
    int key;
    int value;
    DLinkedNode prev;
    DLinkedNode next;
}

public class LRUCache {
    private Map<Integer, DLinkedNode> cache = new HashMap<>();
    private int capacity;
    private DLinkedNode head, tail;

    public LRUCache(int capacity) {
        this.capacity = capacity;
        head = new DLinkedNode();
        tail = new DLinkedNode();
        head.next = tail;
        tail.prev = head;
    }
    
    private void removeNode(DLinkedNode node) {
        DLinkedNode prev = node.prev;
        DLinkedNode next = node.next;
        prev.next = next;
        next.prev = prev;
    }

    private void addToTail(DLinkedNode node) {
        DLinkedNode prev = tail.prev;
        prev.next = node;
        node.prev = prev;
        node.next = tail;
        tail.prev = node;
    }

    public int get(int key) {
        DLinkedNode node = cache.get(key);
        if (node == null) {
            return -1;
        }
        removeNode(node);
        addToTail(node);
        return node.value;
    }
    
    public void put(int key, int value) {
        DLinkedNode node = cache.get(key);
        if (node != null) {
            node.value = value;
            removeNode(node);
            addToTail(node);
        } else {
            if (cache.size() == capacity) {
                DLinkedNode lruNode = head.next;
                removeNode(lruNode);
                cache.remove(lruNode.key);
            }
            DLinkedNode newNode = new DLinkedNode();
            newNode.key = key;
            newNode.value = value;
            cache.put(key, newNode);
            addToTail(newNode);
        }
    }
}
Visual Example:

Tracing a cache with capacity = 2:

State: Map={}, List: head <-> tail

1. put(1, 1):
   - Map: {1: Node1}
   - List: head <-> Node1 <-> tail

2. put(2, 2):
   - Map: {1: Node1, 2: Node2}
   - List: head <-> Node1 <-> Node2 <-> tail

3. get(1):
   - Found Node1. Move to tail.
   - Map: {1: Node1, 2: Node2}
   - List: head <-> Node2 <-> Node1 <-> tail

4. put(3, 3): (Cache is full, must evict)
   - Evict LRU item: Node2 (head.next).
   - Remove Node2 from list and map.
   - Map: {1: Node1}
   - List: head <-> Node1 <-> tail
   - Add new Node3.
   - Map: {1: Node1, 3: Node3}
   - List: head <-> Node1 <-> Node3 <-> tail
Example Problems:
  • LRU Cache (#146, Medium): The hash map plus doubly linked list design implemented above.
  • LFU Cache (#460, Hard): A harder variant that evicts by usage frequency, breaking ties by recency.
  • Insert Delete GetRandom O(1) (#380, Medium): Combines a hash map with a dynamic array for O(1) insert, delete, and random access.
  • Design Twitter (#355, Medium): Composes hash maps and a heap to merge the most recent tweets from followed users.

Remember: The goal isn't just to memorize implementations, but to understand the underlying principles and trade-offs that make each algorithm suitable for specific problems. Practice recognizing patterns and choosing the right algorithmic approach based on problem constraints and requirements.

For part 1, check out Core Algorithms and Patterns Part 1.

For part 2, check out Core Algorithms and Patterns Part 2.

For a comprehensive guide to data structures, check out Core Data Structures and Advanced Data Structures.