Core Algorithms and Patterns Part 1
July 23, 2025
Core Algorithmic Techniques
Prefix Sum
Definition: A Prefix Sum is a pre-computation technique applied to an array to answer range sum queries efficiently. It involves creating an auxiliary array of size N+1, often called a prefix sum array, where prefix_sum[i] stores the cumulative sum of all elements from the start of the original array up to and including index i-1 (or 0 if i=0). This means prefix_sum[i+1] contains the sum of elements from index 0 to index i in the original array. This allows the sum of any contiguous sub-array (a "range") to be calculated in constant time.
Time Complexity:
- Pre-computation (Building the array): O(N), where N is the number of elements in the input array, because a single pass is required to build the prefix sum array.
- Range Sum Query: O(1). After pre-computation, finding the sum of any range is a simple subtraction operation.
Space Complexity: O(N) to store the auxiliary prefix sum array.
Advantages:
- Extremely Fast Queries: After the initial setup, range sum queries are completed in constant time, which is highly efficient for problems with many queries on a static dataset.
- Simplicity: The logic for both building the prefix sum array and for querying is straightforward to implement and understand.
Disadvantages:
- Static Data Requirement: The technique is most suitable for static arrays. If elements in the original array are frequently updated, the prefix sum array needs to be partially or fully recomputed, which is an inefficient O(N) operation per update.
- Space Overhead: It requires extra memory proportional to the size of the input array, which might be a constraint for very large datasets.
Problem: Range Sum Query - Immutable
- Given an integer array nums, handle multiple queries to calculate the sum of the elements of nums between indices left and right (inclusive), where left <= right.
- Implement the NumArray class with a sumRange(left, right) method that returns the sum of elements between indices left and right, inclusive.
This process efficiently answers range sum queries using prefix sums.
- Initialize: Given an input array nums of size N, create a prefix_sum array of size N+1 and initialize all its elements to 0. Using size N+1 simplifies the range calculation by avoiding out-of-bounds checks for the start of the array.
- Build: Iterate through the input array nums from index i = 0 to N-1. For each element, calculate the cumulative sum: prefix_sum[i+1] = prefix_sum[i] + nums[i].
- Query: To find the sum of the sub-array from index left to right (inclusive), subtract the cumulative sum up to the element before left from the cumulative sum up to right. The formula is: sum = prefix_sum[right + 1] - prefix_sum[left].
class NumArray:
def __init__(self, nums: list[int]):
# Create a prefix sum array of size len(nums) + 1 for easier calculations.
self.prefix_sum = [0] * (len(nums) + 1)
for i in range(len(nums)):
# prefix_sum[i+1] stores the sum of nums[0]...nums[i]
self.prefix_sum[i+1] = self.prefix_sum[i] + nums[i]
def sumRange(self, left: int, right: int) -> int:
# The sum of elements from index left to right (inclusive) is the
# total sum up to right minus the total sum up to the element before left.
return self.prefix_sum[right + 1] - self.prefix_sum[left]
# Example Usage
nums = [-2, 0, 3, -5, 2, -1]
num_array = NumArray(nums)
# Query for sum of subarray from index 0 to 2 -> (-2 + 0 + 3) = 1
print(f"Sum(0, 2): {num_array.sumRange(0, 2)}") # Expected Output: 1
# Query for sum of subarray from index 2 to 5 -> (3 + -5 + 2 + -1) = -1
print(f"Sum(2, 5): {num_array.sumRange(2, 5)}") # Expected Output: -1
# Query for sum of subarray from index 0 to 5 -> (-2 + 0 + 3 + -5 + 2 + -1) = -3
print(f"Sum(0, 5): {num_array.sumRange(0, 5)}") # Expected Output: -3
Java Example:
public class NumArray {
private int[] prefixSum;
public NumArray(int[] nums) {
// Create a prefix sum array of size nums.length + 1
prefixSum = new int[nums.length + 1];
// The first element is 0 by default, which is our base case.
for (int i = 0; i < nums.length; i++) {
// prefixSum[i+1] stores the sum of nums[0]...nums[i]
prefixSum[i + 1] = prefixSum[i] + nums[i];
}
}
public int sumRange(int left, int right) {
// The sum of elements from index left to right (inclusive) is the
// total sum up to right minus the total sum up to the element before left.
return prefixSum[right + 1] - prefixSum[left];
}
public static void main(String[] args) {
int[] nums = {-2, 0, 3, -5, 2, -1};
NumArray numArray = new NumArray(nums);
// Query for sum of subarray from index 0 to 2 -> (-2 + 0 + 3) = 1
System.out.println("Sum(0, 2): " + numArray.sumRange(0, 2)); // Expected Output: 1
// Query for sum of subarray from index 2 to 5 -> (3 + -5 + 2 + -1) = -1
System.out.println("Sum(2, 5): " + numArray.sumRange(2, 5)); // Expected Output: -1
// Query for sum of subarray from index 0 to 5 -> (-2 + 0 + 3 + -5 + 2 + -1) = -3
System.out.println("Sum(0, 5): " + numArray.sumRange(0, 5)); // Expected Output: -3
}
}
Visual Example:
Original: [-2, 0, 3, -5, 2, -1]
Prefix: [ 0, -2, -2, 1, -4, -2, -3]
Indices: 0 1 2 3 4 5 6
Example Problems:
- Range Sum Query - Immutable (#303, Easy): The direct application of the prefix sum technique.
- Find Pivot Index (#724, Easy): Can be solved efficiently by pre-calculating the total sum, then iterating through the array while tracking the left_sum to determine if left_sum == total_sum - left_sum - current_element.
- Subarray Sum Equals K (#560, Medium): A common and important variation that uses a hash map to store frequencies of prefix sums to find sub-arrays that sum to a target k (a sketch follows this list).
- Product of Array Except Self (#238, Medium): This problem uses a similar pre-computation pattern but with products instead of sums, calculating prefix products from the left and suffix products from the right.
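To illustrate the Subarray Sum Equals K variation mentioned above, here is a minimal Python sketch (the function name and test values are illustrative) that combines a running prefix sum with a hash map of prefix-sum frequencies:
from collections import defaultdict
def subarray_sum_equals_k(nums: list[int], k: int) -> int:
    # prefix_counts[s] = how many prefixes seen so far sum to s
    prefix_counts = defaultdict(int)
    prefix_counts[0] = 1  # The empty prefix sums to 0
    running_sum = 0
    count = 0
    for num in nums:
        running_sum += num
        # A previously seen prefix equal to running_sum - k means the
        # elements after that prefix sum to exactly k.
        count += prefix_counts[running_sum - k]
        prefix_counts[running_sum] += 1
    return count
# Example Usage
print(subarray_sum_equals_k([1, 1, 1], 2))  # Expected Output: 2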
Two Pointers
Definition: The Two Pointers technique is an algorithmic pattern primarily used on sorted arrays or linked lists. It involves using two integer pointers to track different indices or positions within the data structure. By moving these pointers based on certain conditions, the technique can efficiently find pairs, triplets, or sub-arrays that satisfy a specific constraint, often reducing the time complexity from a quadratic O(N^2) brute-force approach to a linear O(N).
There are two main variations:
- Converging Pointers: One pointer starts at the beginning (left) and the other at the end (right). They move towards each other until they meet or cross. This is common for search problems in sorted arrays.
- Same-Direction Pointers: Both pointers start at the beginning and move in the same direction. One pointer (fast or j) advances through the data to process elements, while the other (slow or i) moves conditionally based on certain criteria to mark boundaries or build results. This is different from the Fast and Slow pattern used for cycle detection; a sketch of this variation follows this list.
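As a sketch of the same-direction variation, the following Python snippet removes duplicates from a sorted array in place (Remove Duplicates from Sorted Array, #26); the helper name and sample data are illustrative:
def remove_duplicates(nums: list[int]) -> int:
    """
    Keeps one copy of each value in a sorted list and returns the new length.
    """
    if not nums:
        return 0
    slow = 0  # End of the de-duplicated prefix
    for fast in range(1, len(nums)):  # Scans ahead through the array
        if nums[fast] != nums[slow]:
            slow += 1
            nums[slow] = nums[fast]
    return slow + 1
# Example Usage
nums = [1, 1, 2, 2, 3]
k = remove_duplicates(nums)
print(nums[:k])  # Expected Output: [1, 2, 3]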
Time Complexity:
- Traversal: O(N), where N is the number of elements in the array. Each pointer traverses the array at most once.
- Overall Solution: If the array needs to be sorted first, the dominant time complexity becomes O(N log N) due to the sorting step.
Space Complexity:
- O(1) for the pointers themselves. The algorithm typically operates in-place.
- If sorting is required, the space complexity of the sort can be O(log N) or O(N), depending on the implementation used.
Advantages:
- Optimal Time Complexity: It often provides a linear time solution (O(N)), which is significantly faster than nested-loop approaches (O(N^2)).
- Memory Efficiency: It operates in-place, requiring only constant extra space for the pointer variables.
Disadvantages:
- Often Requires Sorted Data: Many common patterns (especially converging pointers for sum/search problems) only work if the array is sorted. This adds a preliminary O(N log N) sorting cost.
- Limited Applicability: The technique is not a general-purpose solution. It is only effective for problems where a decision can be made to move one of the pointers based on the values at the current positions or specific problem constraints.
Problem: Two Sum II - Input Array Is Sorted
- Given a 1-indexed array of integers numbers that is already sorted in non-decreasing order, find two numbers such that they add up to a specific target number.
- Return the indices of the two numbers (1-indexed) as an integer array [index1, index2] of length 2.
- The tests are generated such that there is exactly one solution, and you may not use the same element twice.
This example finds if a pair of elements in a sorted array sums to a target value. The key insight is that in a sorted array, moving the left pointer right increases the sum, while moving the right pointer left decreases the sum.
- Initialize: Create a left pointer at the start of the array (index 0) and a right pointer at the end (index N-1).
- Loop: Continue iterating as long as the left pointer is less than the right pointer. In each iteration:
  - Check Condition: Calculate the sum of the elements at the current pointer positions: current_sum = numbers[left] + numbers[right].
  - Move Pointers based on the comparison:
    - If current_sum equals the target, a solution has been found.
    - If current_sum is less than the target, the sum needs to be larger, so move the left pointer one step to the right (left++) to include a larger value.
    - If current_sum is greater than the target, the sum needs to be smaller, so move the right pointer one step to the left (right--) to include a smaller value.
- Terminate: The loop ends when the pointers meet or cross, indicating that all possible pairs have been checked.
This code solves the "Two Sum II" problem, where the input array is already sorted.
def two_sum_sorted(numbers: list[int], target: int) -> list[int]:
"""
Finds two numbers in a sorted list that add up to a target.
Returns the 1-based indices of the two numbers.
"""
left, right = 0, len(numbers) - 1
while left < right:
current_sum = numbers[left] + numbers[right]
if current_sum == target:
# Return 1-based indices as required by the problem
return [left + 1, right + 1]
elif current_sum < target:
left += 1 # Move left pointer to increase the sum
else:
right -= 1 # Move right pointer to decrease the sum
return [] # Return empty list if no solution is found
# Example Usage
nums = [2, 7, 11, 15]
target = 9
print(f"Indices for target {target}: {two_sum_sorted(nums, target)}") # Expected Output: [1, 2]
nums2 = [2, 3, 4]
target2 = 6
print(f"Indices for target {target2}: {two_sum_sorted(nums2, target2)}") # Expected Output: [1, 3]
Java Example:
import java.util.Arrays;
public class TwoPointersExample {
public static int[] twoSumSorted(int[] numbers, int target) {
int left = 0;
int right = numbers.length - 1;
while (left < right) {
int currentSum = numbers[left] + numbers[right];
if (currentSum == target) {
// Return 1-based indices
return new int[]{left + 1, right + 1};
} else if (currentSum < target) {
left++; // Move left pointer to increase the sum
} else {
right--; // Move right pointer to decrease the sum
}
}
// Return empty array or throw an exception if no solution exists
return new int[]{};
}
public static void main(String[] args) {
int[] nums = {2, 7, 11, 15};
int target = 9;
int[] result = twoSumSorted(nums, target);
// Expected Output: Indices for target 9: [1, 2]
System.out.println("Indices for target " + target + ": " + Arrays.toString(result));
int[] nums2 = {2, 3, 4};
int target2 = 6;
int[] result2 = twoSumSorted(nums2, target2);
// Expected Output: Indices for target 6: [1, 3]
System.out.println("Indices for target " + target2 + ": " + Arrays.toString(result2));
}
}
Visual Example:
Finding a target of 9 in [2, 7, 11, 15]:
Step 1:
L R
[2, 7, 11, 15] sum = 2 + 15 = 17. 17 > 9. Move R left.
Step 2:
L R
[2, 7, 11, 15] sum = 2 + 11 = 13. 13 > 9. Move R left.
Step 3:
L R
[2, 7, 11, 15] sum = 2 + 7 = 9. 9 == 9. Found. Return [L, R].
Example Problems:
- Two Sum II - Input Array Is Sorted (#167, Medium): The classic application of converging pointers.
- Container With Most Water (#11, Medium): A clever use of converging pointers to maximize an area.
- 3Sum (#15, Medium): A popular and more complex problem where a two-pointer approach is used inside a loop to find triplets.
- Valid Palindrome (#125, Easy): Uses converging pointers to check characters from both ends of a string.
- Squares of a Sorted Array (#977, Easy): A clever use of converging pointers on the original array to build the sorted result array in linear time (a sketch follows this list).
- Remove Duplicates from Sorted Array (#26, Easy): A canonical example of the same-direction pointer pattern where one pointer tracks unique elements while another scans ahead.
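As a sketch of the Squares of a Sorted Array idea noted above, the following snippet fills the result from the back, always taking the larger square from one of the two ends of the array (the function name and sample data are illustrative):
def sorted_squares(nums: list[int]) -> list[int]:
    """
    Returns the squares of a sorted array, in sorted order.
    """
    left, right = 0, len(nums) - 1
    result = [0] * len(nums)
    # Fill the result from largest to smallest; the largest square
    # always comes from one of the two ends of the original array.
    for i in range(len(nums) - 1, -1, -1):
        if abs(nums[left]) > abs(nums[right]):
            result[i] = nums[left] ** 2
            left += 1
        else:
            result[i] = nums[right] ** 2
            right -= 1
    return result
# Example Usage
print(sorted_squares([-4, -1, 0, 3, 10]))  # Expected Output: [0, 1, 9, 16, 100]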
Fast and Slow Pointers
Definition: The Fast and Slow Pointers technique is a specific application of the two-pointers pattern, most famously known as Floyd's Tortoise and Hare algorithm. It involves two pointers moving through a sequence, typically a linked list, at different speeds. The slow pointer moves one step at a time, while the fast pointer moves two steps. This difference in speed is used to detect cycles, find the middle of a list, or identify other specific properties within the sequence. If the pointers ever meet, it proves the existence of a cycle.
Time Complexity: O(N), where N is the total number of nodes in the list. In the case of a cycle, the slow pointer will not traverse more than N nodes before meeting the fast pointer.
Space Complexity: O(1), as the algorithm only requires two extra variables to store the pointers, regardless of the size of the input list.
Advantages:
- Highly Efficient: Solves cycle-related problems in linear time and constant space, which is optimal.
- Elegant: Provides a clever and concise solution to problems that would otherwise require more complex approaches, such as using a hash set to track visited nodes (which would use O(N) space).
Disadvantages:
- Specialized: Its application is narrow and limited almost exclusively to problems involving cycles in sequences or finding specific positions like the middle element.
- Implementation Nuance: While the concept is simple, correctly handling edge cases (like lists with fewer than two nodes) and implementing variations (like finding the start of the cycle) requires careful attention to detail.
Problem: Linked List Cycle
- Given head, the head of a linked list, determine if the linked list has a cycle in it.
- There is a cycle in a linked list if there is some node in the list that can be reached again by continuously following the next pointer.
- Return true if there is a cycle in the linked list. Otherwise, return false.
This process uses Floyd's Tortoise and Hare Algorithm to detect cycles efficiently.
- Initialize: Create two pointers, slow and fast, and point both to the head of the linked list.
- Loop: Start a loop that continues as long as the fast pointer and the node immediately after it (fast.next) are not null. This prevents a NullPointerException when advancing the fast pointer.
- Move Pointers: Inside the loop, advance the pointers at their different speeds:
  - slow = slow.next (moves one step)
  - fast = fast.next.next (moves two steps)
- Check for Intersection: After moving, check if slow and fast are pointing to the same node. If slow == fast, a cycle has been detected, and you can return true.
- Terminate: If the loop completes without the pointers meeting, it means the fast pointer has reached the end of the list, so no cycle exists. Return false.
This code detects a cycle in a linked list. A simple ListNode class is included for context.
class ListNode:
def __init__(self, val=0, next=None):
self.val = val
self.next = next
def has_cycle(head: ListNode) -> bool:
"""
Determines if a linked list has a cycle in it.
"""
if not head or not head.next:
return False
slow = head
fast = head
    while fast and fast.next:
        # Advance the pointers first, then compare them; checking before
        # moving would incorrectly report a cycle on the first iteration.
        slow = slow.next
        fast = fast.next.next
        if slow == fast:
            return True
return False
# Example Usage: 1 -> 2 -> 3 -> 4 -> 2 (cycle)
node1 = ListNode(1)
node2 = ListNode(2)
node3 = ListNode(3)
node4 = ListNode(4)
node1.next = node2
node2.next = node3
node3.next = node4
node4.next = node2 # Creates the cycle
print(f"Does the list have a cycle? {has_cycle(node1)}") # Expected Output: True
# Example 2: 1 -> 2 -> 3 (no cycle)
nodeA = ListNode(1)
nodeB = ListNode(2)
nodeC = ListNode(3)
nodeA.next = nodeB
nodeB.next = nodeC
print(f"Does the list have a cycle? {has_cycle(nodeA)}") # Expected Output: False
Java Example:
class ListNode {
int val;
ListNode next;
ListNode(int x) {
val = x;
next = null;
}
}
public class CycleDetection {
public static boolean hasCycle(ListNode head) {
if (head == null || head.next == null) {
return false;
}
ListNode slow = head;
ListNode fast = head;
        while (fast != null && fast.next != null) {
            // Advance the pointers first, then compare them; checking before
            // moving would incorrectly report a cycle on the first iteration.
            slow = slow.next;
            fast = fast.next.next;
            if (slow == fast) {
                return true;
            }
        }
return false;
}
public static void main(String[] args) {
// Example Usage: 1 -> 2 -> 3 -> 4 -> 2 (cycle)
ListNode node1 = new ListNode(1);
ListNode node2 = new ListNode(2);
ListNode node3 = new ListNode(3);
ListNode node4 = new ListNode(4);
node1.next = node2;
node2.next = node3;
node3.next = node4;
node4.next = node2; // Creates the cycle
// Expected Output: Does the list have a cycle? true
System.out.println("Does the list have a cycle? " + hasCycle(node1));
// Example 2: 1 -> 2 -> 3 (no cycle)
ListNode nodeA = new ListNode(1);
ListNode nodeB = new ListNode(2);
ListNode nodeC = new ListNode(3);
nodeA.next = nodeB;
nodeB.next = nodeC;
// Expected Output: Does the list have a cycle? false
System.out.println("Does the list have a cycle? " + hasCycle(nodeA));
}
}
Visual Example:
Detecting a cycle in 1 -> 2 -> 3 -> 4 -> 2:
Initial: Both pointers start at the head.
 S,F
 [1] -> [2] -> [3] -> [4] --
         ^                 |
         |-----------------|
Step 1: slow moves one step (to 2), fast moves two steps (to 3).
         S      F
 [1] -> [2] -> [3] -> [4] --
         ^                 |
         |-----------------|
Step 2: slow moves to 3; fast moves to 4 and wraps around to 2.
         F      S
 [1] -> [2] -> [3] -> [4] --
         ^                 |
         |-----------------|
Step 3: slow moves to 4; fast moves to 3 and then 4 - the pointers meet at node 4, so a cycle is detected.
                      S,F
 [1] -> [2] -> [3] -> [4] --
         ^                 |
         |-----------------|
Example Problems:
- Linked List Cycle (#141, Easy): The classic problem of detecting if a cycle exists.
- Middle of the Linked List (#876, Easy): When the fast pointer reaches the end of the list, the slow pointer will be at the middle (a sketch follows this list).
- Linked List Cycle II (#142, Medium): A more advanced version that requires finding the exact node where the cycle begins.
- Happy Number (#202, Easy): A creative application where the sequence of numbers generated from the "happy" process is treated as a linked list; if a number repeats, it has entered a cycle.
- Palindrome Linked List (#234, Easy): The technique can be used to find the midpoint, reverse the second half of the list, and then compare the two halves.
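As a sketch of the Middle of the Linked List variant mentioned above, the snippet below reuses the ListNode class from the example in this section; when fast runs off the end, slow sits at the middle (the second middle node for even-length lists). The helper name and sample list are illustrative:
def find_middle(head: ListNode) -> ListNode:
    """
    Returns the middle node of a linked list.
    """
    slow = fast = head
    while fast and fast.next:
        slow = slow.next       # Moves one step
        fast = fast.next.next  # Moves two steps
    return slow
# Example Usage: 1 -> 2 -> 3 -> 4 -> 5, the middle node is 3
head = ListNode(1)
head.next = ListNode(2)
head.next.next = ListNode(3)
head.next.next.next = ListNode(4)
head.next.next.next.next = ListNode(5)
print(f"Middle value: {find_middle(head).val}")  # Expected Output: Middle value: 3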
Sliding Window
Definition: The Sliding Window technique is an algorithmic approach used for problems that involve finding an optimal or specific contiguous sub-array or sub-string. It uses two pointers, often named left and right (or start and end), to define a "window" over the data. The right pointer expands the window, and the left pointer shrinks it. By sliding this window across the data structure, the algorithm avoids redundant calculations that would occur in a nested-loop brute-force approach, typically achieving a linear time complexity. The window can be of a fixed size or a variable size depending on the problem's constraints.
Time Complexity: O(N), where N is the number of elements in the input array or string. Each pointer (left and right) traverses the data at most once.
Space Complexity: O(k), where k is the number of unique elements stored within the window. In problems involving a fixed-size character set (e.g., ASCII), this is often considered constant space, O(1).
Advantages:
- Optimal Time Complexity: It efficiently reduces the time complexity for many sub-array and sub-string problems from O(N^2) or O(N^3) down to a linear O(N).
- Intuitive: The concept of a window sliding over data is a natural way to conceptualize and solve problems involving contiguous segments.
Disadvantages:
- Limited Scope: The pattern is only applicable to problems on sequential data (arrays, strings) where you need to analyze contiguous blocks. It cannot be used if the sub-array elements are not required to be adjacent.
- Tricky Logic: The conditions for shrinking the window (moving the left pointer) can be complex and require careful state management to ensure correctness.
Problem: Longest Substring Without Repeating Characters
- Given a string s, find the length of the longest substring without repeating characters.
- A substring is a contiguous non-empty sequence of characters within a string.
This example finds the length of the longest substring without repeating characters.
- Initialize: Create a left pointer at 0, a variable max_length at 0, and a data structure to track characters in the current window (e.g., a hash map storing characters and their most recent indices).
- Expand: Start a loop with a right pointer, iterating from the beginning to the end of the string. This right pointer expands the window.
- Check Validity: At each step, examine the character char at the right pointer's position.
  - Check if char is already in the window's hash map.
  - If it is, the window contains a duplicate.
- Shrink: If the window is invalid (contains a duplicate), shrink it from the left by moving the left pointer. A common strategy is to move left to the position immediately after the previous occurrence of the duplicate character.
- Update State: After ensuring the window is valid, update the hash map with the current char and its right index.
- Update Result: Calculate the current window's size (right - left + 1) and update max_length = max(max_length, current_size).
- Terminate: After the loop completes, max_length holds the answer.
This code solves "Longest Substring Without Repeating Characters" (#3).
def length_of_longest_substring(s: str) -> int:
"""
Finds the length of the longest substring without repeating characters.
"""
char_map = {} # Stores the last seen index of each character
left = 0
max_length = 0
for right in range(len(s)):
char = s[right]
# If the character is already in the map and its index is within the current window
if char in char_map and char_map[char] >= left:
# Shrink the window by moving the left pointer
left = char_map[char] + 1
# Update the last seen index of the character
char_map[char] = right
# Update the maximum length found so far
max_length = max(max_length, right - left + 1)
return max_length
# Example Usage
test_string = "pwwkew"
# The longest substring is "wke", with length 3.
print(f"Longest substring length for '{test_string}': {length_of_longest_substring(test_string)}") # Expected Output: 3
test_string_2 = "bbbbb"
# The longest substring is "b", with length 1.
print(f"Longest substring length for '{test_string_2}': {length_of_longest_substring(test_string_2)}") # Expected Output: 1
Java Example:
import java.util.HashMap;
import java.util.Map;
public class SlidingWindowExample {
public static int lengthOfLongestSubstring(String s) {
Map<Character, Integer> charMap = new HashMap<>();
int left = 0;
int maxLength = 0;
for (int right = 0; right < s.length(); right++) {
char currentChar = s.charAt(right);
// If the map contains the character and its last occurrence is within the window
if (charMap.containsKey(currentChar) && charMap.get(currentChar) >= left) {
// Shrink the window from the left.
// Move left to the position after the last occurrence of currentChar.
// We use Math.max to ensure 'left' only moves forward.
left = Math.max(left, charMap.get(currentChar) + 1);
}
// Update the character's last seen index
charMap.put(currentChar, right);
// Calculate and update the max length
maxLength = Math.max(maxLength, right - left + 1);
}
return maxLength;
}
public static void main(String[] args) {
String testString = "pwwkew";
// The longest substring is "wke", with length 3.
System.out.println("Longest substring length for '" + testString + "': " + lengthOfLongestSubstring(testString));
String testString2 = "bbbbb";
// The longest substring is "b", with length 1.
System.out.println("Longest substring length for '" + testString2 + "': " + lengthOfLongestSubstring(testString2));
}
}
Visual Example:
Finding the longest substring in "pwwkew":
String: [p, w, w, k, e, w]
Step 1: R=0, L=0. Window: [p]. Max Len: 1.
Step 2: R=1, L=0. Window: [p, w]. Max Len: 2.
Step 3: R=2, L=0. Char 'w' is a repeat. Shrink L.
L was 0, new L becomes index_of_last_w + 1 = 1 + 1 = 2.
Window: [w]. Max Len: 2.
Step 4: R=3, L=2. Window: [w, k]. Max Len: 2.
Step 5: R=4, L=2. Window: [w, k, e]. Max Len: 3.
Step 6: R=5, L=2. Char 'w' is a repeat. Shrink L.
L was 2, new L becomes index_of_last_w + 1 = 2 + 1 = 3.
Window: [k, e, w]. Max Len: 3.
Loop ends. Final max length is 3.
Example Problems:
- Longest Substring Without Repeating Characters (#3, Medium): The classic variable-size window problem.
- Minimum Size Subarray Sum (#209, Medium): Find the smallest contiguous sub-array whose sum is greater than or equal to a target (a sketch follows this list).
- Longest Repeating Character Replacement (#424, Medium): A more complex window where the validity condition involves character counts and a number of allowed replacements.
- Permutation in String (#567, Medium): A fixed-size window problem where you check if any permutation of one string exists as a substring in another.
- Find All Anagrams in a String (#438, Medium): Similar to #567, uses a fixed-size window to find all starting indices of anagrams.
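As a sketch of the Minimum Size Subarray Sum variant noted above, the window below expands with right and shrinks from left while the window sum still meets the target; the function name and test values are illustrative:
def min_subarray_len(target: int, nums: list[int]) -> int:
    """
    Returns the length of the smallest contiguous sub-array with sum >= target,
    or 0 if no such sub-array exists.
    """
    left = 0
    window_sum = 0
    best = float('inf')
    for right in range(len(nums)):
        window_sum += nums[right]
        # Shrink from the left while the window still satisfies the condition
        while window_sum >= target:
            best = min(best, right - left + 1)
            window_sum -= nums[left]
            left += 1
    return 0 if best == float('inf') else best
# Example Usage
print(min_subarray_len(7, [2, 3, 1, 2, 4, 3]))  # Expected Output: 2 (the sub-array [4, 3])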
Linked List In-Place Reversal
Definition: The Linked List In-Place Reversal is a fundamental technique for manipulating linked lists. It reverses the order of the nodes in the list by modifying their next pointers directly, without allocating additional memory for a new list or a secondary data structure. The standard iterative approach uses three pointers (previous, current, and next_node) to traverse the list, reversing the direction of the next pointer for each node one by one.
Time Complexity: O(N), where N is the number of nodes in the list. The algorithm must visit each node exactly once to reverse its pointer.
Space Complexity: O(1) for the iterative solution. It only requires a few constant-space variables for the pointers, making it highly memory-efficient. A recursive solution would have O(N) space complexity due to the call-stack depth (see the sketch below).
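For contrast, here is a minimal recursive sketch of the reversal (assuming a node class with val and next fields, like the ListNode used later in this section); the O(N) space comes from the recursion depth rather than any explicit data structure:
def reverse_list_recursive(head):
    """
    Recursively reverses a singly linked list and returns the new head.
    """
    # Base case: an empty list or a single node is already reversed.
    if head is None or head.next is None:
        return head
    # Reverse everything after the current node first.
    new_head = reverse_list_recursive(head.next)
    # Point the following node back at the current one, then cut the old link.
    head.next.next = head
    head.next = None
    return new_head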
Advantages:
- Memory Efficiency: The primary advantage is its O(1) space complexity, which is optimal and crucial in memory-constrained environments.
- Foundational Pattern: It serves as a building block for solving many more complex linked list problems, such as reversing a sublist or checking for palindromes.
Disadvantages:
- Pointer Management: The manipulation of multiple pointers can be confusing and error-prone. It is critical to update the pointers in the correct sequence to avoid losing the reference to the rest of the list.
- Destructive: The process modifies the original list structure. If the original order is needed later, a copy must be made beforehand.
Problem: Reverse Linked List
- Given the head of a singly linked list, reverse the list, and return the reversed list.
- You should reverse the list in place without using extra space for another list structure.
This process reverses a linked list iteratively using three pointers.
- Initialize Pointers: Create two pointers: previous initialized to null, and current initialized to the head of the list.
- Loop: Start a loop that continues as long as the current pointer is not null.
- Store the Next Node: Inside the loop, before changing any pointers, save a reference to the next node in the sequence: next_node = current.next. This prevents losing the rest of the list.
- Reverse the Pointer: Re-assign the current node's next pointer to point backward to the previous node: current.next = previous.
- Advance Pointers: Move the previous and current pointers one step forward for the next iteration. previous becomes the node you just processed, and current becomes the node you saved in step 3: previous = current, then current = next_node.
- Terminate: Once the loop finishes, current will be null, and previous will be pointing to the last node of the original list, which is now the new head of the reversed list. Return previous.
This code solves "Reverse Linked List" (#206).
class ListNode:
def __init__(self, val=0, next=None):
self.val = val
self.next = next
def reverse_list(head: ListNode) -> ListNode:
"""
Reverses a singly linked list in-place.
"""
previous = None
current = head
while current:
# Store the next node before we overwrite current.next
next_node = current.next
# Reverse the current node's pointer
current.next = previous
# Move pointers one position ahead
previous = current
current = next_node
# The new head is the last node we processed
return previous
# Helper function to create a list from values
def create_list(values):
if not values:
return None
head = ListNode(values[0])
current = head
for val in values[1:]:
current.next = ListNode(val)
current = current.next
return head
# Helper function to print a list
def print_list(node):
while node:
print(node.val, end=" -> ")
node = node.next
print("None")
# Example Usage: 1 -> 2 -> 3 -> 4 -> 5
head = create_list([1, 2, 3, 4, 5])
print("Original list:")
print_list(head)
reversed_head = reverse_list(head)
print("Reversed list:")
print_list(reversed_head) # Expected Output: 5 -> 4 -> 3 -> 2 -> 1 -> None
Java Example:
class ListNode {
int val;
ListNode next;
ListNode(int val) { this.val = val; }
ListNode(int val, ListNode next) { this.val = val; this.next = next; }
}
public class ReverseLinkedList {
public static ListNode reverseList(ListNode head) {
ListNode previous = null;
ListNode current = head;
while (current != null) {
ListNode nextNode = current.next; // Store the next node
current.next = previous; // Reverse the link
previous = current; // Move previous one up
current = nextNode; // Move current one up
}
return previous; // previous is the new head
}
// Helper to print the list
public static void printList(ListNode node) {
while (node != null) {
System.out.print(node.val + " -> ");
node = node.next;
}
System.out.println("null");
}
public static void main(String[] args) {
// Example Usage: 1 -> 2 -> 3 -> 4 -> 5
ListNode head = new ListNode(1, new ListNode(2, new ListNode(3, new ListNode(4, new ListNode(5)))));
System.out.println("Original list:");
printList(head);
ListNode reversedHead = reverseList(head);
System.out.println("Reversed list:");
printList(reversedHead); // Expected Output: 5 -> 4 -> 3 -> 2 -> 1 -> null
}
}
Visual Example:
Reversing 1 -> 2 -> 3 -> null:
Initial State:
prev curr next
null [1] -> [2] -> [3] -> null
Iteration 1:
1. next = curr.next (points to [2])
2. curr.next = prev (points to null)
[1] -> null
3. prev = curr (prev is now [1])
4. curr = next (curr is now [2])
State:
prev curr next
null <- [1] [2] -> [3] -> null
Iteration 2:
1. next = curr.next (points to [3])
2. curr.next = prev (points to [1])
[2] -> [1] -> null
3. prev = curr (prev is now [2])
4. curr = next (curr is now [3])
State:
prev curr next
null <- [1] <- [2] [3] -> null
Iteration 3:
1. next = curr.next (points to null)
2. curr.next = prev (points to [2])
[3] -> [2] -> [1] -> null
3. prev = curr (prev is now [3])
4. curr = next (curr is now null)
Loop terminates. Return prev.
Final List: [3] -> [2] -> [1] -> null
Example Problems:
- Reverse Linked List (#206, Easy): The canonical problem for this pattern.
- Reverse Linked List II (#92, Medium): Requires reversing only a specific portion of the list, from position m to n.
- Palindrome Linked List (#234, Easy): A common solution involves finding the middle of the list, reversing the second half, and then comparing it with the first half (a sketch follows this list).
- Reverse Nodes in k-Group (#25, Hard): A more complex version where you reverse the list in contiguous chunks of size k.
- Swap Nodes in Pairs (#24, Medium): While not a full reversal, it uses the same underlying pointer manipulation principles.
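As a sketch of the Palindrome Linked List approach mentioned above, the snippet below combines the fast/slow middle-finding pattern with the reverse_list function and list helpers defined earlier in this section; the function name and sample list are illustrative:
def is_palindrome(head: ListNode) -> bool:
    """
    Checks whether a linked list reads the same forwards and backwards.
    """
    # 1. Find the middle of the list with fast and slow pointers.
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
    # 2. Reverse the second half in place.
    second = reverse_list(slow)
    # 3. Compare the two halves node by node.
    first = head
    while second:
        if first.val != second.val:
            return False
        first = first.next
        second = second.next
    return True
# Example Usage: 1 -> 2 -> 2 -> 1 is a palindrome
head = create_list([1, 2, 2, 1])
print(f"Is palindrome? {is_palindrome(head)}")  # Expected Output: Is palindrome? True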
Hash Maps for Frequency and Lookups
Definition: A Hash Map (also known as a dictionary, hash table, or associative array) is a data structure that stores data as key-value pairs. Its primary strength is providing average-case constant time complexity, O(1), for insertions, deletions, and lookups. This makes it invaluable for algorithmic problems that require efficient data retrieval, state tracking, or eliminating nested loops. In algorithms, it's most often used in three ways:
- Frequency Counting: Keys are the elements from an input, and values are their counts (frequencies). A sketch follows this list.
- Fast Lookups: Storing seen elements or their properties to quickly check for their existence or retrieve associated data.
- Grouping and Classification: For problems like anagrams, a canonical representation of an item (like its sorted form) is used as the key to group all items that share that representation.
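As a sketch of the frequency-counting usage, the snippet below solves Valid Anagram (#242) by comparing the character-frequency maps of two strings; collections.Counter is one convenient way to build such a map in Python, and the test values are illustrative:
from collections import Counter
def is_anagram(s: str, t: str) -> bool:
    """
    Two strings are anagrams exactly when their character frequency maps match.
    """
    return Counter(s) == Counter(t)
# Example Usage
print(is_anagram("listen", "silent"))  # Expected Output: True
print(is_anagram("rat", "car"))        # Expected Output: False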
Time Complexity:
- Insertion, Deletion, Lookup: Average O(1). The worst case is O(N) due to hash collisions.
- Overall Algorithm Complexity: Varies by problem. For simple operations: O(N), where N is the number of elements. For problems requiring key transformation (like sorting for anagrams): O(N * f(K)), where f(K) is the cost of the key transformation operation.
Space Complexity: O(N) to O(N * K), depending on the problem. For frequency counting: O(N) for the distinct elements. For grouping problems: up to O(N * K), where K is the average size of the stored values.
Advantages:
- Extremely Fast: The O(1) average time for map operations is its greatest advantage, allowing for efficient grouping and retrieval.
- Versatile and Flexible: It's a general-purpose tool that can solve a vast range of problems, including finding pairs, checking for duplicates, counting frequencies, and grouping items by a shared property.
Disadvantages:
- Memory Usage: The main trade-off is space. A hash map can consume significant memory, up to O(N * K), which can be a limitation for very large datasets.
- Key Generation Overhead: The overall performance depends on the efficiency of generating the canonical key for each item (e.g., the cost of sorting each string).
Problem: Group Anagrams
- Given an array of strings strs, group the anagrams together. You can return the answer in any order.
- An Anagram is a word or phrase formed by rearranging the letters of a different word or phrase, typically using all the original letters exactly once.
This process groups a list of words into lists of anagrams.
- Initialize: Create an empty hash map, anagram_map. The keys will be the canonical representation of a word (e.g., the sorted string), and the values will be the lists of words that are anagrams of each other.
- Loop: Iterate through each word in the input list of strings.
- Create Canonical Key: For each word, create a key that will be identical for all its anagrams. A simple and effective method is to sort the characters of the word. For example, both "eat" and "tea" produce the key "aet".
- Group by Key: Use the generated key to store the word in the hash map.
  - If the key does not exist in anagram_map, create a new entry where the key maps to a new list containing the current word.
  - If the key already exists, append the current word to the list at that key.
- Terminate: After iterating through all the words, the hash map will contain all the anagram groups. The final result is the collection of all values (the lists of words) from the map.
This code solves the "Group Anagrams" problem (#49).
from collections import defaultdict
def group_anagrams(strs: list[str]) -> list[list[str]]:
"""
Groups anagrams together from a list of strings.
"""
# Use a defaultdict to simplify appending to new lists
anagram_map = defaultdict(list)
for word in strs:
# Sort the word to create a canonical key
sorted_word = "".join(sorted(word))
# Append the original word to the list for that key
anagram_map[sorted_word].append(word)
# The values of the map are the groups of anagrams
return list(anagram_map.values())
# Example Usage
words = ["eat", "tea", "tan", "ate", "nat", "bat"]
# Expected Output: [['eat', 'tea', 'ate'], ['tan', 'nat'], ['bat']]
print(f"Anagram groups: {group_anagrams(words)}")
Java Example:
import java.util.*;
public class GroupAnagramsExample {
public static List<List<String>> groupAnagrams(String[] strs) {
if (strs == null || strs.length == 0) {
return new ArrayList<>();
}
Map<String, List<String>> anagramMap = new HashMap<>();
for (String word : strs) {
// Convert string to char array, sort it, then convert back to string
char[] charArray = word.toCharArray();
Arrays.sort(charArray);
String sortedWord = new String(charArray);
// If the key is not present, computeIfAbsent creates a new list
// Then, add the original word to the list for that key
anagramMap.computeIfAbsent(sortedWord, k -> new ArrayList<>()).add(word);
}
return new ArrayList<>(anagramMap.values());
}
public static void main(String[] args) {
String[] words = {"eat", "tea", "tan", "ate", "nat", "bat"};
List<List<String>> result = groupAnagrams(words);
// Expected Output: Anagram groups: [[eat, tea, ate], [bat], [tan, nat]] (order may vary)
System.out.println("Anagram groups: " + result);
}
}
Visual Example:
Grouping anagrams for ["eat", "tea", "tan", "ate", "nat", "bat"]:
Array: ["eat", "tea", "tan", "ate", "nat", "bat"]
Map: {}
1. Word: "eat"
- Key = sorted("eat") -> "aet"
- Map: {"aet": ["eat"]}
2. Word: "tea"
- Key = sorted("tea") -> "aet"
- Map: {"aet": ["eat", "tea"]}
3. Word: "tan"
- Key = sorted("tan") -> "ant"
- Map: {"aet": ["eat", "tea"], "ant": ["tan"]}
4. Word: "ate"
- Key = sorted("ate") -> "aet"
- Map: {"aet": ["eat", "tea", "ate"], "ant": ["tan"]}
5. Word: "nat"
- Key = sorted("nat") -> "ant"
- Map: {"aet": ["eat", "tea", "ate"], "ant": ["tan", "nat"]}
6. Word: "bat"
- Key = sorted("bat") -> "abt"
- Map: {"aet": [...], "ant": [...], "abt": ["bat"]}
Final Result: The values of the map -> [["eat", "tea", "ate"], ["tan", "nat"], ["bat"]]
Example Problems:
- Two Sum (#1, Easy): The canonical fast lookup problem (a sketch follows this list).
- Valid Anagram (#242, Easy): A classic frequency counting problem where you compare the character counts of two strings.
- Group Anagrams (#49, Medium): Uses a hash map to group words. The key can be a sorted version of the word or a character frequency tuple.
- Contains Duplicate (#217, Easy): A simple existence check problem, often solved by iterating and adding to a hash set (a map variant with only keys) and returning true if an element is already present.
- Subarray Sum Equals K (#560, Medium): A more advanced problem that combines a hash map with the prefix sum pattern to find sub-arrays that sum to a target k.
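As a sketch of the Two Sum lookup pattern mentioned above, the snippet below maps each value to its index and checks for the needed complement in a single pass; the function name and test values are illustrative:
def two_sum(nums: list[int], target: int) -> list[int]:
    """
    Returns the indices of two numbers that add up to target.
    """
    seen = {}  # Maps a value to the index where it was seen
    for i, num in enumerate(nums):
        complement = target - num
        if complement in seen:
            return [seen[complement], i]
        seen[num] = i
    return []
# Example Usage
print(two_sum([2, 7, 11, 15], 9))  # Expected Output: [0, 1]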
Monotonic Stack
Definition: A Monotonic Stack is a specialized use of the stack data structure where the elements are always kept in a specific sorted order (either non-increasing or non-decreasing). This property is maintained by popping elements from the stack that would violate the order before pushing a new element. This technique is highly effective for problems involving finding the next greater/smaller element or the previous greater/smaller element for each item in a sequence. The key logic often happens when an element is popped from the stack, as this signifies that a new, larger/smaller element has been found.
Monotonic means varying in such a way that it either never decreases or never increases; i.e., a monotonically increasing array is one where each element is greater than or equal to the previous element, and a monotonically decreasing array is one where each element is less than or equal to the previous element.
Time Complexity: O(N), where N is the number of elements in the input array. Although the implementation may appear to have a nested loop, each element is pushed onto and popped from the stack at most once. This results in amortized O(1) time per element across the entire operation.
Space Complexity: O(N) in the worst-case scenario. If the input array is ordered such that no elements are ever popped (e.g., a decreasing array for a "next greater element" problem), the stack could hold all N elements.
Advantages:
- Highly Efficient: Provides an optimal linear time solution for a class of problems that would naively be solved in O(N^2) time by iterating through all subsequent or previous elements for each item.
- Powerful Pattern: It's a concise pattern for solving problems related to finding the next or previous boundary (greater or smaller value) for elements in an array.
Disadvantages:
- Specialized: The pattern is not general-purpose and only applies to a specific set of problems. Recognizing when to use a monotonic stack can be non-obvious.
- Abstract Logic: The reasoning behind why the stack works and what to do when an element is popped can be less intuitive than more straightforward array traversals.
Problem: Next Greater Element I
- The next greater element of some element x in an array is the first greater element that is to the right of x in the same array.
- You are given two distinct 0-indexed integer arrays nums1 and nums2, where nums1 is a subset of nums2.
- For each element in nums1, find the next greater element in nums2 and return the results as an array.
- If there is no next greater element, return -1 for that number.
This process uses a monotonically decreasing stack to find the next greater element for each number in an array.
- Initialize: Create an empty stack (which will store elements or their indices) and a result data structure (e.g., an array or hash map) to store the output.
- Loop: Iterate through the input array nums from left to right.
- Maintain Monotonicity: For the current element num, start a while loop that runs as long as the stack is not empty AND the element at the top of the stack is less than num.
  - Inside the loop, the element at the top of the stack (stack_top) has found its "next greater element," which is num.
  - Pop stack_top from the stack.
  - Store this relationship in the result structure (e.g., result[stack_top] = num).
- Push: After the while loop finishes, the stack is guaranteed to be monotonically decreasing again. Push the current element num (or its index) onto the stack.
- Terminate: After iterating through all elements, any items remaining on the stack do not have a next greater element. Their result will be a default value (e.g., -1).
This code solves "Next Greater Element I" (#496), a classic application that combines a monotonic stack with a hash map.
def next_greater_element(nums1: list[int], nums2: list[int]) -> list[int]:
"""
Finds the next greater element in nums2 for each element in nums1.
"""
# Map from value x to its next greater element in nums2
next_greater_map = {}
# Use a monotonically decreasing stack
stack = []
# 1. Process nums2 to find the next greater element for all its numbers
for num in nums2:
# While stack is not empty and current num is greater than stack's top
while stack and num > stack[-1]:
# The top of the stack has found its next greater element
next_greater_map[stack.pop()] = num
stack.append(num)
# 2. Any numbers left in the stack have no greater element
for num in stack:
next_greater_map[num] = -1
# 3. Build the result for nums1 using the map
result = [next_greater_map[num] for num in nums1]
return result
# Example Usage
nums1 = [4, 1, 2]
nums2 = [1, 3, 4, 2]
# Expected Output: [-1, 3, -1]
# Explanation:
# - For 4 in nums1, there is no greater element in nums2 to its right. -> -1
# - For 1 in nums1, the next greater is 3.
# - For 2 in nums1, there is no greater element in nums2 to its right. -> -1
print(f"Next greater elements: {next_greater_element(nums1, nums2)}")
Java Example:
import java.util.HashMap;
import java.util.Map;
import java.util.Stack;
import java.util.Arrays;
public class MonotonicStackExample {
public static int[] nextGreaterElement(int[] nums1, int[] nums2) {
Map<Integer, Integer> nextGreaterMap = new HashMap<>();
Stack<Integer> stack = new Stack<>();
// 1. Process nums2 to build the next greater element map
for (int num : nums2) {
while (!stack.isEmpty() && num > stack.peek()) {
nextGreaterMap.put(stack.pop(), num);
}
stack.push(num);
}
// 2. Any numbers left in stack have no greater element
while (!stack.isEmpty()) {
nextGreaterMap.put(stack.pop(), -1);
}
// 3. Build the result array for nums1
int[] result = new int[nums1.length];
for (int i = 0; i < nums1.length; i++) {
result[i] = nextGreaterMap.get(nums1[i]);
}
return result;
}
public static void main(String[] args) {
int[] nums1 = {4, 1, 2};
int[] nums2 = {1, 3, 4, 2};
int[] result = nextGreaterElement(nums1, nums2);
// Expected Output: Next greater elements: [-1, 3, -1]
System.out.println("Next greater elements: " + Arrays.toString(result));
}
}
Visual Example:
Processing nums2 = [1, 3, 4, 2] to build the map:
Array: [1, 3, 4, 2]
Stack: []
Map: {}
1. num = 1: Stack is empty. Push 1.
- Stack: [1]
2. num = 3: 3 > stack.top (1). Pop 1. Set map[1]=3. Push 3.
- Stack: [3]
- Map: {1: 3}
3. num = 4: 4 > stack.top (3). Pop 3. Set map[3]=4. Push 4.
- Stack: [4]
- Map: {1: 3, 3: 4}
4. num = 2: 2 is not > stack.top (4). Push 2.
- Stack: [4, 2]
- Map: {1: 3, 3: 4}
Loop ends. Process remaining stack:
- Pop 2. Set map[2]=-1.
- Pop 4. Set map[4]=-1.
- Final Map: {1: 3, 3: 4, 2: -1, 4: -1}
Example Problems:
- Next Greater Element I (#496, Easy): The canonical problem to learn the pattern.
- Daily Temperatures (#739, Medium): A direct application where you find the next warmer day (next greater element); a sketch follows this list.
- Next Greater Element II (#503, Medium): A variation where the input array is circular.
- Largest Rectangle in Histogram (#84, Hard): A classic hard problem that uses a monotonic stack to find the previous and next smaller elements for each bar to determine the potential width of the rectangle.
- Sum of Subarray Minimums (#907, Medium): An advanced application where you find the previous and next smaller elements to count how many sub-arrays each element is the minimum of.
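As a sketch of the Daily Temperatures application noted above, the stack below holds indices of days still waiting for a warmer temperature; the function name and test values are illustrative:
def daily_temperatures(temperatures: list[int]) -> list[int]:
    """
    For each day, returns how many days until a warmer temperature (0 if none).
    """
    answer = [0] * len(temperatures)
    stack = []  # Indices of days with no warmer day found yet (temperatures are non-increasing on the stack)
    for i, temp in enumerate(temperatures):
        # A warmer day resolves every colder day still waiting on the stack
        while stack and temperatures[stack[-1]] < temp:
            prev = stack.pop()
            answer[prev] = i - prev
        stack.append(i)
    return answer
# Example Usage
print(daily_temperatures([73, 74, 75, 71, 69, 72, 76, 73]))
# Expected Output: [1, 1, 4, 2, 1, 1, 0, 0]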
Remember: The goal isn't just to memorize implementations, but to understand the underlying principles and trade-offs that make each algorithm suitable for specific problems. Practice recognizing patterns and choosing the right algorithmic approach based on problem constraints and requirements.
For part 2, check out Core Algorithms and Patterns Part 2.
For part 3, check out Core Algorithms and Patterns Part 3.
For a comprehensive guide to data structures, check out Core Data Structures and Advanced Data Structures.