
Data Structures for Coding Interviews: Complete Guide with Patterns, Examples, and Study Plan


Most coding interview problems test the same thing: can you pick the right data structure and explain why it’s the right one?

That question trips up more candidates than you’d expect. Not because they lack knowledge, but because they never practiced the decision under pressure. They memorized solutions to individual problems without building the instinct to look at a new constraint and immediately reach for the correct tool.

The good news: you don’t need to master every data structure in a computer science textbook. Interviewers at Google, Meta, Amazon, and every competitive tech company draw from a surprisingly small set. Ten foundational data structures — arrays, hash maps, linked lists, stacks, queues, heaps, trees, graphs, tries, and union find — cover the vast majority of coding interview questions across every difficulty level. (For a breakdown of which data structures each company favors, see our FAANG coding interview guide.)

This guide teaches you all ten: how each structure works, how it appears in interviews, the mistakes candidates make with it, and the order in which to learn them.

Whether you’re preparing for your first coding interview or refining your approach for senior roles, this is the data structures roadmap you need.

Want to practice data structure problems under realistic interview conditions? Try a mock coding interview on Intervu →


The Most Important Data Structures for Coding Interviews

Before going deeper on each one, a quick map of the landscape. These are the core data structures to learn for coding interviews, why interviewers use them, and the types of problems where they appear.

| Data Structure | Why Interviewers Use It | Example Problems |
| --- | --- | --- |
| Arrays | Indexing, iteration, in-place manipulation | Two Sum, Maximum Subarray, Rotate Array |
| Hash Maps | Trading space for time; algorithmic thinking | Group Anagrams, Subarray Sum Equals K, LRU Cache |
| Linked Lists | Pointer reasoning, state management | Reverse Linked List, Merge Two Sorted Lists, Linked List Cycle |
| Stacks | LIFO patterns, recursive thinking | Valid Parentheses, Daily Temperatures, Largest Rectangle |
| Queues | BFS thinking, level-order traversal | Binary Tree Level Order Traversal, Rotting Oranges, Sliding Window Maximum |
| Heaps | Maintaining ordered data efficiently | Kth Largest Element, Merge K Sorted Lists, Find Median from Data Stream |
| Trees | Recursion, traversal, divide-and-conquer | Invert Binary Tree, Lowest Common Ancestor, Serialize and Deserialize |
| Graphs | Deeper algorithmic thinking, modeling | Number of Islands, Course Schedule, Word Ladder |
| Tries | Prefix-based search, dictionary operations | Implement Trie, Word Search II, Autocomplete |
| Union Find | Dynamic grouping, connectivity queries | Accounts Merge, Number of Provinces, Redundant Connection |

Now let’s go deeper on each one.


What Data Structures Should You Learn First

If you’re starting from scratch, the order matters. Each structure builds on concepts from the previous one, and the interview-frequency distribution isn’t uniform.

  1. Arrays and Strings — the foundation. 60%+ of interview problems involve sequences.
  2. Hash Maps and Sets — the most common optimization tool. Converts O(n²) brute force to O(n).
  3. Two Pointers and Sliding Window — the first algorithmic patterns you’ll need, both built on arrays.
  4. Stacks — introduces LIFO thinking and prepares you for monotonic patterns.
  5. Queues — introduces FIFO and BFS, the gateway to graph problems.
  6. Linked Lists — builds pointer reasoning that transfers directly to trees.
  7. Trees — where recursion becomes essential. The most-tested structure at medium difficulty.
  8. Graphs — the deepest test of algorithmic thinking. Builds on BFS/DFS from queues and trees.
  9. Heaps — rounds out your toolkit for top-k and streaming problems.
  10. Tries — specialized prefix matching. Appears in string-heavy interview rounds.
  11. Union Find — clean dynamic connectivity. Shows up in grouping and component problems.

Why this order works: each step introduces one new concept while reinforcing the previous ones. Arrays teach iteration. Hash maps teach the space-time tradeoff. Trees teach recursion. By the time you reach graphs, you’re applying BFS (from queues), DFS (from trees), and hash maps (from step 2) all at once.

For a structured study plan that follows this progression with specific problems and checkpoints, see the four-week plan below or the detailed Grind 75 Pathway.


Arrays and Strings

Two Pointers: The Showdown

Arrays are the most frequently appearing data structure in coding interviews. They’re simple enough to use without setup overhead, but rich enough to support a wide range of algorithmic challenges. Almost every interview problem involving sequences, intervals, or ordered data will touch arrays or strings.

What interviewers actually test with array problems: whether you can recognize efficient patterns and avoid brute-force iteration when something better exists.

How Arrays Appear in Interviews

Most array problems follow a predictable structure: you’re given an array and need to find elements, subarrays, or pairs that satisfy some condition. The brute-force approach is almost always nested loops (O(n²)). The interview question is whether you can do better.

Three patterns handle the majority of array problems:

Two Pointers. Appears whenever you’re working with a sorted array or need to find pairs that satisfy some condition. Place one pointer at each end and move them based on the comparison result.
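A minimal sketch of the two-pointer pattern on a sorted array (the function name and details here are illustrative, not a specific library API):

```python
def pair_with_sum(nums, target):
    """Return indices of two values in a sorted list that sum to target, else None."""
    left, right = 0, len(nums) - 1
    while left < right:
        total = nums[left] + nums[right]
        if total == target:
            return left, right
        if total < target:
            left += 1   # need a larger sum: advance the small end
        else:
            right -= 1  # need a smaller sum: retreat the large end
    return None
```

Because the array is sorted, each comparison safely discards one candidate, giving O(n) instead of the O(n²) nested loop.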

Sliding Window. Tracks something across a contiguous subarray or substring. Maintain a window defined by left and right pointers, expanding and contracting as you traverse.
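The sliding-window idea, sketched on the classic "longest substring without repeating characters" problem (implementation details are one reasonable variant, not the only one):

```python
def longest_unique_substring(s):
    """Length of the longest substring with no repeated characters."""
    seen = {}            # char -> most recent index where it appeared
    left = best = 0
    for right, ch in enumerate(s):
        if ch in seen and seen[ch] >= left:
            left = seen[ch] + 1          # contract: jump past the duplicate
        seen[ch] = right
        best = max(best, right - left + 1)  # expand: window is s[left:right+1]
    return best
```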

Prefix Sums. Lets you answer range-sum queries in O(1) after O(n) preprocessing. Store cumulative sums so any subarray sum can be computed by subtraction.
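A minimal prefix-sum sketch (function names are illustrative): build once in O(n), then answer any range-sum query by a single subtraction.

```python
from itertools import accumulate

def build_prefix(nums):
    """prefix[i] holds sum(nums[:i]), so the array has len(nums) + 1 entries."""
    return [0, *accumulate(nums)]

def range_sum(prefix, i, j):
    """Sum of nums[i:j] in O(1), using the precomputed prefix array."""
    return prefix[j] - prefix[i]
```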

Common Mistakes with Arrays

  • Using nested loops when two pointers or a hash map would give O(n)
  • Forgetting to handle empty arrays and single-element edge cases
  • Off-by-one errors on window boundaries in sliding window problems
  • Not clarifying whether the array is sorted (which unlocks two pointers and binary search)

Practice Problems: Arrays

  • Two Sum
  • Maximum Subarray
  • Rotate Array
  • Best Time to Buy and Sell Stock
  • Longest Substring Without Repeating Characters

When you see a problem asking about contiguous subarrays or pairs in an array, slow down and think about which of these three patterns fits. Explain your reasoning out loud. That’s what interviewers are listening for.


Hash Maps

O(1) vs O(n) — choose wisely

If arrays are the most common data structure in interviews, hash maps are the one candidates reach for most. They’re the Swiss Army knife of algorithmic problem-solving: whenever you need O(1) lookup, frequency counting, or deduplication, a hash map is likely the right tool.

How Hash Maps Appear in Interviews

The signal interviewers look for: do you instinctively recognize when a problem can be solved by trading memory for better time complexity? If your first instinct is to scan an array repeatedly, you’re leaving efficiency on the table. If you immediately think “hash map,” you’ve demonstrated algorithmic intuition.

Ask yourself: Am I searching for something repeatedly? Am I counting occurrences? Do I need to remember something I’ve seen before? If yes, a hash map belongs in your solution.

Hash maps also underpin more complex structures. LRU Cache, a system-design-flavored coding problem, combines a hash map with a doubly linked list and tests whether you can compose data structures together.
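The space-for-time trade in its purest form is Two Sum: one pass, one dictionary (this sketch returns index pairs; exact return conventions vary by problem statement):

```python
def two_sum(nums, target):
    """Indices of two numbers summing to target; O(n) time, O(n) space."""
    seen = {}  # value -> index of where we saw it
    for i, x in enumerate(nums):
        if target - x in seen:        # have we already seen the complement?
            return seen[target - x], i
        seen[x] = i
    return None
```

Compare this with the nested-loop version: the hash map remembers everything scanned so far, so each element is visited exactly once.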

Common Mistakes with Hash Maps

  • Introducing a hash map without discussing the space complexity tradeoff. Interviewers often ask “Can you solve this with less space?” Being prepared shows depth.
  • Using a hash map when a sorted approach or two pointers would avoid the extra space entirely
  • Forgetting to handle hash collisions conceptually (interviewers sometimes ask about this)
  • Not recognizing that sets are hash maps without values, useful for deduplication

Practice Problems: Hash Maps

  • Two Sum
  • Group Anagrams
  • Longest Consecutive Sequence
  • Subarray Sum Equals K
  • LRU Cache
  • Top K Frequent Elements

Linked Lists

Reversing a Linked List: Don't Lose the Next Pointer

Linked list problems require a different kind of careful, deliberate thinking. Unlike arrays where you can jump to any index, linked lists force sequential traversal and manual pointer management. One wrong pointer assignment and your solution silently corrupts the list.

How Linked Lists Appear in Interviews

Interviewers use linked list problems to evaluate whether you can reason carefully about state: tracking multiple pointers simultaneously without losing your place. These problems are also a filter for candidates who can handle edge cases (empty list, single node, cycles) without being prompted.

Three patterns cover most linked list problems:

Slow and fast pointers (Floyd’s Cycle Detection). One pointer moves one step, the other moves two. This detects cycles, finds the middle of a list, and identifies the cycle start. See our Linked List Cycle walkthrough for the full explanation of why the pointers are guaranteed to meet.

Reversal. Reversing a list (or a portion) in place requires tracking previous, current, and next nodes simultaneously. The Reverse Linked List walkthrough covers both the iterative and recursive approaches with every pointer mistake named.

Merging. Merging two sorted lists requires careful pointer manipulation and handling the case where one list runs out. The dummy node pattern eliminates head-insertion special cases. See Merge Two Sorted Lists.
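The reversal pattern above, sketched iteratively (the node class is the usual interview convention; names are illustrative). The comment marks the step where candidates most often go wrong:

```python
class ListNode:
    def __init__(self, val, next=None):
        self.val = val
        self.next = next

def reverse_list(head):
    """Reverse a singly linked list in place; returns the new head."""
    prev, curr = None, head
    while curr:
        nxt = curr.next        # save the next node BEFORE overwriting the pointer
        curr.next = prev       # reverse this link
        prev, curr = curr, nxt
    return prev
```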

Common Mistakes with Linked Lists

  • Not using a dummy/sentinel node to simplify head-insertion logic
  • Losing the next pointer before reassigning current.next during reversal
  • Forgetting to handle single-node and empty-list edge cases
  • Not drawing a diagram before coding. Sketch boxes and arrows. This prevents pointer errors and communicates that you think carefully before writing code.

Practice Problems: Linked Lists

  • Reverse Linked List
  • Merge Two Sorted Lists
  • Linked List Cycle


Stacks

LIFO: Last In, First Out

Stacks follow Last-In, First-Out (LIFO), and that property is surprisingly powerful for a specific class of interview problems: those involving matching, nesting, or tracking the most recently seen element.

How Stacks Appear in Interviews

Whenever you see balanced brackets, nested structures, or a “most recent valid state” concept, think stack. The key insight: stacks naturally model undo behavior. The last thing you pushed is the first thing you’ll need to check.

A more advanced pattern is the monotonic stack, which maintains elements in strictly increasing or decreasing order. The idea: as you iterate through an array, you pop elements from the stack that violate the monotonic property. Each pop answers the question “what’s the next greater (or smaller) element?” for the popped item. This converts an O(n²) nested-loop search into a single O(n) pass. Daily Temperatures (“how many days until a warmer day?”) and Largest Rectangle in Histogram both use this pattern.

  • Monotonic decreasing stack: Use when you need the next greater element. Push elements; pop when the current element is greater than the top.
  • Monotonic increasing stack: Use when you need the next smaller element. Push elements; pop when the current element is smaller than the top.

Stacks also appear implicitly whenever you implement DFS iteratively: you’re simulating the call stack that recursion uses automatically.
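The monotonic decreasing stack, sketched on Daily Temperatures (a minimal version; variable names are illustrative). Each index is pushed and popped at most once, which is where the O(n) bound comes from:

```python
def daily_temperatures(temps):
    """For each day, the number of days until a warmer temperature (0 if never)."""
    answer = [0] * len(temps)
    stack = []  # indices whose temperatures are monotonically decreasing
    for i, t in enumerate(temps):
        while stack and temps[stack[-1]] < t:
            j = stack.pop()       # day j just found its next warmer day: day i
            answer[j] = i - j
        stack.append(i)
    return answer
```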

Common Mistakes with Stacks

  • Popping from an empty stack (always check len(stack) > 0 before popping)
  • Not recognizing that DFS problems can be solved iteratively with a stack
  • Confusing when to use a monotonic increasing vs. decreasing stack

Practice Problems: Stacks

  • Valid Parentheses
  • Daily Temperatures
  • Largest Rectangle in Histogram
  • Min Stack


Queues

Queues follow First-In, First-Out (FIFO), and their primary interview application is BFS (Breadth-First Search). Any time a problem asks about shortest paths, level-order traversal, or spreading outward from a source (fire spreading, water flowing, infection propagating), a queue is the right structure.

How Queues Appear in Interviews

BFS visits nodes layer by layer, making it the natural choice for “minimum number of steps” problems: the first time you reach a node in BFS, you’ve taken the shortest path.

One thing interviewers watch for: do you know why BFS finds shortest paths but DFS doesn’t (in unweighted graphs)? Articulating this distinction demonstrates genuine understanding.

Deques (Double-Ended Queues): Python’s collections.deque supports O(1) appends and pops from both ends. The monotonic deque pattern uses this to maintain a window of candidates in sorted order. In Sliding Window Maximum, you keep a deque of indices where values are monotonically decreasing. As the window slides right, you pop from the back any indices whose values are smaller than the incoming element (they can’t be the max), and pop from the front any indices that have fallen outside the window. The front of the deque always holds the current window’s maximum. This gives O(n) total instead of O(n*k) brute force.
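The monotonic-deque walkthrough above, sketched in Python (one common formulation; details like popping the front before or after pushing vary by implementation):

```python
from collections import deque

def sliding_window_max(nums, k):
    """Maximum of each length-k window, O(n) total."""
    dq = deque()   # indices; their values are monotonically decreasing
    out = []
    for i, x in enumerate(nums):
        if dq and dq[0] <= i - k:
            dq.popleft()              # front index has slid out of the window
        while dq and nums[dq[-1]] < x:
            dq.pop()                  # smaller values can never be the max
        dq.append(i)
        if i >= k - 1:
            out.append(nums[dq[0]])   # front always holds the window maximum
    return out
```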

Common Mistakes with Queues

  • Using a Python list as a queue (O(n) for pop(0)) instead of collections.deque (O(1) for popleft())
  • Marking nodes as visited when dequeuing instead of when enqueuing, which lets duplicates enter the queue and causes redundant processing
  • Not recognizing multi-source BFS problems (starting from multiple sources simultaneously)

Practice Problems: Queues

  • Binary Tree Level Order Traversal
  • Rotting Oranges
  • Sliding Window Maximum


Heaps (Priority Queues)

Min Heap: The Smallest Always Wins

Heaps are a specialized tree-based structure that efficiently maintain the minimum or maximum element at the top. Their superpower in interviews: solving top-k problems. Anytime you need to repeatedly find or maintain the k largest, smallest, or most frequent elements, a heap is the right choice.

How Heaps Appear in Interviews

The pattern trigger: “I need the k-th something” or “I’m processing a stream and need to maintain ordering.”

The dual-heap pattern (max-heap for the lower half, min-heap for the upper half) is a classic for the “Find Median from Data Stream” problem. This pattern appears less frequently but is a strong differentiator for senior candidates.

In Python, heapq implements a min-heap. To simulate a max-heap, negate values. In Java, PriorityQueue defaults to min-heap but accepts a custom comparator. Knowing your language’s heap API cold is a small but meaningful interview signal.
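A size-k min-heap sketch for the k-th largest element, the O(n log k) approach rather than a full sort (function name is illustrative):

```python
import heapq

def kth_largest(nums, k):
    """K-th largest element via a min-heap capped at size k: O(n log k)."""
    heap = []
    for x in nums:
        heapq.heappush(heap, x)
        if len(heap) > k:
            heapq.heappop(heap)   # evict the smallest; heap retains the k largest
    return heap[0]                # the smallest of the k largest = k-th largest
```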

Common Mistakes with Heaps

  • Using a full sort (O(n log n)) when a heap of size k gives O(n log k)
  • Forgetting that Python’s heapq is a min-heap and trying to use it as a max-heap without negation
  • Not recognizing that “merge k sorted things” is a heap problem

Practice Problems: Heaps

  • Kth Largest Element in an Array
  • Merge K Sorted Lists
  • Find Median from Data Stream


Trees

Recursion: Trust the Process

Tree problems are among the most important data structures for coding interviews, particularly for mid-to-senior roles. Binary trees and BSTs appear everywhere, and interviewers love them because they naturally test recursive thinking and base case reasoning.

How Trees Appear in Interviews

Most tree problems are solved with DFS (usually recursive) or BFS (using a queue). For any tree problem, ask: “What do I need from my left subtree? From my right subtree? How do I combine them?” That framing will crack most tree problems.
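That framing in miniature, on Maximum Depth of Binary Tree (node class and names are the usual interview convention, shown here as a sketch):

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def max_depth(node):
    """Need from each subtree: its depth. Combine: take the max, add one."""
    if node is None:
        return 0   # base case: an empty subtree has depth 0
    return 1 + max(max_depth(node.left), max_depth(node.right))
```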

BST properties are a separate category. Binary search trees have the useful property that in-order traversal produces a sorted sequence. This property is the key to “validate a BST” or “find the kth smallest element.”

When you see a tree problem, verbalize your traversal strategy before coding: pre-order, in-order, post-order DFS, or BFS? Why? That narration is what interviewers listen for.

Common Mistakes with Trees

  • Forgetting the base case (if not node: return ...) or getting it wrong
  • Not recognizing when BFS (level-order) is cleaner than DFS
  • Confusing pre-order, in-order, and post-order traversal (each has different use cases)
  • Not leveraging BST properties when the tree is explicitly a BST

Practice Problems: Trees

  • Invert Binary Tree
  • Maximum Depth of Binary Tree
  • Lowest Common Ancestor of a Binary Tree
  • Validate Binary Search Tree
  • Serialize and Deserialize Binary Tree


Graphs

BFS vs DFS: Choose Your Adventure

Graph problems represent the deepest test of algorithmic thinking in most coding interviews. They require modeling the problem correctly (recognizing that it is a graph problem), choosing the right algorithm, and implementing cleanly. Candidates who handle graph problems well tend to clear the bar for senior roles.

How Graphs Appear in Interviews

Many interview problems are graph problems in disguise. A grid of 0s and 1s? That’s a graph where each cell is a node connected to its neighbors. Course prerequisites? That’s a directed acyclic graph. Word transformations? That’s an implicit graph where edges connect words differing by one letter.

Core graph algorithms for interviews:

  • DFS on Graphs: Connectivity, cycle detection, topological exploration. Unlike tree DFS, you must track visited nodes explicitly.
  • BFS on Graphs: Shortest path in unweighted graphs. Multi-source BFS (starting from multiple nodes) is a powerful variant.
  • Topological Sort: Dependencies and ordering (course prerequisites). Implemented via DFS or Kahn’s algorithm (BFS with in-degrees).
  • Dijkstra’s: Weighted graphs with non-negative edges. In most coding interviews, the graph is unweighted and BFS suffices.
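A grid-as-graph sketch on Number of Islands, using BFS with visited-on-enqueue (assumes a non-empty grid of "0"/"1" strings, per the usual problem statement):

```python
from collections import deque

def num_islands(grid):
    """Count connected components of '1' cells; each cell is a node joined to its 4 neighbors."""
    if not grid:
        return 0
    rows, cols = len(grid), len(grid[0])
    seen = set()
    count = 0

    def bfs(sr, sc):
        q = deque([(sr, sc)])
        seen.add((sr, sc))                    # mark visited on enqueue, not dequeue
        while q:
            r, c = q.popleft()
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] == "1" and (nr, nc) not in seen):
                    seen.add((nr, nc))
                    q.append((nr, nc))

    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == "1" and (r, c) not in seen:
                count += 1                    # each unvisited land cell starts a new island
                bfs(r, c)
    return count
```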

Common Mistakes with Graphs

  • Not clarifying the graph structure before coding: directed or undirected? Weighted or unweighted? Cycles? Connected? These questions signal maturity.
  • Forgetting the visited set, causing infinite loops on cyclic graphs
  • Using DFS for shortest-path problems (BFS is correct for unweighted graphs)
  • Not recognizing that a grid problem is a graph problem

Practice Problems: Graphs

  • Number of Islands
  • Course Schedule
  • Word Ladder
  • Clone Graph


Tries (Prefix Trees)

Tries are specialized tree structures where each node represents a character, and paths from root to leaf spell out complete strings. They’re the right tool whenever a problem involves prefix matching, autocomplete, or searching a dictionary of words.

How Tries Appear in Interviews

The trigger: any problem that asks you to search, insert, or match strings by prefix. Standard hash maps can tell you “does this exact word exist?” but tries can answer “what words start with this prefix?” in O(prefix length) time.

The classic interview problem is Implement Trie, which tests whether you can build the insert, search, and startsWith operations from scratch. Word Search II extends this by combining a trie with DFS backtracking on a grid.
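A minimal trie sketch with the three operations named above (methods use snake_case here; the problem statement spells the last one startsWith):

```python
class Trie:
    """Each node maps a character to a child node; is_end marks a complete word."""

    def __init__(self):
        self.children = {}
        self.is_end = False

    def insert(self, word):
        node = self
        for ch in word:
            node = node.children.setdefault(ch, Trie())
        node.is_end = True

    def _walk(self, s):
        """Follow s character by character; return the final node, or None."""
        node = self
        for ch in s:
            if ch not in node.children:
                return None
            node = node.children[ch]
        return node

    def search(self, word):
        node = self._walk(word)
        return node is not None and node.is_end   # must end at a word, not a mere prefix

    def starts_with(self, prefix):
        return self._walk(prefix) is not None
```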

Common Mistakes with Tries

  • Building a trie when a hash set would suffice (if you only need exact-match lookup, a trie is overkill)
  • Forgetting the is_end flag that marks word boundaries vs. mere prefixes
  • Not recognizing when a problem’s constraints (e.g., “find all words that share a prefix”) point to a trie over sorting or hashing

Practice Problems: Tries

  • Implement Trie (Prefix Tree)
  • Word Search II


Union Find (Disjoint Set)

Union Find tracks a collection of elements partitioned into non-overlapping groups. It supports two operations: find (which group does this element belong to?) and union (merge two groups). With path compression and union by rank, both operations run in nearly O(1) amortized time.

How Union Find Appears in Interviews

The trigger: “connected components,” “grouping,” or any problem where you need to dynamically merge sets and query connectivity. While BFS/DFS can also find connected components, Union Find is often cleaner when connections arrive incrementally (one edge at a time) or when you need to count the number of distinct groups.

The canonical interview problem is Accounts Merge: emails belonging to the same person form a group, and you merge groups as you discover shared emails. Number of Provinces and Redundant Connection are other common examples.
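A Union Find sketch with path compression, union by rank, and a component counter (class and attribute names are illustrative):

```python
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n
        self.components = n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path compression (halving)
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False                  # already in the same group
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra              # attach the shorter tree under the taller
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        self.components -= 1              # one successful merge = one fewer group
        return True
```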

Common Mistakes with Union Find

  • Not implementing path compression, which degrades performance from near-O(1) to O(n) in the worst case
  • Forgetting union by rank (always attach the smaller tree under the larger one)
  • Using Union Find when a simple BFS/DFS traversal would be simpler and equally efficient
  • Not tracking the number of connected components (decrement a counter on each successful union)

Practice Problems: Union Find

  • Accounts Merge
  • Number of Provinces
  • Redundant Connection
  • Graph Valid Tree

Time Complexity of Common Data Structures

Knowing the time complexity of every operation on every structure is a baseline expectation in coding interviews. Interviewers won’t always ask directly, but they’ll notice if you use a data structure in a way that contradicts its complexity profile.

| Data Structure | Access | Search | Insert | Delete | Notes |
| --- | --- | --- | --- | --- | --- |
| Array | O(1) | O(n) | O(n) | O(n) | O(1) append amortized; insert/delete shift elements |
| Hash Map | — | O(1) avg | O(1) avg | O(1) avg | O(n) worst case due to collisions |
| Hash Set | — | O(1) avg | O(1) avg | O(1) avg | Same as a hash map without values |
| Stack | O(n) | O(n) | O(1) | O(1) | Push/pop from top only |
| Queue (deque) | O(n) | O(n) | O(1) | O(1) | Enqueue/dequeue from the ends |
| Linked List | O(n) | O(n) | O(1)* | O(1)* | *At a known position; O(n) to find that position |
| Heap | O(1) top | O(n) | O(log n) | O(log n) | Min/max at top; heapify is O(n) |
| BST (balanced) | O(log n) | O(log n) | O(log n) | O(log n) | Degrades to O(n) if unbalanced |
| Trie | — | O(k) | O(k) | O(k) | k = length of the string |
| Union Find | — | O(α(n)) find | O(α(n)) union | — | α(n) ≈ O(1) amortized with path compression |

How to use this in interviews: when you propose a solution, state the complexity of the operations you’re using. “I’ll use a hash map for O(1) lookups, which gives O(n) total” is the kind of sentence interviewers want to hear. If you’re using Python, the Python Data Structures Companion covers the specific built-in tools and their performance characteristics.


Common Interview Patterns Using Data Structures

Data structures and algorithms aren’t separate topics. In practice, every coding interview problem combines a data structure with an algorithmic pattern. Recognizing these pairings is the single biggest unlock for interview performance.

| Pattern | Primary Data Structures | When to Use |
| --- | --- | --- |
| Two Pointers | Arrays, strings | Sorted input, pair-finding, partitioning |
| Sliding Window | Arrays, strings, hash maps | Contiguous subarray/substring optimization |
| BFS | Queues, graphs, trees | Shortest path, level-order, spreading problems |
| DFS | Stacks, graphs, trees | Connectivity, exhaustive search, backtracking |
| Top-K / Heap | Heaps, arrays | k-th largest/smallest, streaming data, merging sorted inputs |
| Interval Problems | Arrays, heaps | Overlapping ranges, scheduling, merging intervals |
| Monotonic Stack/Queue | Stacks, deques | Next greater/smaller element, histogram problems |
| Hash Map + Prefix Sum | Hash maps, arrays | Subarray sums, count of valid subarrays |
| Backtracking | Arrays, strings, graphs | Generating combinations, permutations, constraint satisfaction |
| Dynamic Programming | Arrays, hash maps | Overlapping subproblems, optimization, counting paths |
| Binary Search | Sorted arrays | Search space reduction, finding boundaries, min/max optimization |

The key insight: you don’t pick a data structure and then an algorithm. You read the problem constraints, recognize the pattern, and the data structure follows naturally. If a problem asks for the minimum number of steps, you’re probably doing BFS with a queue. If it asks for the k-th largest element in a stream, you’re using a min-heap.

For examples of what this pattern recognition looks like in a live interview, see our interview walkthrough series.


How Interviewers Evaluate Your Data Structure Knowledge

Understanding data structures is necessary but not sufficient. Interviewers watch how you use that knowledge live.

| Signal | What Interviewers Look For |
| --- | --- |
| Choosing the correct structure | Do you reach for the right tool without prompting? Do you explain why? |
| Explaining complexity | Can you state Big-O accurately and discuss tradeoffs? |
| Handling edge cases | Empty inputs, single elements, cycles, disconnected components? |
| Communicating reasoning | Do you think out loud, narrate decisions, and explain before coding? |
| Adapting when pushed | If the interviewer adds a constraint (“what if memory is limited?”), can you pivot? |
| Clean implementation | Readable code, meaningful variable names? |
| Knowing limits | Do you understand when your chosen structure breaks down? |

The most common interviewer feedback: “They had the right idea but couldn’t explain it.” Communication is part of the evaluation criteria, not a soft skill on the side.

To understand exactly how this evaluation plays out in a real interview, read our Anatomy of an AI Interview breakdown.


Common Data Structure Interview Mistakes

Even well-prepared candidates fall into predictable traps:

  1. Reaching for a familiar structure instead of the right one. Many candidates default to arrays or recursion because they’re comfortable, even when a hash map or heap would be cleaner. Ask yourself: “Is there a structure that makes this easier?”

  2. Over-engineering the solution. Not every problem needs a heap or a graph. Start simple and correct, then optimize.

  3. Ignoring time and space complexity. Writing a correct O(n²) solution when an O(n log n) one exists, without acknowledging the gap, suggests you don’t think about efficiency habitually.

  4. Coding in silence. Interviewers can’t evaluate your thinking if you don’t share it. Practice explaining your reasoning in mock interviews.

  5. Skipping edge cases. Empty arrays, single nodes, null inputs, cycles in graphs.

  6. Misusing built-ins. Using a Python list as a queue (O(n) for dequeue) instead of collections.deque (O(1)) is the kind of detail sharp interviewers notice.

  7. Going silent when stuck. Instead, verbalize: “I’m not immediately seeing how to avoid the nested loop. Let me think about what structure could help…” That shows far more potential than silence.


A Four-Week Data Structures Study Plan

Knowing which data structures to learn for coding interviews is one thing. Building fluency with them is another. This four-week plan builds skills in the right order, where each week builds on the previous one.

Week 1: Arrays and Hash Maps

Start here because arrays are universal and hash maps are the most frequently leveraged structure for improving brute-force solutions. Master two pointers, sliding window, and prefix sums. Practice using hash maps to convert O(n²) solutions to O(n).

Target problems: Two Sum, Best Time to Buy and Sell Stock, Longest Substring Without Repeating Characters, Group Anagrams, Subarray Sum Equals K.

Week 1 checkpoint: You can look at a new array problem and identify within two minutes whether it’s a two-pointer, sliding window, or prefix sum pattern.

Week 2: Linked Lists and Stacks

Linked lists build careful pointer reasoning, a skill that directly transfers to trees and graphs. Stacks introduce LIFO thinking and prepare you for more advanced patterns like monotonic stacks.

Target problems: Reverse Linked List, Merge Two Sorted Lists, Linked List Cycle, Valid Parentheses, Daily Temperatures, Min Stack.

Week 2 checkpoint: You can reverse a linked list in place without looking anything up, and you can identify when a stack is the right tool within 30 seconds.

Week 3: Trees and Tries

Trees are where recursion becomes essential. Spend this week building recursive intuition: defining problems in terms of smaller subproblems. Practice both DFS and BFS approaches on the same problems to understand when each is preferable. Tries fit naturally here since they’re tree structures. Implement a trie from scratch to reinforce how tree-based node traversal works.

Target problems: Maximum Depth of Binary Tree, Binary Tree Level Order Traversal, Lowest Common Ancestor, Validate BST, Invert Binary Tree, Implement Trie.

Week 3 checkpoint: For any tree problem, you can articulate “I need X from my left subtree and Y from my right subtree” before writing a line of code. You can implement a basic trie with insert, search, and startsWith.

Week 4: Graphs, Heaps, and Union Find

Graph problems require combining everything you’ve learned: DFS, BFS, visited sets, hash maps. Heaps round out your toolkit for top-k and streaming problems. Union Find adds a clean way to handle dynamic connectivity and grouping problems that would be awkward with BFS/DFS alone.

Target problems: Number of Islands, Course Schedule I & II, Clone Graph, Kth Largest Element, Find Median from Data Stream, Merge K Sorted Lists, Accounts Merge.

Week 4 checkpoint: You can model a new problem as a graph (identify the nodes and edges), choose between BFS, DFS, and Union Find, and implement cleanly with a visited set or disjoint set structure.

Why This Progression Works

Each week adds complexity while reinforcing prior skills. By Week 4, you’re not learning graphs in isolation: you’re applying DFS and BFS (from trees), visited tracking (from linked lists), and hash maps (from Week 1) all at once.

After completing this plan, revisit hard versions of earlier problems. Difficulty isn’t linear. Returning to “easy” categories with a sharper eye often reveals nuances you missed.

For a more detailed study plan with practice links for every problem, see The Grind 75 Pathway.


Frequently Asked Questions

Which data structures appear most in coding interviews?

Arrays and hash maps appear in roughly 60% of all coding interview problems. Trees (binary trees and BSTs) and graphs are the next most common, especially at the medium-to-hard difficulty range. Stacks, heaps, and linked lists round out the top tier. If you’re short on time, prioritize arrays, hash maps, and trees.

How many data structures should I learn for interviews?

Eight: arrays, hash maps, linked lists, stacks, queues, heaps, trees, and graphs. These cover the vast majority of interview questions at every company. Tries appear occasionally in string and autocomplete problems (see our Implement Trie walkthrough), and Union Find shows up in connected-component problems, but the eight core structures handle the majority of what you’ll encounter.

Do I need to implement data structures from scratch?

For most interviews, no. You’ll use your language’s built-in implementations (dict, list, deque in Python; HashMap, PriorityQueue in Java). The exceptions are linked lists (you often build the node class yourself) and LRU Cache (which tests whether you can compose a hash map with a doubly linked list). Know the time complexity of every operation on every structure you use.

What’s the fastest way to learn data structures for interviews?

Follow a structured study plan rather than grinding random problems. Start with arrays and hash maps (Week 1), then linked lists and stacks (Week 2), trees (Week 3), and graphs and heaps (Week 4). For each structure, solve 5-6 problems across different patterns, then do a mock interview to test yourself under pressure.

Should I learn algorithms separately from data structures?

No. Algorithms and data structures are inseparable in interviews. BFS is meaningless without a queue. Two pointers is meaningless without a sorted array. Study them together: learn a data structure, then immediately practice the algorithmic patterns that use it.

Which data structures are asked most in FAANG interviews?

Arrays and hash maps dominate across all FAANG companies. Trees (especially binary trees and BSTs) are heavily tested at Google and Meta for mid-level roles. Graphs appear frequently at Amazon and Google for senior roles. Heaps show up in streaming and system-design-flavored coding rounds. The distribution is roughly: arrays/hash maps (60%), trees (20%), graphs/heaps (15%), everything else (5%). See our How to Prepare for a Coding Interview guide for company-specific preparation strategies.

Are graphs important for coding interviews?

Yes, especially for mid-level and senior roles. Graph problems test deeper algorithmic thinking: modeling the problem correctly, choosing between BFS and DFS, handling cycles, and implementing visited sets. Many interview problems are graphs in disguise — grids, dependencies, transformations. If you’re targeting senior roles, graph fluency is non-negotiable. Start with Number of Islands and Course Schedule.


Data Structures Cheat Sheet for Coding Interviews

A compact reference for quick review. Pin this before your interview.

| Data Structure | When to Use | Key Pattern |
| --- | --- | --- |
| Array | Indexed access, sequential data, in-place manipulation | Two pointers, sliding window, prefix sums |
| Hash Map | O(1) lookup, frequency counting, deduplication | Two Sum variants, grouping, caching |
| Hash Set | Membership testing, removing duplicates | Existence checks, visited tracking |
| Stack | LIFO problems, matching, nesting, undo behavior | Valid parentheses, monotonic stack |
| Queue / Deque | FIFO problems, BFS, level-order traversal | Shortest path, spreading, sliding window max |
| Linked List | Sequential access, pointer manipulation | Reversal, cycle detection, merging |
| Heap | Top-k, streaming min/max, priority scheduling | Kth largest, merge k sorted, median |
| Tree | Hierarchical data, recursive subproblems | DFS traversal, BST properties, LCA |
| Graph | Connectivity, dependencies, grid problems | BFS/DFS, topological sort, components |
| Trie | Prefix matching, autocomplete, dictionary lookup | Prefix search, word building |
| Union Find | Dynamic grouping, connectivity queries | Connected components, cycle detection |
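Union find is the entry in this table candidates are least likely to have implemented. A minimal sketch with path compression and union by size (class and method names are illustrative):

```python
class UnionFind:
    """Disjoint-set union with path compression and union by size:
    near-O(1) amortized connectivity queries and dynamic grouping."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False  # already connected — the basis of cycle detection
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra            # attach smaller tree under larger
        self.size[ra] += self.size[rb]
        return True

uf = UnionFind(5)
uf.union(0, 1)
uf.union(1, 2)
# uf.find(0) == uf.find(2) → True; 3 remains in its own component
```

The `union` returning `False` on an already-connected pair is the idiom behind cycle detection in problems like Redundant Connection.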

Practice Under Real Interview Conditions

Reading solutions and understanding them is a completely different skill from producing solutions under interview conditions.

When you read an editorial, you have unlimited time, no one watching, and the answer already in front of you. A live interview is a different environment entirely. You’re expected to verbalize your thinking in real time, handle unexpected follow-ups, manage stress, and produce clean code without an IDE. That’s a performance skill, not just a knowledge skill.

The most effective way to build this fluency is through mock interviews: practice where you explain your reasoning out loud, respond to follow-ups, and get feedback on both your approach and your communication.

The candidates who get offers aren’t the ones who solved the most problems. They’re the ones who practiced explaining their data structure choices under pressure, who could articulate why a hash map is O(n) while nested loops are O(n²), and who stayed calm when the interviewer added a constraint they hadn’t seen before.
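The hash-map-versus-nested-loops trade-off reads, in its classic Two Sum form, something like this (a sketch — function names are mine):

```python
def two_sum_brute(nums, target):
    """O(n^2): nested loops check every pair of indices."""
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return [i, j]
    return []

def two_sum_hash(nums, target):
    """O(n): one pass — the hash map trades O(n) extra space for O(1) lookups."""
    seen = {}  # value -> index
    for i, v in enumerate(nums):
        if target - v in seen:
            return [seen[target - v], i]
        seen[v] = i
    return []

# Both return [0, 1] for nums = [2, 7, 11, 15], target = 9
```

Being able to narrate that trade — "I'm spending O(n) memory to avoid the inner loop" — out loud is exactly the under-pressure skill described above.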

Reading is preparation. Performance is the test. The sooner you start combining both, the more confident you’ll be when it counts.

Practice any data structure problem as a live mock interview on Intervu.


Browse All Interview Walkthroughs

Each walkthrough shows how a data structure problem unfolds in a real coding interview — from clarifying questions through brute force to optimization. Browse the full collection →

Arrays & Strings Two Sum · Best Time to Buy and Sell Stock · Contains Duplicate · Majority Element · Valid Palindrome · Valid Anagram · Ransom Note · Add Binary · Container With Most Water · 3Sum · Sort Colors · Product of Array Except Self · Spiral Matrix · Trapping Rain Water

Strings & Sliding Window Longest Palindrome · Longest Substring Without Repeating Characters · Find All Anagrams in a String · Longest Palindromic Substring · Minimum Window Substring · String to Integer (atoi)

Hash Maps LRU Cache · Time Based Key-Value Store

Linked Lists Reverse Linked List · Merge Two Sorted Lists · Linked List Cycle · Middle of the Linked List · Merge K Sorted Lists

Stacks Valid Parentheses · Min Stack · Implement Queue using Stacks · Evaluate Reverse Polish Notation · Largest Rectangle in Histogram · Basic Calculator

Queues & BFS Binary Tree Level Order Traversal · Rotting Oranges · 01 Matrix · Word Ladder

Heaps K Closest Points to Origin · Task Scheduler · Find Median from Data Stream · Merge K Sorted Lists

Trees Maximum Depth of Binary Tree · Invert Binary Tree · Diameter of Binary Tree · Balanced Binary Tree · Lowest Common Ancestor of a BST · Lowest Common Ancestor · Validate Binary Search Tree · Kth Smallest Element in a BST · Binary Tree Right Side View · Construct Binary Tree from Preorder and Inorder · Serialize and Deserialize Binary Tree · Minimum Height Trees

Graphs Flood Fill · Number of Islands · Clone Graph · Course Schedule · Meeting Rooms · Accounts Merge

Tries Implement Trie · Word Search

Binary Search Binary Search · First Bad Version · Search in Rotated Sorted Array · Insert Interval · Merge Intervals

Dynamic Programming Climbing Stairs · Maximum Subarray · Coin Change · Unique Paths · Word Break · Partition Equal Subset Sum · Maximum Profit in Job Scheduling

Backtracking Subsets · Permutations · Combination Sum · Letter Combinations of a Phone Number

Practice Like It's the Real Interview

Get instant feedback on your approach, communication, and code — powered by AI.

Start a Mock Interview →