
Data Structures

A base of recruitment and knowledge-check questions. Filter, search, and test your knowledge.

Topics

Difference between Array and LinkedList?

easy · array · linked-list · comparison · +1
Open question

Answer

An array (or ArrayList/dynamic array) stores elements next to each other in memory, so random access is O(1). Inserts/deletes in the middle are O(n) because elements must shift (and resizing may copy). A LinkedList stores nodes linked by pointers: inserts/deletes at a known node are O(1), but random access is O(n) and it uses extra memory.

Example of valid JSON structure

easy · json · format · example
Open question

Answer

Valid JSON is a text format for objects and arrays using {} and [], where keys and string values are in double quotes, and values can be only: string, number, boolean, null, array, or object. Example: { "name": "John", "age": 30, "roles": ["dev"], "active": true }.

Stack vs Queue?

easy · stack · queue · data-structure · +1
Open question

Answer

A stack is a LIFO structure where you push/pop at the top, used for call stacks, undo, and depth‑first search. A queue is FIFO where you enqueue at the tail and dequeue at the head, used for scheduling, buffering, and breadth‑first search.

How does a HashMap work internally?

medium · hashmap · hashing · collision · +1
Open question

Answer

A HashMap stores key–value pairs in buckets. It uses key.hashCode() to pick a bucket, and equals() to find the right key inside that bucket. Collisions are kept in a list/tree; when the load factor threshold is exceeded it resizes and rehashes to keep average lookups near O(1).
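
A minimal separate-chaining sketch in Java can make the bucket idea concrete. This is an illustration, not the JDK's actual HashMap; the class and its internals are invented for the example.

```java
import java.util.LinkedList;

// Minimal separate-chaining hash map sketch (illustrative, not the JDK implementation).
class SimpleHashMap<K, V> {
    private static class Entry<K, V> {
        final K key; V value;
        Entry(K key, V value) { this.key = key; this.value = value; }
    }

    private static final double LOAD_FACTOR = 0.75;
    private LinkedList<Entry<K, V>>[] buckets = newTable(16);
    private int size = 0;

    @SuppressWarnings("unchecked")
    private static <K, V> LinkedList<Entry<K, V>>[] newTable(int capacity) {
        return new LinkedList[capacity];
    }

    private int indexFor(Object key, int capacity) {
        return (key.hashCode() & 0x7fffffff) % capacity; // hashCode() picks the bucket
    }

    public V get(K key) {
        LinkedList<Entry<K, V>> bucket = buckets[indexFor(key, buckets.length)];
        if (bucket == null) return null;
        for (Entry<K, V> e : bucket) {
            if (e.key.equals(key)) return e.value;   // equals() finds the right key in the bucket
        }
        return null;
    }

    public void put(K key, V value) {
        int i = indexFor(key, buckets.length);
        if (buckets[i] == null) buckets[i] = new LinkedList<>();
        for (Entry<K, V> e : buckets[i]) {
            if (e.key.equals(key)) { e.value = value; return; } // update existing key
        }
        buckets[i].add(new Entry<>(key, value));     // collision: append to this bucket's list
        size++;
        if (size > buckets.length * LOAD_FACTOR) resize(); // too full: grow and rehash
    }

    private void resize() {
        LinkedList<Entry<K, V>>[] old = buckets;
        buckets = newTable(old.length * 2);
        for (LinkedList<Entry<K, V>> bucket : old) {
            if (bucket == null) continue;
            for (Entry<K, V> e : bucket) {           // rehash every entry into the new table
                int i = indexFor(e.key, buckets.length);
                if (buckets[i] == null) buckets[i] = new LinkedList<>();
                buckets[i].add(e);
            }
        }
    }
}
```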

What are Balanced Trees (e.g., AVL, Red-Black)?

hard · tree · binary-search-tree · balancing · +1
Open question

Answer

Balanced trees such as AVL or Red‑Black are self‑balancing binary search trees that keep height proportional to log n by performing rotations after inserts and deletes. This guarantees search/insert/delete in O(log n) even in the worst case.

What is a Set and when would you use it?

easy · set · deduplication · membership
Open question

Answer

A Set stores unique values (no duplicates). Use it for fast membership checks and deduplication (e.g., keep unique user IDs).

What is a priority queue?

easy · priority-queue · heap · big-o
Open question

Answer

A priority queue removes the element with the highest (or lowest) priority first, not the oldest. It’s commonly implemented with a heap, so insert/remove is O(log n).

What is a deque (double-ended queue)?

easy · deque · queue · ring-buffer
Open question

Answer

A deque lets you add and remove elements from both the front and the back. With a ring buffer or linked list, these operations are O(1).

Adjacency list vs adjacency matrix — when to use which?

medium · graph · adjacency-list · adjacency-matrix · +1
Open question

Answer

A matrix uses O(V^2) memory and gives O(1) edge checks, so it’s good for dense graphs. A list uses O(V+E) memory and is better for sparse graphs and iterating neighbors.
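
As a rough illustration, here are both representations of the same toy 4-vertex graph in Java (names and edges chosen just for the example).

```java
import java.util.ArrayList;
import java.util.List;

// Same undirected graph (edges 0-1, 0-2, 2-3) in matrix and list form.
public class GraphRepresentations {
    public static void main(String[] args) {
        int v = 4;

        // Adjacency matrix: O(V^2) memory, O(1) edge-existence check.
        boolean[][] matrix = new boolean[v][v];
        addMatrixEdge(matrix, 0, 1);
        addMatrixEdge(matrix, 0, 2);
        addMatrixEdge(matrix, 2, 3);
        System.out.println("edge 0-2? " + matrix[0][2]); // true, checked in O(1)

        // Adjacency list: O(V + E) memory, efficient neighbor iteration.
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < v; i++) adj.add(new ArrayList<>());
        addListEdge(adj, 0, 1);
        addListEdge(adj, 0, 2);
        addListEdge(adj, 2, 3);
        System.out.println("neighbors of 0: " + adj.get(0)); // [1, 2]
    }

    static void addMatrixEdge(boolean[][] m, int a, int b) { m[a][b] = true; m[b][a] = true; }

    static void addListEdge(List<List<Integer>> adj, int a, int b) {
        adj.get(a).add(b);
        adj.get(b).add(a);
    }
}
```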

What is a Trie and a common use case?

medium · trie · prefix · autocomplete
Open question

Answer

A trie stores strings by prefixes: each node represents a character. It enables fast prefix search/autocomplete; lookup is O(L), where L is the word length.
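
A minimal sketch of the idea, assuming lowercase-ASCII keys (class and method names are illustrative), showing why prefix lookup is O(L):

```java
// Minimal lowercase-ASCII trie with insert and prefix search.
class Trie {
    private static class Node {
        Node[] children = new Node[26]; // one slot per letter 'a'..'z'
        boolean isWord;
    }

    private final Node root = new Node();

    public void insert(String word) {
        Node node = root;
        for (char c : word.toCharArray()) {
            int i = c - 'a';
            if (node.children[i] == null) node.children[i] = new Node();
            node = node.children[i];
        }
        node.isWord = true;
    }

    // O(L) in the length of the prefix: just walk the path character by character.
    public boolean startsWith(String prefix) {
        Node node = root;
        for (char c : prefix.toCharArray()) {
            node = node.children[c - 'a'];
            if (node == null) return false;
        }
        return true;
    }
}
```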

How do hash tables handle collisions? (chaining vs open addressing)

medium · hash-table · collision · chaining · +1
Open question

Answer

With chaining, each bucket stores a list/tree of entries that share the same hash. With open addressing, collisions are resolved by probing other slots (linear/quadratic/double hashing). Chaining is simpler under higher load; open addressing is cache-friendly but needs a lower load factor.

Why is append to a dynamic array amortized O(1)?

hard · dynamic-array · amortized · resizing · +1
Open question

Answer

Most appends write into free capacity in O(1). Occasionally the array resizes and copies O(n) elements; spread over many appends, the average cost per append stays O(1) (amortized).
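
A sketch of the doubling strategy, using a simple int-only array (illustrative, not any particular library's implementation):

```java
// Sketch of a growable int array with capacity doubling.
class DynamicIntArray {
    private int[] data = new int[4];
    private int size = 0;

    public void append(int value) {
        if (size == data.length) {
            // Rare O(n) step: copy everything into a twice-as-large array.
            int[] bigger = new int[data.length * 2];
            System.arraycopy(data, 0, bigger, 0, size);
            data = bigger;
        }
        data[size++] = value; // common O(1) step: write into spare capacity
    }

    public int get(int index) { return data[index]; }
}
```

With doubling, the total copy work over n appends is at most n + n/2 + n/4 + ... < 2n, which is why the average cost per append stays O(1).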

AVL vs Red-Black tree — what’s the trade-off?

hard · avl · red-black · balancing · +1
Open question

Answer

AVL trees keep stricter balance, so lookups are often faster, but inserts/deletes may require more rebalancing. Red-Black trees relax the balance rules, making updates cheaper while still keeping height O(log n).

What is a Bloom filter and what trade-off does it make?

hard · bloom-filter · probabilistic · hashing · +1
Open question

Answer

A Bloom filter is a probabilistic set for membership tests. It can return false positives (might say “present” when it’s not), but it never returns false negatives. It’s very memory efficient and fast, but you can’t retrieve the original items.
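
A toy sketch of the idea; the two hash functions below are ad-hoc derivations from hashCode(), whereas a real filter would use more, better-distributed hashes sized to the expected element count and target false-positive rate.

```java
import java.util.BitSet;

// Toy Bloom filter with k = 2 illustrative hash functions.
class BloomFilter {
    private final BitSet bits;
    private final int size;

    BloomFilter(int size) {
        this.size = size;
        this.bits = new BitSet(size);
    }

    private int h1(String s) { return Math.floorMod(s.hashCode(), size); }
    private int h2(String s) { return Math.floorMod(s.hashCode() * 31 + 17, size); }

    public void add(String s) {
        bits.set(h1(s));
        bits.set(h2(s));
    }

    // false => definitely absent; true => probably present (false positives possible).
    public boolean mightContain(String s) {
        return bits.get(h1(s)) && bits.get(h2(s));
    }
}
```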

What problem does Union-Find (Disjoint Set Union) solve?

hard · union-find · dsu · kruskal · +1
Open question

Answer

Union-Find maintains disjoint sets with find(x) (representative) and union(a,b) (merge). It’s used for connectivity and Kruskal’s MST; with path compression + union by rank, operations are almost O(1) amortized.
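
A compact sketch with both optimizations (class and method names are illustrative):

```java
// Union-Find (Disjoint Set Union) with path compression and union by rank.
class UnionFind {
    private final int[] parent;
    private final int[] rank;

    UnionFind(int n) {
        parent = new int[n];
        rank = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i; // each element starts in its own set
    }

    public int find(int x) {
        if (parent[x] != x) parent[x] = find(parent[x]); // path compression
        return parent[x];
    }

    public boolean union(int a, int b) {
        int ra = find(a), rb = find(b);
        if (ra == rb) return false;                        // already connected
        if (rank[ra] < rank[rb]) { int t = ra; ra = rb; rb = t; }
        parent[rb] = ra;                                   // attach smaller-rank root under larger
        if (rank[ra] == rank[rb]) rank[ra]++;
        return true;
    }
}
```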

What is a Map (dictionary) and when would you use it instead of an array?

easy · map · dictionary · hashmap · +1
Open question

Answer

A Map stores values by key (key → value). Use it when you need fast lookup by an identifier (e.g., email → user) instead of by numeric index; a hash map is usually O(1) average for get/put.

What is an LRU cache and how can you implement it in O(1)?

medium · lru · cache · hashmap · +2
Open question

Answer

LRU (Least Recently Used) evicts the item that hasn’t been used for the longest time. A common O(1) design is: HashMap for key→node lookup + doubly linked list to move items to the front on access and evict from the tail.
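
In Java, one shortcut is to lean on LinkedHashMap, which already combines a hash map with a doubly linked list; with accessOrder = true and removeEldestEntry it behaves as an LRU cache. This is a sketch of that shortcut, not a replacement for the manual map-plus-list version interviewers may ask for.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// LRU cache built on LinkedHashMap (a hash map + doubly linked list under the hood).
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder = true: get() moves an entry to the "most recent" end
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the least recently used entry when over capacity
    }
}
```

With this design, get and put stay O(1) on average, and eviction is handled automatically on insert.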

What is a ring buffer (circular queue) and why is it useful?

medium · ring-buffer · queue · producer-consumer
Open question

Answer

A ring buffer is a fixed-size array used as a queue with head/tail pointers that wrap around (mod capacity). It’s useful for streams and producer/consumer queues because it avoids allocations and gives O(1) enqueue/dequeue.
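
A minimal int-only sketch (illustrative class; real implementations often use a power-of-two capacity and bit masking instead of the modulo):

```java
// Fixed-capacity ring buffer used as a FIFO queue; indices wrap around with modulo.
class RingBuffer {
    private final int[] data;
    private int head = 0;   // next element to dequeue
    private int tail = 0;   // next free slot
    private int size = 0;

    RingBuffer(int capacity) { data = new int[capacity]; }

    public boolean enqueue(int value) {
        if (size == data.length) return false;   // full
        data[tail] = value;
        tail = (tail + 1) % data.length;          // wrap around
        size++;
        return true;
    }

    public int dequeue() {
        if (size == 0) throw new IllegalStateException("empty");
        int value = data[head];
        head = (head + 1) % data.length;
        size--;
        return value;
    }
}
```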

What is a Fenwick tree (Binary Indexed Tree) used for?

hard · fenwick · bit · prefix-sum · +1
Open question

Answer

A Fenwick tree supports prefix sums and point updates in O(log n) with O(n) memory. Use it when you need many updates and queries like “sum of [0..i]” quickly (e.g., frequency/counting problems).
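
A standard sketch (0-based public API over a 1-based internal array; names chosen here):

```java
// Fenwick tree (Binary Indexed Tree): point update and prefix sum in O(log n).
class FenwickTree {
    private final long[] tree; // 1-based internally

    FenwickTree(int n) { tree = new long[n + 1]; }

    // Add `delta` at position i (0-based).
    public void update(int i, long delta) {
        for (int x = i + 1; x < tree.length; x += x & (-x)) tree[x] += delta;
    }

    // Sum of elements [0..i] (0-based).
    public long prefixSum(int i) {
        long sum = 0;
        for (int x = i + 1; x > 0; x -= x & (-x)) sum += tree[x];
        return sum;
    }
}
```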

What is a skip list and how does it compare to balanced trees?

hard · skip-list · linked-list · probabilistic · +1
Open question

Answer

A skip list is a layered linked-list structure where some nodes are promoted to higher levels randomly. It gives expected O(log n) search/insert/delete (like balanced trees), is simpler to implement, but its performance is probabilistic.

What is the heap property in a binary heap?

easy · heap · binary-heap · priority-queue · +1
Open question

Answer

In a min-heap, every node is ≤ its children (the smallest element is at the root). In a max-heap, every node is ≥ its children. This makes peek O(1) and insert/remove O(log n).

What is a segment tree and what is it good for?

medium · segment-tree · range-query · big-o
Open question

Answer

A segment tree stores aggregates over ranges (sum/min/max) and supports range queries and point updates in O(log n). It’s useful when you need many queries like “sum on [l..r]” with updates.

Hash table load factor — what is it and why does resizing happen?

medium · hash-table · load-factor · rehash · +1
Open question

Answer

Load factor is roughly `items / buckets`. When it gets too high, collisions increase and operations slow down. Resizing increases bucket count and rehashes entries to keep average performance near O(1).

B-tree vs binary search tree — why are B-trees common on disk?

hard · b-tree · bst · disk-io · +1
Open question

Answer

B-trees have a high branching factor, so they stay shallow and reduce disk I/O (fewer page reads). BSTs are pointer-heavy and deep; they work well in memory but are inefficient on disk compared to page-friendly B-trees.

What is a suffix array (or suffix tree) used for?

hard · strings · suffix-array · suffix-tree · +1
Open question

Answer

They are string data structures used for fast substring and pattern queries (e.g., “does pattern P appear in text T?”). They enable efficient searches and are used in text indexing, search, and bioinformatics.

Bitset/bitmap: what is it and when is it a good choice?

easy · bitset · bitmap · memory · +1
Open question

Answer

A bitset (bitmap) stores true/false flags as bits, so it is very memory efficient. Use it for fast membership on a small, dense range of integers (e.g., IDs 0..N) and for quick bit operations (AND/OR).
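
For example, with java.util.BitSet (the IDs below are made up for illustration):

```java
import java.util.BitSet;

// Tracking "seen" IDs in a dense range 0..N with one bit per ID.
public class BitsetDemo {
    public static void main(String[] args) {
        BitSet seen = new BitSet(1_000_000);
        seen.set(42);
        seen.set(999_999);

        System.out.println(seen.get(42));        // true  (fast membership check)
        System.out.println(seen.get(7));         // false
        System.out.println(seen.cardinality());  // 2 — number of set bits

        BitSet other = new BitSet();
        other.set(42);
        other.and(seen);                         // cheap bitwise AND of whole sets
        System.out.println(other.get(42));       // true
    }
}
```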

What is a rope (string rope) and why would you use it?

medium · rope · string · data-structure · +1
Open question

Answer

A rope represents a long string as a tree of smaller chunks. It can make concatenation and insert/delete in the middle cheaper than copying whole strings, at the cost of more complexity and usually slower random indexing.

Immutable/persistent data structures: what is structural sharing?

medium · immutable · persistent · structural-sharing · +1
Open question

Answer

A persistent (immutable) data structure returns a new version on “update” instead of changing in place. Structural sharing means the new version reuses most of the old nodes, so copying is cheap (useful for undo/history and safer concurrency).

What is a sparse table and what problems is it good for?

hard · sparse-table · rmq · preprocessing · +1
Open question

Answer

A sparse table precomputes answers for ranges of length 2^k, so you can answer some static range queries fast. For idempotent operations like min/max/gcd, queries are O(1) after O(n log n) preprocessing, but it does not support updates efficiently.

Cuckoo hashing: what is it and what trade-off does it make?

hard · hashing · cuckoo-hashing · hash-table · +1
Open question

Answer

Cuckoo hashing uses two (or more) hash functions, so each key can live in one of a few positions. Lookups are O(1) and simple, but inserts can trigger a chain of evictions; in rare cases you must rehash/resize to break a cycle.

Singly vs doubly linked list: when would you choose each?

easy · linked-list · singly · doubly · +1
Open question

Answer

A singly linked list stores only `next`, so it uses less memory and is simpler. A doubly linked list stores `prev` and `next`, which makes removing a known node and iterating backwards easier, but costs more memory and pointer updates. Choose singly when you only need forward traversal; choose doubly when you frequently remove nodes in the middle or need reverse traversal.

Why can a hash table resize cause latency spikes, and how can you mitigate it?

medium · hash-table · rehash · latency · +1
Open question

Answer

Resizing often means allocating a bigger table and rehashing/copying all entries, which is O(n) work done at once. That can create a pause and a latency spike. Mitigations: pick a good load factor, pre-size when you know the size, use incremental rehashing (move entries gradually), or use a concurrent map implementation that spreads the work.

Building a heap from an array: why can it be O(n), not O(n log n)?

medium · heap · heapify · complexity · +1
Open question

Answer

If you build a heap bottom-up (heapify), most nodes are near the leaves and move only a small distance. The total work across all nodes forms a decreasing series, which sums to O(n). Doing n inserts one-by-one is O(n log n), but bottom-up heapify is O(n).
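
A sketch of the bottom-up build for a min-heap (helper names chosen here):

```java
// Bottom-up heapify: turn an arbitrary array into a min-heap in O(n).
public class Heapify {
    static void buildMinHeap(int[] a) {
        // Leaves are already valid heaps; sift down from the last internal node up to the root.
        for (int i = a.length / 2 - 1; i >= 0; i--) siftDown(a, i);
    }

    static void siftDown(int[] a, int i) {
        int n = a.length;
        while (true) {
            int left = 2 * i + 1, right = 2 * i + 2, smallest = i;
            if (left < n && a[left] < a[smallest]) smallest = left;
            if (right < n && a[right] < a[smallest]) smallest = right;
            if (smallest == i) return;            // heap property holds here
            int t = a[i]; a[i] = a[smallest]; a[smallest] = t;
            i = smallest;                          // keep sifting the swapped element down
        }
    }

    public static void main(String[] args) {
        int[] a = {9, 4, 7, 1, 3, 8};
        buildMinHeap(a);
        System.out.println(a[0]); // 1 — the minimum ends up at the root
    }
}
```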

B-tree vs B+ tree: what is the practical difference and why do databases like B+ trees?

hard · b-tree · b-plus-tree · indexes · +1
Open question

Answer

In a B+ tree, all records (or pointers to records) are stored in the leaves, and internal nodes store only keys for routing. Leaves are usually linked, so range scans are very fast. This fits disk access well: high fan-out keeps the tree shallow, and linked leaves make ordered reads efficient.

What is a radix tree (Patricia trie) and when is it better than a normal trie?

hard · radix-tree · patricia · trie · +1
Open question

Answer

A radix tree is a compressed trie: chains of single-child nodes are merged into one edge labeled with a string segment. It saves memory and reduces pointer chasing compared to a normal trie, while keeping fast prefix lookups. It’s used for things like routing tables and prefix-based search.

What is a deque and when would you use it instead of a queue or stack?

easy · deque · queue · stack · +1
Open question

Answer

A deque (double-ended queue) lets you add and remove elements from both the front and the back. It’s useful when you need both stack- and queue-like operations, or for algorithms like sliding window/monotonic queues.
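
For instance, a sliding-window maximum using a monotonic deque, a classic pattern for this structure (the input array below is just an example):

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;

// Sliding-window maximum: each index is pushed and popped at most once, so the whole pass is O(n).
public class SlidingWindowMax {
    static int[] maxInWindows(int[] a, int k) {
        int[] result = new int[a.length - k + 1];
        Deque<Integer> deque = new ArrayDeque<>(); // holds indices; their values are decreasing front-to-back
        for (int i = 0; i < a.length; i++) {
            if (!deque.isEmpty() && deque.peekFirst() <= i - k) deque.pollFirst(); // drop index outside window
            while (!deque.isEmpty() && a[deque.peekLast()] <= a[i]) deque.pollLast(); // drop smaller values
            deque.offerLast(i);
            if (i >= k - 1) result[i - k + 1] = a[deque.peekFirst()]; // front = max of current window
        }
        return result;
    }

    public static void main(String[] args) {
        int[] windows = maxInWindows(new int[]{1, 3, -1, -3, 5, 3, 6, 7}, 3);
        System.out.println(Arrays.toString(windows)); // [3, 3, 5, 5, 6, 7]
    }
}
```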

Hash table collisions: what is the difference between separate chaining and open addressing?

medium · hash-table · collisions · chaining · +1
Open question

Answer

Separate chaining stores colliding keys in a bucket list (or tree). Open addressing keeps all entries in the table and probes alternative slots (e.g., linear/quadratic probing, double hashing). Chaining is simpler for deletes; open addressing can be more cache-friendly but is sensitive to load factor.

What is a trie and why is it good for prefix search/autocomplete?

medium · trie · prefix · autocomplete · +1
Open question

Answer

A trie stores strings as paths of characters; each edge represents a character. Prefix queries are efficient because all words with the same prefix share the same path, so lookup is O(L) where L is the key length. The trade‑off is higher memory usage compared to hashing.

What operations does a priority queue support and how is it typically implemented?

easy · priority-queue · heap · ordering · +1
Open question

Answer

A priority queue supports insert, peek (min/max), and extract (min/max). It’s typically implemented with a binary heap, giving O(log n) insert/extract and O(1) peek.

Adjacency list vs adjacency matrix: when do you choose each?

medium · graph · adjacency-list · adjacency-matrix · +1
Open question

Answer

Use an adjacency list for sparse graphs because it uses O(V + E) memory and iterates neighbors efficiently. Use an adjacency matrix for dense graphs or when you need O(1) edge‑existence checks; it costs O(V^2) memory.

What is a segment tree and what complexity does it give for range queries and updates?

hard · segment-tree · range-query · updates · +1
Open question

Answer

A segment tree is a binary tree over array segments. It supports range queries (sum/min/max) and point updates in O(log n) after an O(n) build. With lazy propagation it can handle range updates too.
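
A compact iterative sketch for range sums with point updates (illustrative class; the query range [left, right) is half-open):

```java
// Segment tree for range-sum queries with point updates; O(n) build, O(log n) query/update.
class SegmentTree {
    private final long[] tree;
    private final int n;

    SegmentTree(int[] values) {
        n = values.length;
        tree = new long[2 * n];                           // iterative, array-backed layout
        for (int i = 0; i < n; i++) tree[n + i] = values[i];
        for (int i = n - 1; i > 0; i--) tree[i] = tree[2 * i] + tree[2 * i + 1];
    }

    public void update(int index, int value) {            // set values[index] = value
        int i = index + n;
        tree[i] = value;
        for (i /= 2; i > 0; i /= 2) tree[i] = tree[2 * i] + tree[2 * i + 1];
    }

    public long query(int left, int right) {              // sum of [left, right)
        long sum = 0;
        for (int l = left + n, r = right + n; l < r; l /= 2, r /= 2) {
            if ((l & 1) == 1) sum += tree[l++];
            if ((r & 1) == 1) sum += tree[--r];
        }
        return sum;
    }
}
```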

Binary heap vs binary search tree: which operations are efficient?

medium · heap · bst · priority-queue · +1
Open question

Answer

A binary heap is great for min/max: peek is O(1) and insert/extract is O(log n). A (balanced) BST supports ordered traversal and search by key in O(log n). Heaps are not good for fast search or range queries.

Ordered map (TreeMap) vs HashMap: when would you choose an ordered map?

easy · map · treemap · hashmap · +1
Open question

Answer

Choose an ordered map when you need keys in sorted order or range queries (e.g., all keys between A and B). TreeMap provides O(log n) operations and ordering; HashMap gives O(1) average operations but no ordering.
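
For example, with java.util.TreeMap (the keys and values below are made up):

```java
import java.util.TreeMap;

// TreeMap keeps keys sorted, so ordered views and range queries come for free.
public class TreeMapDemo {
    public static void main(String[] args) {
        TreeMap<String, Integer> scores = new TreeMap<>();
        scores.put("alice", 10);
        scores.put("bob", 7);
        scores.put("carol", 12);
        scores.put("dave", 5);

        System.out.println(scores.firstKey());        // "alice" — smallest key
        System.out.println(scores.subMap("b", "d"));  // {bob=7, carol=12} — keys in ["b", "d")
        System.out.println(scores.ceilingKey("c"));   // "carol" — first key >= "c"
    }
}
```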

Sparse matrix representation: when would you use CSR/COO instead of a dense array?

medium · sparse-matrix · csr · coo · +1
Open question

Answer

Use sparse formats (CSR/COO) when most entries are zero. They store only non‑zero values and their positions, which saves memory and can speed up operations like matrix‑vector multiply. The trade‑off is slower random access and more complex updates.