
Algorithms

A recruitment and knowledge question bank. Filter, search, and test your knowledge.


Explain Binary Search.

easy · search · binary-search · algorithm +1
Open question

Answer

Binary search finds a target in a sorted array/list by comparing with the middle element and discarding half of the range each step. It requires sorted input and runs in O(log n) time (iteratively using O(1) extra space).
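
A minimal iterative sketch in Python (assumes an ascending sorted list; the function name and test data are illustrative):

```python
def binary_search(arr, target):
    """Return the index of target in sorted arr, or -1 if absent."""
    left, right = 0, len(arr) - 1          # inclusive bounds
    while left <= right:
        mid = left + (right - left) // 2   # overflow-safe form in fixed-width languages
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1                 # discard the left half
        else:
            right = mid - 1                # discard the right half
    return -1

assert binary_search([1, 3, 5, 7, 9], 7) == 3
```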

QuickSort vs MergeSort?

medium · sorting · quicksort · mergesort +2
Open question

Answer

Both are divide‑and‑conquer sorts. QuickSort partitions around a pivot, is in‑place and very fast on average O(n log n), but has O(n²) worst‑case and is not stable. MergeSort splits and merges, is stable with guaranteed O(n log n) worst‑case, but needs O(n) extra memory and works well for linked lists/external sorting.

BFS vs DFS?

medium · graph · bfs · dfs +2
Open question

Answer

BFS explores a graph level by level using a queue and finds the shortest path in unweighted graphs. DFS explores as deep as possible using recursion/stack; it’s useful for topological sort, cycle detection and often uses less memory on wide graphs.
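
A minimal BFS sketch in Python computing shortest edge counts (the adjacency-list format and toy graph are assumptions for illustration):

```python
from collections import deque

def bfs_distances(graph, start):
    """Shortest edge counts from start in an unweighted graph (adjacency lists)."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()             # FIFO queue => level-by-level order
        for neighbor in graph[node]:
            if neighbor not in dist:       # first visit is along a shortest path
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return dist

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
assert bfs_distances(graph, "a")["d"] == 2
```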

What is Dijkstra's Algorithm?

hard · graph · shortest-path · dijkstra +1
Open question

Answer

Dijkstra’s algorithm computes shortest paths from a single source in a graph with non‑negative edge weights. It repeatedly picks the closest unvisited node (usually via a priority queue) and relaxes outgoing edges, achieving O((V+E) log V) time with a heap.
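
A compact Python sketch with `heapq` (the adjacency-list shape `graph[u] = [(v, w), ...]` is an assumed convention):

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source; graph[u] lists (v, weight >= 0) pairs."""
    dist = {source: 0}
    heap = [(0, source)]                   # (distance, node) priority queue
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):  # stale entry, node already settled cheaper
            continue
        for v, w in graph[u]:
            nd = d + w                     # relax edge u -> v
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": [("b", 4), ("c", 1)], "c": [("b", 2)], "b": []}
assert dijkstra(g, "a")["b"] == 3          # a -> c -> b beats a -> b
```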

What is Dynamic Programming?

hard · dynamic-programming · optimization · memoization
Open question

Answer

Dynamic programming solves a problem by solving smaller subproblems and saving their results so you don’t recompute them. Use it when subproblems overlap and the best solution can be built from best sub‑solutions (memoization/top‑down or a bottom‑up table).
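
A tiny bottom-up sketch using Fibonacci, the standard toy example (the choice of problem is mine, purely illustrative):

```python
def fib(n):
    """Bottom-up DP: each value is built once from saved smaller results."""
    if n < 2:
        return n
    prev, curr = 0, 1                      # the dp table compressed to two cells
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr     # reuse saved results, never recompute
    return curr

assert fib(10) == 55
```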

BFS vs DFS — what’s the difference and when to use which?

easy · bfs · dfs · graph +1
Open question

Answer

BFS explores level by level and finds the shortest path in an unweighted graph. DFS goes deep first and is handy for traversals, cycle detection, and topological-style problems.

What does Big-O describe?

easy · big-o · complexity · performance
Open question

Answer

Big-O describes how time or memory grows with input size n (the growth rate). It lets you compare algorithms independently of hardware and constant factors.

What is the two pointers technique?

easy · two-pointers · array · technique
Open question

Answer

You keep two indices (e.g., left/right) and move them based on a rule, often on a sorted array or a sliding window. It can reduce nested loops to O(n) in many problems.
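
An illustrative Python sketch: pair-with-target-sum on a sorted array, a classic use of the technique (the problem choice is mine):

```python
def pair_with_sum(arr, target):
    """Find two values summing to target in a sorted array; O(n), not O(n^2)."""
    left, right = 0, len(arr) - 1
    while left < right:
        s = arr[left] + arr[right]
        if s == target:
            return arr[left], arr[right]
        if s < target:
            left += 1                      # need a bigger sum
        else:
            right -= 1                     # need a smaller sum
    return None

assert pair_with_sum([1, 2, 4, 7, 11], 9) == (2, 7)
```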

What does it mean that a sort is stable, and why does it matter?

medium · sorting · stable-sort · algorithm
Open question

Answer

A stable sort keeps the relative order of elements with equal keys. It matters when you sort by multiple fields: sort by the secondary key first, then stably by the primary key, and equal primary keys keep their secondary order (e.g., sort by first name, then stably by last name).

Greedy vs dynamic programming — what’s the key difference?

medium · greedy · dynamic-programming · optimization
Open question

Answer

Greedy makes the best local choice at each step and never revisits it; it works only when the problem has the greedy-choice property. DP solves overlapping subproblems and reuses results (memoization/tabulation) to guarantee optimality.

Common binary search pitfalls — name one and how to avoid it.

medium · binary-search · bugs · overflow
Open question

Answer

Common bugs are inconsistent bounds (causing infinite loops) and integer overflow when computing the midpoint in fixed-width languages. Use `mid = left + (right - left) / 2` and be consistent with inclusive/exclusive ranges.

When can BFS replace Dijkstra’s algorithm?

hard · shortest-path · bfs · dijkstra +1
Open question

Answer

If all edges have equal weight (e.g., weight 1), BFS finds shortest paths. If weights are only 0 or 1, you can use 0–1 BFS with a deque; otherwise you need Dijkstra.
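
A 0–1 BFS sketch in Python (assumes `graph[u] = [(v, w), ...]` with w in {0, 1}):

```python
from collections import deque

def zero_one_bfs(graph, source):
    """Shortest paths when every edge weight is 0 or 1."""
    dist = {source: 0}
    dq = deque([source])
    while dq:
        u = dq.popleft()
        for v, w in graph[u]:
            nd = dist[u] + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                if w == 0:
                    dq.appendleft(v)       # 0-edges keep priority: front of deque
                else:
                    dq.append(v)           # 1-edges go to the back
    return dist

g = {"a": [("b", 0), ("c", 1)], "b": [("c", 0)], "c": []}
assert zero_one_bfs(g, "a")["c"] == 0      # a -> b -> c over two 0-edges
```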

What is a topological sort and when is it possible?

hard · topological-sort · dag · graph
Open question

Answer

It orders vertices so every directed edge u→v goes from earlier to later. It exists only for a DAG (directed acyclic graph); any cycle means no valid topological order.
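
A sketch of Kahn's algorithm in Python (assumes every vertex appears as a key in the adjacency dict):

```python
from collections import deque

def topo_sort(graph):
    """Kahn's algorithm: a topological order, or None if a cycle exists."""
    indegree = {u: 0 for u in graph}
    for u in graph:
        for v in graph[u]:
            indegree[v] += 1
    queue = deque(u for u in graph if indegree[u] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in graph[u]:
            indegree[v] -= 1
            if indegree[v] == 0:           # all prerequisites of v are placed
                queue.append(v)
    return order if len(order) == len(graph) else None  # leftovers => cycle

g = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
assert topo_sort(g) in (["a", "b", "c", "d"], ["a", "c", "b", "d"])
```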

What does Kadane’s algorithm solve?

hard · kadane · dynamic-programming · array
Open question

Answer

It finds the maximum sum of a contiguous subarray in O(n) time by tracking the best sum ending at the current position and the best overall.
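
A minimal Kadane sketch in Python (assumes a non-empty list):

```python
def max_subarray(nums):
    """Kadane: best sum ending here vs best sum seen anywhere, one pass."""
    best_ending_here = best_overall = nums[0]
    for x in nums[1:]:
        best_ending_here = max(x, best_ending_here + x)  # extend or restart
        best_overall = max(best_overall, best_ending_here)
    return best_overall

assert max_subarray([-2, 1, -3, 4, -1, 2, 1, -5, 4]) == 6  # [4, -1, 2, 1]
```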

KMP vs naive substring search — what’s the core idea of KMP?

hard · kmp · string-search · pattern-matching
Open question

Answer

KMP precomputes a prefix function (LPS) so when a mismatch happens, it shifts the pattern without re-checking characters. This makes search O(n+m) instead of O(n·m).

What is recursion and what is a base case?

easy · recursion · base-case · call-stack
Open question

Answer

Recursion is when a function solves a problem by calling itself on a smaller input. The base case is the stopping condition that does not recurse, so the calls eventually end.

What is memoization and when does it help?

medium · memoization · dynamic-programming · cache
Open question

Answer

Memoization caches results of a function for given inputs, so repeated calls reuse the cached value. It helps when you have overlapping subproblems (common in dynamic programming) and the same states are computed many times.
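
A quick illustration with Python's built-in `functools.lru_cache` (Fibonacci is the usual toy case):

```python
from functools import lru_cache

@lru_cache(maxsize=None)                   # caches results per distinct argument
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)         # overlapping subproblems hit the cache

assert fib(40) == 102334155                # instant; naive recursion would crawl
```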

What is Quickselect and what is its average time complexity?

medium · quickselect · partition · selection +1
Open question

Answer

Quickselect finds the k-th smallest element by partitioning like QuickSort, but recursing only into one side. Average time is O(n), worst case is O(n²) (bad pivots).
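
A non-in-place Python sketch, trading the usual in-place partition for clarity (names and test data are illustrative):

```python
import random

def quickselect(arr, k):
    """k-th smallest, 0-based. Partition like quicksort, recurse into one side."""
    pivot = random.choice(arr)             # random pivot guards against bad inputs
    less = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    if k < len(less):
        return quickselect(less, k)        # answer lies left of the pivot
    if k < len(less) + len(equal):
        return pivot                       # the pivot block contains it
    return quickselect(greater, k - len(less) - len(equal))

assert quickselect([7, 1, 5, 3, 9], 2) == 5   # 3rd smallest
```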

Why doesn’t Dijkstra work with negative edge weights, and what can you use instead?

hard · dijkstra · bellman-ford · negative-weights +1
Open question

Answer

Dijkstra assumes that once a node has the smallest known distance, it will never improve later — negative edges break this assumption. Use Bellman–Ford for negative weights (and it can detect negative cycles).

What is a monotonic stack and what kind of problems does it solve?

hard · stack · monotonic-stack · next-greater +1
Open question

Answer

A monotonic stack keeps elements in increasing or decreasing order. It’s used for “next greater/smaller element”, stock span, and histogram problems, often in O(n) time by pushing/popping each element at most once.
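
A next-greater-element sketch in Python (the `None` sentinel for "no greater element" is my convention):

```python
def next_greater(nums):
    """For each element, the next element to the right that is strictly greater."""
    result = [None] * len(nums)
    stack = []                              # indices whose values are decreasing
    for i, x in enumerate(nums):
        while stack and nums[stack[-1]] < x:
            result[stack.pop()] = x         # x is the answer for those indices
        stack.append(i)
    return result                           # each index pushed/popped once: O(n)

assert next_greater([2, 1, 5, 3]) == [5, 5, None, None]
```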

What is a loop invariant and why is it useful?

easy · correctness · invariant · loops
Open question

Answer

A loop invariant is a statement that is true before and after each loop iteration. It helps you prove correctness: if it holds initially, stays true each step, and implies the goal at the end, the algorithm is correct.

Top-down vs bottom-up dynamic programming — what’s the difference?

medium · dynamic-programming · memoization · tabulation
Open question

Answer

Top-down uses recursion with memoization (compute states on demand). Bottom-up fills a table iteratively from smaller subproblems to bigger ones. Both reuse results; choose based on clarity and memory/control needs.

Randomized pivot in QuickSort/Quickselect — why does it help?

medium · randomization · quicksort · quickselect +1
Open question

Answer

A random pivot makes it unlikely to hit adversarial inputs that cause worst-case O(n²). In practice it gives good expected performance and avoids patterns like already-sorted arrays producing bad pivots.

A* vs Dijkstra — what’s the difference and when is A* faster?

hard · a-star · dijkstra · heuristics +1
Open question

Answer

Dijkstra expands nodes by current distance. A* adds a heuristic `h(n)` (estimated remaining cost) and prioritizes `g(n)+h(n)`. With an admissible heuristic (never overestimates), A* is optimal and usually explores fewer nodes, so it can be faster.

Kruskal vs Prim for MST — how do they differ?

hard · mst · kruskal · prim +2
Open question

Answer

Kruskal sorts edges and adds the smallest ones that don’t create cycles (uses Union-Find). Prim grows a tree from a start node using a priority queue of edges. Both can be O(E log E) / O(E log V) depending on implementation.

Backtracking: what is it and when do you use it?

easy · backtracking · search · pruning +1
Open question

Answer

Backtracking is a depth-first search over a decision tree: you build a partial solution, and when it becomes invalid you undo the last choice and try another. It’s used for problems like permutations, N-Queens, Sudoku, and subset search with pruning.
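
A permutations sketch in Python showing the choose/explore/undo pattern:

```python
def permutations(items):
    """Generate all permutations by choosing, recursing, and undoing the choice."""
    result, current = [], []
    used = [False] * len(items)

    def backtrack():
        if len(current) == len(items):
            result.append(current[:])       # complete solution: record a copy
            return
        for i, x in enumerate(items):
            if used[i]:
                continue
            used[i] = True                  # choose
            current.append(x)
            backtrack()                     # explore
            current.pop()                   # un-choose (backtrack)
            used[i] = False

    backtrack()
    return result

assert len(permutations([1, 2, 3])) == 6
```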

What does amortized O(1) mean? Explain with dynamic array growth.

medium · amortized · complexity · dynamic-array +1
Open question

Answer

Amortized means “average cost per operation over a whole sequence”, even if some single operations are expensive. In a dynamic array, most appends are O(1), and once in a while you pay O(n) to resize/copy—spread across many appends it becomes O(1) amortized.

Counting sort: when can it be faster than O(n log n) sorting?

medium · counting-sort · sorting · stability +1
Open question

Answer

Counting sort is good when keys are integers in a small range 0..k. It runs in O(n + k) by counting occurrences and building the output, and it can be stable (useful as a building block for radix sort).
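
A stable counting-sort sketch in Python (assumes integer keys in 0..k):

```python
def counting_sort(nums, k):
    """Stable counting sort for integer keys in 0..k; O(n + k)."""
    count = [0] * (k + 1)
    for x in nums:
        count[x] += 1                       # histogram of key frequencies
    for i in range(1, k + 1):
        count[i] += count[i - 1]            # prefix sums = final positions
    output = [0] * len(nums)
    for x in reversed(nums):                # reversed scan keeps equal keys stable
        count[x] -= 1
        output[count[x]] = x
    return output

assert counting_sort([3, 1, 0, 3, 2], k=3) == [0, 1, 2, 3, 3]
```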

NP-hard vs NP-complete: what's the difference?

hard · complexity-theory · np-hard · np-complete +1
Open question

Answer

NP-complete problems are both in NP (a solution can be verified in polynomial time) and NP-hard. NP-hard means “at least as hard as NP problems” but it might not be in NP (e.g., optimization versions). If any NP-complete problem has a polynomial-time algorithm, then P = NP.

What does the Floyd–Warshall algorithm compute and what is its complexity?

hard · graphs · shortest-path · floyd-warshall +1
Open question

Answer

Floyd–Warshall computes shortest paths between all pairs of vertices. It runs in O(V³) time and uses O(V²) memory, so it’s practical mainly for smaller or dense graphs. It can handle negative edges, but not negative cycles.
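
A direct Python sketch (mutates the distance matrix in place; the 3-vertex matrix is a toy example):

```python
def floyd_warshall(dist):
    """All-pairs shortest paths; dist is an n x n matrix, inf where no edge."""
    n = len(dist)
    for k in range(n):                      # allow k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

INF = float("inf")
m = [[0, 3, INF],
     [INF, 0, 1],
     [2, INF, 0]]
assert floyd_warshall(m)[0][2] == 4         # path 0 -> 1 -> 2
```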

What is a prefix sum array and what does it speed up?

easy · prefix-sum · range-query · preprocessing
Open question

Answer

A prefix sum array stores sums up to each index. After O(n) preprocessing, you can answer range sum queries in O(1): sum[l..r] = prefix[r] - prefix[l-1] (treating prefix[-1] as 0). It’s also used for counting frequencies and quick “how many in a range” queries.
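
A small Python sketch using a leading zero so l = 0 needs no special case (an assumed convention, not the only one):

```python
from itertools import accumulate

def build_prefix(nums):
    """prefix[i] = sum of nums[0..i-1]; the leading zero avoids edge cases."""
    return [0] + list(accumulate(nums))

def range_sum(prefix, l, r):
    """Sum of nums[l..r] inclusive, answered in O(1) after O(n) preprocessing."""
    return prefix[r + 1] - prefix[l]

p = build_prefix([2, 4, 1, 3])
assert range_sum(p, 1, 3) == 8              # 4 + 1 + 3
```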

Sliding window: what is it and when is it better than nested loops?

medium · sliding-window · two-pointers · complexity
Open question

Answer

Sliding window keeps a moving range [l..r] and updates it in one pass. You expand `r` and move `l` to maintain a condition (e.g., sum <= X, at most K distinct). Many problems become O(n) instead of O(n²) because each pointer moves forward at most n times.
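
One illustrative instance in Python: longest window with sum <= limit, assuming non-negative numbers (the technique needs a monotone condition like this):

```python
def longest_subarray_sum_at_most(nums, limit):
    """Longest window with sum <= limit; assumes non-negative numbers."""
    left = window_sum = best = 0
    for right, x in enumerate(nums):
        window_sum += x                     # expand the window on the right
        while window_sum > limit:
            window_sum -= nums[left]        # shrink from the left to restore it
            left += 1
        best = max(best, right - left + 1)
    return best                             # each pointer moves at most n times

assert longest_subarray_sum_at_most([1, 2, 1, 0, 5], 4) == 4  # [1, 2, 1, 0]
```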

Rabin–Karp: what is a rolling hash and what is the main caveat?

medium · string · hashing · rabin-karp +1
Open question

Answer

A rolling hash lets you update the hash of a window when you shift by one character (remove left, add right). Rabin–Karp compares hashes to find candidate matches, then usually verifies the substring to avoid false positives. Caveat: hashes can collide, so without verification it’s not guaranteed correct.
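
A Rabin–Karp sketch in Python (the base and modulus are arbitrary illustrative choices; note the verification step on each hash hit):

```python
def rabin_karp(text, pattern, base=256, mod=10**9 + 7):
    """Index of the first occurrence of pattern in text, or -1."""
    n, m = len(text), len(pattern)
    if m > n:
        return -1
    high = pow(base, m - 1, mod)            # weight of the window's leftmost char
    ph = th = 0
    for i in range(m):                      # initial hashes of pattern and window
        ph = (ph * base + ord(pattern[i])) % mod
        th = (th * base + ord(text[i])) % mod
    for i in range(n - m + 1):
        # verify on a hash hit to rule out collisions (the main caveat)
        if ph == th and text[i:i + m] == pattern:
            return i
        if i < n - m:                       # roll: drop text[i], add text[i + m]
            th = ((th - ord(text[i]) * high) * base + ord(text[i + m])) % mod
    return -1

assert rabin_karp("abracadabra", "cad") == 4
```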

What is a monotonic queue and how does it solve sliding window max in O(n)?

hard · deque · monotonic-queue · sliding-window +1
Open question

Answer

A monotonic queue is a deque that keeps elements in decreasing (or increasing) order. For sliding window maximum, you pop smaller elements from the back when adding a new one, so the front is always the max. You also pop from the front when it leaves the window. Each element is added and removed at most once, so total time is O(n).
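
A sliding-window-maximum sketch in Python using a decreasing deque of indices:

```python
from collections import deque

def sliding_window_max(nums, k):
    """Max of each length-k window in O(n)."""
    dq, result = deque(), []
    for i, x in enumerate(nums):
        while dq and nums[dq[-1]] <= x:
            dq.pop()                        # smaller elements can never be the max
        dq.append(i)
        if dq[0] <= i - k:
            dq.popleft()                    # front index has left the window
        if i >= k - 1:
            result.append(nums[dq[0]])      # front of the deque is the window max
    return result

assert sliding_window_max([1, 3, -1, -3, 5, 3, 6, 7], 3) == [3, 3, 5, 5, 6, 7]
```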

Bitmask DP (subset DP): what is it and what is a typical complexity?

hard · dp · bitmask · subset +2
Open question

Answer

Bitmask DP uses a bitmask to represent a subset (e.g., which nodes are visited). A common form is dp[mask][i] = best result for subset `mask` ending at `i` (used in problems like TSP). Typical complexity is exponential, often O(n² · 2ⁿ) time and O(n · 2ⁿ) memory.

Greedy algorithms: what property makes a greedy choice correct?

medium · greedy · correctness · exchange-argument +1
Open question

Answer

A greedy algorithm is correct when the problem has the greedy-choice property and optimal substructure. Intuitively, you can prove that making the locally best choice can be exchanged into an optimal solution (exchange argument), so the local choice leads to a global optimum.

KMP string search: how does the LPS/prefix table avoid re-checking characters?

hard · kmp · string · pattern-matching +1
Open question

Answer

KMP precomputes the longest proper prefix that is also a suffix (LPS) for each pattern prefix. On mismatch, it shifts the pattern using the LPS value instead of moving the text pointer back, so characters in the text are not re-checked.
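
A prefix-function (LPS table) sketch in Python:

```python
def build_lps(pattern):
    """lps[i] = length of the longest proper prefix of pattern[:i+1]
    that is also a suffix; it drives the shift on mismatch."""
    lps = [0] * len(pattern)
    length = 0                              # length of the current matched prefix
    for i in range(1, len(pattern)):
        while length and pattern[i] != pattern[length]:
            length = lps[length - 1]        # fall back to a shorter border
        if pattern[i] == pattern[length]:
            length += 1
        lps[i] = length
    return lps

assert build_lps("ababaca") == [0, 0, 1, 2, 3, 0, 1]
```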

Bellman–Ford: when do you use it and what extra capability does it have over Dijkstra?

medium · bellman-ford · shortest-path · graphs +1
Open question

Answer

Use Bellman–Ford when you have negative edge weights. It can detect negative cycles by doing one extra relaxation round. The trade‑off is slower time complexity: O(V·E).
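
A minimal Bellman–Ford sketch in Python (returning None to signal a reachable negative cycle is my convention):

```python
def bellman_ford(n, edges, source):
    """Distances from source over vertices 0..n-1; edges = [(u, v, w)].
    Returns None if a negative cycle is reachable from source."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):                  # n-1 rounds suffice for simple paths
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w       # relax
    for u, v, w in edges:                   # one extra round: any improvement
        if dist[u] + w < dist[v]:           # here means a negative cycle
            return None
    return dist

assert bellman_ford(3, [(0, 1, 4), (0, 2, 5), (1, 2, -3)], 0) == [0, 4, 1]
```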

Kruskal vs Prim: what is the main difference and typical data structures?

medium · mst · kruskal · prim +2
Open question

Answer

Kruskal builds the MST by sorting edges and adding them if they don’t create a cycle (usually with DSU/Union‑Find). Prim grows the MST from a start node using a priority queue of edges. Kruskal is often good for sparse graphs; Prim is convenient when you have adjacency lists.
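
A Kruskal sketch in Python with a small Union-Find (path compression only; union by rank omitted for brevity):

```python
def kruskal(n, edges):
    """MST weight for vertices 0..n-1; edges = [(w, u, v)]."""
    parent = list(range(n))

    def find(x):                            # Union-Find root with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total = 0
    for w, u, v in sorted(edges):           # cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                        # different components: no cycle
            parent[ru] = rv
            total += w
    return total

assert kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]) == 7
```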

Quickselect: what is it used for and what is its expected complexity?

medium · quickselect · selection · partition +1
Open question

Answer

Quickselect finds the k‑th smallest/largest element without fully sorting the array. It uses partitioning like quicksort and runs in expected O(n) time, but can degrade to O(n²) in the worst case.

Binary search on answer (parametric search): when is it applicable?

medium · binary-search · parametric-search · monotonic +1
Open question

Answer

It applies when you can define a monotonic predicate over the answer space (e.g., “is there a solution with value ≤ X?”). Then you can binary search the minimal/maximal X that satisfies the predicate.
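
A generic Python sketch: find the smallest feasible value given a monotone predicate (the integer square-root toy use is illustrative):

```python
def min_feasible(lo, hi, feasible):
    """Smallest x in [lo, hi] with feasible(x) True, given feasible is
    monotone (False ... False True ... True)."""
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid                        # mid works: try smaller
        else:
            lo = mid + 1                    # mid fails: need bigger
    return lo

# Toy use: smallest integer whose square reaches 1000 (a monotone predicate).
assert min_feasible(0, 1000, lambda x: x * x >= 1000) == 32
```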

Floyd’s cycle detection (tortoise and hare): what does it detect and what are its time/space costs?

easy · cycle-detection · tortoise-hare · linked-list +1
Open question

Answer

It detects cycles in a linked list or any iterative sequence by moving one pointer twice as fast as the other. If they meet, there’s a cycle. It runs in O(n) time and O(1) extra space.
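
A minimal linked-list sketch in Python:

```python
class Node:
    def __init__(self, val):
        self.val, self.next = val, None

def has_cycle(head):
    """Floyd's tortoise and hare: O(n) time, O(1) extra space."""
    slow = fast = head
    while fast and fast.next:
        slow = slow.next                    # one step
        fast = fast.next.next               # two steps
        if slow is fast:                    # fast lapped slow inside a cycle
            return True
    return False                            # fast fell off the end: no cycle

a, b, c = Node(1), Node(2), Node(3)
a.next, b.next, c.next = b, c, a            # 1 -> 2 -> 3 -> back to 1
assert has_cycle(a) and not has_cycle(Node(0))
```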

Heap sort: what are its time complexity, space complexity, and stability?

medium · heapsort · sorting · complexity +1
Open question

Answer

Heapsort runs in O(n log n) time, uses O(1) extra space (in‑place), and is not stable (equal elements can change order).

A* search: how does the heuristic affect optimality?

hard · a-star · heuristic · shortest-path +1
Open question

Answer

A* uses f(n) = g(n) + h(n). If the heuristic h is admissible (never overestimates) and consistent, A* is optimal and explores fewer nodes than Dijkstra. If h overestimates, optimality is not guaranteed. With h = 0, A* reduces to Dijkstra.

Boyer–Moore majority vote: what does it solve and what’s the core idea?

medium · majority-vote · array · linear-time +1
Open question

Answer

It finds the element that appears more than n/2 times (if it exists). The idea is to cancel pairs of different elements; the remaining candidate is the majority. It runs in O(n) time and O(1) space.
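
A minimal Python sketch (the verification step is needed unless a majority element is guaranteed to exist):

```python
def majority_candidate(nums):
    """Boyer-Moore vote: O(n) time, O(1) space; returns a candidate only."""
    candidate, count = None, 0
    for x in nums:
        if count == 0:
            candidate = x                     # start a new run
        count += 1 if x == candidate else -1  # pair off differing elements
    return candidate

votes = [2, 2, 1, 2, 3, 2, 2]
cand = majority_candidate(votes)
assert cand == 2 and votes.count(cand) > len(votes) // 2  # verify the candidate
```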