Implement LRU Cache
Why LRU Cache Shows Up in Every FAANG Interview
The LRU Cache is among the most commonly asked data structure questions at Google, Meta, Amazon, and Microsoft. Not because caches are exotic — but because it tests whether you can combine two data structures to achieve O(1) time complexity for both reads and writes. That combination is the real challenge.
LRU stands for Least Recently Used. When the cache is full and a new entry needs space, you evict the entry that hasn't been accessed for the longest time. Simple idea, tricky implementation.
Imagine a bookshelf that only fits 5 books. Every time you read a book, you place it on the right end. When you buy a new book and the shelf is full, you remove the book on the far left — that's the one you haven't touched in the longest time. The shelf is your cache, "reading" is accessing, and "buying a new book" is inserting.
The Interface
Every LRU cache needs exactly two operations:
- `get(key)` — Return the value if the key exists, otherwise return -1. This also marks the key as "recently used."
- `put(key, value)` — Insert or update the key-value pair. If the cache is at capacity, evict the least recently used entry first.
Both operations must be O(1) time. That's the constraint that makes this interesting.
const cache = new LRUCache(3); // capacity = 3
cache.put(1, "a");
cache.put(2, "b");
cache.put(3, "c");
// Cache: 1→a, 2→b, 3→c (1 is least recent)
cache.get(1); // "a" — now 1 is most recent
// Cache: 2→b, 3→c, 1→a
cache.put(4, "d"); // Evicts key 2 (least recently used)
// Cache: 3→c, 1→a, 4→d
cache.get(2); // -1 (evicted)
Approach 1: The Classic — Doubly Linked List + HashMap
This is the textbook approach and what most interviewers expect. Two data structures working together:
- A HashMap (plain object or `Map`) gives O(1) lookup by key
- A doubly linked list maintains the access order, with O(1) insertion and removal
The linked list has two sentinel nodes — head and tail — that act as boundaries. The least recently used node sits right after head, and the most recently used sits right before tail.
class Node {
  constructor(key, value) {
    this.key = key;
    this.value = value;
    this.prev = null;
    this.next = null;
  }
}
Why store the key on the node? When evicting, you need to remove the node from the linked list and delete its entry from the HashMap. The node needs to know its own key so you can do that deletion.
Building the Cache
class LRUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map(); // key -> Node
    // Sentinel nodes: real data always lives between head and tail
    this.head = new Node(0, 0);
    this.tail = new Node(0, 0);
    this.head.next = this.tail;
    this.tail.prev = this.head;
  }

  // Unlink a node from the list in O(1)
  _remove(node) {
    node.prev.next = node.next;
    node.next.prev = node.prev;
  }

  // Insert a node just before the tail sentinel (most recent position)
  _addToEnd(node) {
    node.prev = this.tail.prev;
    node.next = this.tail;
    this.tail.prev.next = node;
    this.tail.prev = node;
  }

  get(key) {
    if (!this.map.has(key)) return -1;
    const node = this.map.get(key);
    this._remove(node);
    this._addToEnd(node); // a read counts as an access
    return node.value;
  }

  put(key, value) {
    if (this.map.has(key)) {
      this._remove(this.map.get(key));
    }
    const node = new Node(key, value);
    this._addToEnd(node);
    this.map.set(key, node);
    if (this.map.size > this.capacity) {
      const lru = this.head.next; // least recent sits right after head
      this._remove(lru);
      this.map.delete(lru.key); // this is why nodes store their key
    }
  }
}
Why Sentinel Nodes?
Without sentinel nodes (head and tail), every _remove and _addToEnd operation needs null checks: "Is this the first node? Is this the last node?" Sentinels eliminate all edge cases. The real data always lives between the sentinels, so you never have to worry about updating the list's head or tail pointers.
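To make the savings concrete, here is a sketch of what removal looks like without sentinels. The `removeWithoutSentinels` helper and its `list` wrapper (holding real `head`/`tail` pointers) are hypothetical names, just to illustrate the branches the sentinel version never needs:

```javascript
// Node removal WITHOUT sentinels: every boundary needs a null check,
// and the list's own head/tail pointers must be patched by hand.
function removeWithoutSentinels(list, node) {
  if (node.prev) node.prev.next = node.next;
  else list.head = node.next; // we removed the first node
  if (node.next) node.next.prev = node.prev;
  else list.tail = node.prev; // we removed the last node
}
```

Four branches for one unlink. With sentinels, `node.prev` and `node.next` are always real objects, so the two-line `_remove` above covers every case.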
Approach 2: The Modern Way — JavaScript Map
Here is something most candidates miss: JavaScript's Map already maintains insertion order. When you iterate a Map, entries come out in the order they were inserted. This means Map can serve as both the lookup table and the ordering mechanism.
The trick: when you access an existing key, delete it and re-insert it. This moves it to the end of the insertion order. The first key in the Map is always the least recently used.
class LRUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map();
  }

  get(key) {
    if (!this.map.has(key)) return -1;
    const value = this.map.get(key);
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }

  put(key, value) {
    if (this.map.has(key)) {
      this.map.delete(key);
    }
    this.map.set(key, value);
    if (this.map.size > this.capacity) {
      const lruKey = this.map.keys().next().value;
      this.map.delete(lruKey);
    }
  }
}
That's it. The entire LRU cache in about 20 lines.
Is Map delete-and-re-insert actually O(1)?
In V8, Map is backed by an ordered hash table: entries sit in a data array in insertion order, with hash buckets pointing into it (the spec requires insertion-order iteration). Both delete and set are amortized O(1). Fetching the first key via keys().next() is amortized O(1) as well — iteration skips the holes that deletions leave behind, and the table compacts them away when it rehashes. So this approach genuinely achieves amortized O(1) for both get and put. One caveat: the ECMAScript spec only requires sublinear average access for Map, so the O(1) figure is an engine implementation detail, not a language guarantee — though every major engine delivers it.
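You can observe the ordering behavior directly in a few lines:

```javascript
// Map iterates in insertion order; delete + set moves a key to the end
const m = new Map([["a", 1], ["b", 2], ["c", 3]]);

m.delete("a");
m.set("a", 1);
console.log([...m.keys()]); // ["b", "c", "a"]

// The first key is always the oldest insertion — the eviction candidate
console.log(m.keys().next().value); // "b"
```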
Comparing the Two Approaches
| Aspect | Doubly Linked List + HashMap | Map-based |
|---|---|---|
| Time complexity | O(1) get/put | O(1) amortized get/put |
| Lines of code | ~50 | ~20 |
| Memory overhead | Extra prev/next pointers per node | Minimal (Map handles it internally) |
| Interview preference | Most interviewers expect this | Shows deep JS knowledge |
| Production readiness | Explicit control over internals | Relies on V8 Map implementation details |
In a real interview, implement the doubly linked list version first — it shows you understand the underlying data structures. Then mention the Map-based approach as a "did you know" bonus. Interviewers love candidates who know both approaches.
Capacity Management and Edge Cases
A production-ready LRU cache needs to handle several edge cases that interviewers love to probe:
Updating an Existing Key
When put is called with a key that already exists, you must update the value and move the key to the most recent position. Both implementations handle this the same way: remove the old entry first, then re-insert.
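A short trace makes the update semantics concrete. This uses the Map-based implementation; the class body is repeated here so the snippet runs on its own:

```javascript
// Map-based LRU cache (same as above, repeated for a standalone run)
class LRUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return -1;
    const value = this.map.get(key);
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }
  put(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.capacity) {
      this.map.delete(this.map.keys().next().value);
    }
  }
}

const cache = new LRUCache(2);
cache.put(1, "a");
cache.put(2, "b");
cache.put(1, "A");   // update: value changes AND key 1 becomes most recent
cache.put(3, "c");   // at capacity, so this evicts key 2, not key 1
console.log(cache.get(1)); // "A"
console.log(cache.get(2)); // -1
```

If the update had not refreshed key 1's position, the final `put` would have evicted it instead of key 2.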
Capacity of 1
A cache with capacity 1 should work correctly. Every put evicts the previous entry. This is actually a great test case to verify your sentinel node logic is correct.
Capacity of 0
Depending on the problem constraints, you might need to handle this. If capacity is 0, every get returns -1 and every put should leave the cache empty. Conveniently, the insert-then-evict pattern in both implementations above handles this for free: the new entry is inserted, the size check trips immediately, and the entry that was just inserted is evicted.
// Test your implementation with edge cases
const cache = new LRUCache(1);
cache.put(1, "a");
cache.put(2, "b"); // Evicts key 1
cache.get(1); // -1
cache.get(2); // "b"
cache.put(3, "c"); // Evicts key 2
cache.get(2); // -1
cache.get(3); // "c"
Real-World Use Cases
LRU caches are everywhere in production systems. Here are patterns you will actually encounter:
API Response Caching
const apiCache = new LRUCache(100);

async function fetchUser(id) {
  const cached = apiCache.get(id);
  if (cached !== -1) return cached;
  const response = await fetch(`/api/users/${id}`);
  const user = await response.json();
  apiCache.put(id, user);
  return user;
}
Memoization with Bounded Memory
Standard memoization grows unboundedly. An LRU cache keeps memory under control:
function memoizeWithLRU(fn, capacity = 50) {
  const cache = new LRUCache(capacity);
  return function (...args) {
    const key = JSON.stringify(args);
    const cached = cache.get(key);
    if (cached !== -1) return cached;
    const result = fn.apply(this, args);
    cache.put(key, result);
    return result;
  };
}
Search Suggestion Caching
const suggestionCache = new LRUCache(20);

async function getSuggestions(query) {
  const cached = suggestionCache.get(query);
  if (cached !== -1) return cached;
  const results = await fetchSuggestions(query);
  suggestionCache.put(query, results);
  return results;
}
// "jav" -> cached
// "java" -> cached
// "javas" -> cached
// User clears and types "pyth" -> new fetch
// If cache is full, oldest query results get evicted
Browser Caches
The browser itself uses LRU-like strategies for its HTTP cache, DNS cache, and back-forward cache. Chrome's V8 uses similar eviction strategies for its code cache (compiled bytecode). When you clear "browsing data," you are essentially resetting these LRU caches.
The Interview Gameplan
When you get this question in an interview, here is the playbook:
1. Clarify the interface — ask about return types (null vs -1 for misses), key/value types, and thread-safety requirements
2. State the constraint — "Both get and put need to be O(1), so I will combine a HashMap for O(1) lookup with a doubly linked list for O(1) ordering"
3. Draw the sentinel pattern — sketch `head <-> data nodes <-> tail` and explain why sentinels simplify the code
4. Implement the Node class first — show the building block
5. Implement the helper methods — `_remove` and `_addToEnd` before `get` and `put`
6. Walk through an example — trace through 4-5 operations, showing the list state at each step
7. Discuss the Map alternative — bonus points for knowing JavaScript-specific optimizations
| What developers do | What they should do |
|---|---|
| Forgetting to update the ordering on `get` (treating `get` as read-only). LRU tracks access recency, not just insertion time; a `get` counts as an access. | Move the accessed node to the most recent position on every `get` |
| Not storing the key on the linked list node. Without it, you cannot delete the HashMap entry in O(1) during eviction. | Store both key and value on each node so eviction can delete from the HashMap |
| Using an array and shifting elements on access. `Array.shift()` and `Array.splice()` are O(n); a doubly linked list removes a node in O(1) with pointer manipulation. | Use a doubly linked list for O(1) removal and insertion |
| Checking capacity before inserting instead of after. Evicting before an update to an existing key can throw out a valid entry unnecessarily. | Insert or update first, then evict if size exceeds capacity |
1. LRU evicts the entry that has not been accessed for the longest time; both `get` and `put` count as access
2. O(1) `get` and `put` requires combining a HashMap (fast lookup) with a doubly linked list (fast ordering)
3. Sentinel nodes (dummy head and tail) eliminate all null-check edge cases in the linked list
4. Each linked list node must store its key so the eviction step can delete the HashMap entry
5. JavaScript's `Map` maintains insertion order: delete and re-insert moves a key to the end, enabling a 20-line LRU cache