Stack, Heap, and Memory Layout
Your Code Lives in Two Places
Every variable you create, every object you build, every function you call — it all lives somewhere in memory. Not in some abstract cloud. In actual bytes, at actual addresses, managed by actual allocation strategies.
JavaScript hides this from you. There is no malloc, no pointer arithmetic, no manual deallocation. But the engine still has to make the same decisions that C programmers make: where does this data go? How much space does it need? When can I reclaim it?
The answer depends on two memory regions with fundamentally different properties: the stack and the heap.
If you don't understand the difference, you won't understand why closures have a memory cost, why deep recursion crashes your tab, or why some code allocates ten times more garbage than equivalent code that looks nearly identical.
The Stack: Fast, Rigid, Automatic
Think of the stack like a spring-loaded plate dispenser in a cafeteria. You can only add plates to the top, and you can only remove the top plate. This Last-In-First-Out discipline makes it blazingly fast — the CPU just moves a pointer up or down. No searching, no fragmentation, no garbage collection.
The call stack stores execution contexts — one frame per function call. Each frame contains:
- Local primitives — small integers (Smis in V8), booleans, undefined, and null; note that non-Smi numbers are boxed as heap-allocated HeapNumbers, with only the reference on the stack
- References to heap objects — the pointer lives on the stack, the object lives on the heap
- The return address — where to continue execution when this function finishes
- Arguments — the values passed into the function
function multiply(a, b) {
const result = a * b; // 'result' is a number — lives on stack
return result;
}
function calculate() {
const x = 10; // stack: x = 10
const y = 20; // stack: y = 20
const answer = multiply(x, y); // new frame pushed
return answer;
}
calculate();
When calculate calls multiply, a new stack frame is pushed. When multiply returns, its frame is popped — instantly. No GC needed, no deallocation logic. The stack pointer just moves back.
Stack Overflow Is Literally Running Out of Stack
The stack has a fixed size — typically around 1MB in V8. Blow past it and you get the famous RangeError: Maximum call stack size exceeded:
function recurse() {
return recurse(); // each call adds a frame, nothing ever pops
}
recurse(); // RangeError after ~10,000-15,000 frames
This is not a bug in your logic (well, it is). It is a physical memory limit. Each frame takes space, and the stack cannot grow.
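You can estimate that limit empirically. The sketch below is illustrative; the exact count depends on frame size, engine version, and flags.

```javascript
// Recurse until the engine throws, counting frames on the way down.
// The RangeError is caught at the deepest frame, and the accumulated
// count unwinds back up through the returns.
function measureDepth() {
  try {
    return 1 + measureDepth();
  } catch (e) {
    return 1; // hit the stack limit here
  }
}

console.log(measureDepth()); // typically ~10,000-15,000 in V8
```

Note that functions with more locals and arguments use bigger frames, so they hit the limit sooner.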
Why doesn't the stack just grow dynamically?
It could — some languages do allow it. But fixed-size stacks give the CPU enormous advantages: stack pointer arithmetic is a single instruction, stack frames have predictable layouts, and cache locality is excellent because frames are allocated contiguously. Dynamic stacks would require bounds checking on every frame push, reallocating and copying on overflow, and potentially fragmenting memory. The performance trade-off isn't worth it for the rare case of deep recursion. Tail call optimization (TCO) is the proper solution, but V8 removed its TCO implementation due to complications with debugging and stack traces.
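Since V8 no longer performs TCO, deep self-recursion has to be restructured by hand. One common workaround is a trampoline. The sketch below is illustrative; the trampoline helper is our own, not a V8 or standard API.

```javascript
// A minimal trampoline: instead of recursing, the function returns
// a thunk (a zero-argument function) describing the next step.
// The loop "bounces" through thunks, so stack depth stays constant.
function trampoline(fn) {
  return (...args) => {
    let result = fn(...args);
    while (typeof result === 'function') {
      result = result(); // each bounce pops its frame before the next pushes
    }
    return result;
  };
}

// Written as plain recursion, sumTo(1_000_000) would overflow the
// call stack; as a trampoline it runs in constant stack space.
const sumTo = trampoline(function step(n, acc = 0) {
  if (n === 0) return acc;
  return () => step(n - 1, acc + n);
});

console.log(sumTo(1_000_000)); // 500000500000
```

The trade-off is an extra closure allocation per step, which is heap garbage; for genuinely deep workloads, a plain loop with an explicit array-based stack is usually both faster and clearer.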
The Heap: Flexible, Complex, Garbage-Collected
Everything that isn't a primitive or a stack frame goes on the heap: objects, arrays, functions, closures, strings longer than a certain threshold, Maps, Sets, Promises, class instances — all of it.
function createUser(name) {
// This object is allocated on the heap
// The variable 'user' on the stack holds a pointer to it
const user = {
name: name,
scores: [100, 95, 87], // the array is also on the heap
greet() { // the function is also on the heap
return `Hi, I'm ${this.name}`;
}
};
return user;
}
const alice = createUser('Alice');
// createUser's stack frame is gone
// but the object persists on the heap because 'alice' still references it
Unlike the stack, heap allocation is dynamic — objects can be any size and live for any duration. But this flexibility comes at a cost:
- Allocation is slower — the allocator must find a free region of the right size
- Deallocation is complex — the garbage collector must determine when objects are unreachable
- Fragmentation — allocated and freed regions intermix, leaving gaps
- No predictable layout — cache locality suffers compared to the contiguous stack
V8 doesn't use a general-purpose malloc for JavaScript objects. New objects are allocated in the Young Generation using a simple bump allocator — just increment a pointer. This is nearly as fast as stack allocation. The cost comes later, during garbage collection. We cover this in detail in the Generational GC topic.
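You can watch heap allocation happen from the outside. This is a rough sketch assuming a Node.js environment; process.memoryUsage() is a Node API, and the exact numbers depend on GC timing.

```javascript
// Allocate a burst of small objects and compare heapUsed before/after.
// Each object lands in the Young Generation via V8's bump allocator.
const before = process.memoryUsage().heapUsed;

const objects = [];
for (let i = 0; i < 100_000; i++) {
  objects.push({ index: i, label: 'item-' + i }); // heap allocations
}

const after = process.memoryUsage().heapUsed;
console.log(`~${((after - before) / 1024 / 1024).toFixed(1)}MB allocated`);
```

Because the objects array stays reachable, none of these allocations can be scavenged yet, so the delta roughly reflects the live burst.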
Why Closures Force Heap Allocation
This is the part most developers miss. When a function closes over a variable, that variable cannot live on the stack — because the stack frame where it was declared will be destroyed when the function returns.
function createCounter() {
let count = 0; // normally this would be stack-only
return function increment() {
count++; // but increment() outlives createCounter()
return count; // so 'count' must survive frame destruction
};
}
const counter = createCounter();
// createCounter's stack frame is gone
// but counter() still needs access to 'count'
counter(); // 1
counter(); // 2
V8 handles this with a Context object — a heap-allocated structure that holds the closed-over variables. The inner function gets a hidden reference to this Context, not to the stack frame.
This is why closures have a memory cost: they force variables from stack to heap. Not all variables in the enclosing scope — V8 is smart enough to only capture the ones actually referenced. But if a closure references even one variable, V8 creates a Context object.
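One observable consequence of the Context object: every closure created in the same scope shares it, so they all see the same live variable rather than private copies. A small sketch:

```javascript
// Both inner functions close over the same 'count' variable.
// V8 gives each a reference to one shared Context, so a mutation
// made through one closure is visible through the other.
function makeCounter() {
  let count = 0;
  return {
    increment: () => ++count,
    current: () => count,
  };
}

const c = makeCounter();
c.increment();
c.increment();
console.log(c.current()); // 2 — both closures read the same heap slot
```

Each call to makeCounter allocates a fresh Context, which is why separate counters stay independent.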
Closures can accidentally retain much more memory than the variables they use. If a function closes over a large scope and V8 cannot split the Context, the entire Context — including variables the closure doesn't reference — may stay alive. Avoid declaring large temporary variables in the same scope as long-lived closures.
V8's Actual Memory Layout
At a high level, V8 divides its heap into several spaces:
| Space | Purpose | GC Strategy |
|---|---|---|
| New Space (Young Generation) | Newly allocated objects | Scavenger (semi-space copying) |
| Old Space (Old Generation) | Objects that survived 2+ GC cycles | Mark-Sweep-Compact |
| Large Object Space | Objects > 256KB | Mark-Sweep (never moved) |
| Code Space | Compiled machine code (JIT) | Special handling |
| Map Space | Hidden class (Map) metadata | Mark-Sweep-Compact |
Most objects you create land in New Space first. If they survive garbage collection, they get promoted to Old Space. Large objects skip New Space entirely.
How V8 decides between stack and heap allocation
V8 performs escape analysis during compilation. If the compiler can prove that an object never escapes the function where it was created — meaning no reference to it is stored anywhere that outlives the function — it can allocate the object on the stack or even decompose it into individual stack slots (scalar replacement). In practice, escape analysis in V8 is conservative: if there's any doubt, the object goes to the heap. This is why patterns like returning objects from functions almost always result in heap allocation, even when the caller immediately destructures the result.
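Escape analysis itself isn't observable from JavaScript, but the shapes it cares about are easy to recognize. A sketch follows; whether V8 actually stack-allocates the first case is an internal, version-dependent detail.

```javascript
// Candidate for scalar replacement: the temporary {x, y} object never
// leaves distance(). No reference to it survives the call, so the
// optimizer MAY decompose it into plain stack values.
function distance(x1, y1, x2, y2) {
  const delta = { x: x2 - x1, y: y2 - y1 }; // does not escape
  return Math.sqrt(delta.x * delta.x + delta.y * delta.y);
}

// Escapes: the object is returned, so a reference outlives the frame.
// V8 must allocate it on the heap, even if the caller immediately
// destructures the result and discards the object.
function makePoint(x, y) {
  return { x, y }; // escapes via the return value
}

console.log(distance(0, 0, 3, 4)); // 5
const { x } = makePoint(1, 2);
console.log(x); // 1
```

Storing the object into an outer variable, passing it to an un-inlined function, or capturing it in a closure all count as escapes too.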
Stack vs Heap at a Glance
| Common mistake | What to do instead |
|---|---|
| Assuming all variables live on the stack: let obj = {} allocates the object on the heap; the stack holds only the pointer | Remember that only primitives and references live on the stack — objects always go to the heap |
| Treating closures as free: the captured variables must outlive the stack frame | Budget for closures forcing heap allocation of captured variables via Context objects |
| Using deep recursion instead of iteration: the call stack has a fixed size (~1MB), while an array-based stack can grow to heap limits | Convert deep recursion to iteration with an explicit stack (array) |
| Declaring large temporary arrays in the same scope as closures: the closure's Context may retain references to all variables in its scope | Isolate large temporaries in a separate inner scope or function |
Production Example: Closure Leak in Event Handlers
// BAD: This leaks because the closure captures 'hugeData'
function setupHandler(element) {
const hugeData = new Array(1_000_000).fill('x'); // ~8MB
const processedResult = hugeData.map(x => x.toUpperCase());
element.addEventListener('click', () => {
// Only needs processedResult, but hugeData may also be retained
// because both live in the same Context
console.log(processedResult.length);
});
}
// GOOD: Isolate the heavy computation
function setupHandler(element) {
const processedResult = computeResult();
element.addEventListener('click', () => {
// Only processedResult is in scope — hugeData was never here
console.log(processedResult.length);
});
}
function computeResult() {
const hugeData = new Array(1_000_000).fill('x');
return hugeData.map(x => x.toUpperCase());
// hugeData is now unreachable — GC can reclaim it
}
The fix is structural: move the temporary into a function that returns only what you need. The heavy data becomes unreachable as soon as that helper function returns.
1. The stack is fast and automatic — LIFO, fixed size, no GC needed. It stores primitives, references, and call frames.
2. The heap is flexible and GC-managed — dynamic size, variable lifetimes. It stores all objects, arrays, functions, and closures.
3. Closures force captured variables from stack to heap via Context objects. This is their hidden memory cost.
4. V8 uses escape analysis to decide allocation strategy, but it's conservative — when in doubt, objects go to the heap.
5. Never rely on deep recursion. Convert to iteration with an explicit stack data structure.
6. Isolate heavy temporaries from closure scopes to prevent accidental retention.