Deoptimization & Stale Closures
The Cost of Being Wrong
TurboFan generates fast machine code by making bets on your code's behavior. It looks at feedback vectors and says: "this function always receives integers, so I'll emit integer-specialized machine code." The code is fast precisely because it skips all the generic checks. It's gambling on stability.
But JavaScript is dynamic. The 10,001st call might pass a string. When that happens, the specialized machine code can't handle it. V8 must deoptimize: discard the optimized code, reconstruct the interpreter frame, and resume execution in Ignition. And this is where it gets painful.
Deoptimization isn't just "going back to slow" — it's actively expensive:
- The optimized stack frame must be translated back to an Ignition-compatible frame
- All register values must be materialized into the correct bytecode register layout
- The function restarts from the deopt point in the interpreter
- V8 may blacklist the function from future optimization if it deopts repeatedly
What Triggers Deoptimization
Let's walk through the things that make V8 throw away its hard work.
Type Changes (The Most Common Trigger)
This is the one that gets people. TurboFan specializes code for observed types. Violating those types triggers a deopt:
function add(a, b) {
return a + b;
}
// 10,000 calls with integers → TurboFan emits integer Add
for (let i = 0; i < 10000; i++) add(i, i);
// Now pass a string → type guard fails → DEOPT
add("hello", "world");
The generated machine code has a guard: if typeof(a) !== SMI → deopt. When "hello" arrives, the guard fires.
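Conceptually, the emitted guard behaves like this sketch (plain JavaScript mimicking the machine-code check; specializedAdd, genericAdd, and deoptCount are illustrative names, not V8 internals):

```javascript
// Conceptual sketch, NOT real V8 code: what the guard in the
// optimized code effectively does.
function genericAdd(a, b) {
  return a + b; // generic path: handles numbers, strings, objects
}

let deoptCount = 0;
function specializedAdd(a, b) {
  // Type guard: both operands must be integers, or we bail out.
  if (!Number.isInteger(a) || !Number.isInteger(b)) {
    deoptCount++;            // record the "deopt"
    return genericAdd(a, b); // fall back to the generic path
  }
  return (a + b) | 0; // integer-specialized fast path
}

console.log(specializedAdd(1, 2));     // 3 (fast path)
console.log(specializedAdd("a", "b")); // "ab" (guard fired)
console.log(deoptCount);               // 1
```

The real guard is a single machine instruction checking the SMI tag bit, which is exactly why the fast path is fast: there is no branch-heavy generic logic in it.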
Hidden Class Mismatches
Property access optimized for one Map encountering a different Map:
function getX(point) {
return point.x; // Optimized for Map M2
}
const p1 = { x: 1, y: 2 }; // Map M2
const p2 = { x: 1, y: 2, z: 3 }; // Map M5 (different!)
for (let i = 0; i < 10000; i++) getX(p1); // TurboFan optimizes for M2
getX(p2); // Map guard fails → DEOPT
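A hedged fix (makePoint is an illustrative name): create every point through one factory that declares all properties up front, so each object gets the same hidden class and getX stays monomorphic:

```javascript
// Same property names, same initialization order → same hidden class
// for every point, whether or not z is meaningful.
function makePoint(x, y, z = 0) {
  return { x, y, z };
}

function getX(point) {
  return point.x; // only ever sees one Map
}

const a = makePoint(1, 2);
const b = makePoint(1, 2, 3);
console.log(getX(a), getX(b)); // 1 1
```

The cost is one extra (usually zero-valued) property per object; the benefit is that the Map guard in getX never fails.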
The arguments Object
Using the arguments object in certain ways prevents optimization entirely or causes deopt:
function leaky() {
// Passing 'arguments' to another function prevents optimization
return Array.prototype.slice.call(arguments);
}
function better(...args) {
// Rest parameters are optimizable — TurboFan understands them
return args.slice();
}
The arguments object is an "exotic object" in the spec with special semantics (it aliases named parameters in sloppy mode). V8 can sometimes optimize simple uses like arguments[0] and arguments.length, but passing arguments to another function, leaking it into a closure, or calling arguments[Symbol.iterator] forces V8 to materialize the full arguments object — and often triggers deoptimization.
Other Deoptimization Triggers
- eval() or with statement: Makes scope chain unpredictable — V8 can't inline or specialize
- for-in on objects with prototype changes: Enumeration cache invalidated
- Changing an object's prototype after construction: Object.setPrototypeOf() or writing __proto__ invalidates all Maps in the prototype chain
- try-catch in hot code: historical — modern V8 handles this better, but catch blocks are still not optimized as aggressively
- Generator/async function yield points: State machine transitions limit optimization scope
Eager vs Lazy Deoptimization
Not all deopts are created equal. V8 has two strategies, and understanding the difference matters for debugging:
Eager Deoptimization
Triggered immediately when a type guard fails at the current instruction. The deopt happens right where the violation occurs.
function compute(x) {
return x * 2; // Type guard: is x a SMI?
}
// If x is suddenly a string, eager deopt happens at the multiply
Eager deopt is straightforward: the guard fires, V8 reconstructs the interpreter frame at that exact bytecode position, and resumes interpretation.
Lazy Deoptimization
Triggered when something changes in the environment that invalidates optimized code that's not currently running. The deopt is deferred until the function is next entered.
const config = { multiplier: 2 };
function compute(x) {
return x * config.multiplier; // Inlined constant: 2
}
// TurboFan inlines config.multiplier as the constant 2
for (let i = 0; i < 50000; i++) compute(i);
// Later: someone changes the config
config.multiplier = 3;
// V8 marks compute's optimized code as invalid (lazy deopt)
// Next call to compute() triggers the actual deoptimization
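One way to sidestep this particular lazy deopt (a sketch, not the only fix): pass the value as an argument, so the optimized code never depends on mutable shared state:

```javascript
// The multiplier is threaded through as a parameter instead of being
// read from a mutable object, so compute's optimized code has no
// dependency on config. Mutating config invalidates nothing.
function compute(x, multiplier) {
  return x * multiplier;
}

const config = { multiplier: 2 };
let total = 0;
for (let i = 0; i < 50000; i++) total = compute(i, config.multiplier);

config.multiplier = 3; // no dependent optimized code to invalidate
console.log(total, compute(10, config.multiplier)); // 99998 30
```

The trade-off: compute can no longer inline the multiplier as a constant, but it also never has to be thrown away when the configuration changes.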
How Lazy Deopt Works Internally
V8 maintains dependency links between optimized code and the objects/Maps it depends on. When config.multiplier is inlined as a constant, V8 creates a dependency: "this optimized code depends on config retaining Map M and property multiplier having value 2."
When config.multiplier = 3 executes, V8 walks the dependency list and marks all dependent optimized code as invalid. The code isn't immediately decompiled — that would be expensive. Instead, the next time the function's entry is reached, V8 checks the "marked for deopt" flag and triggers the transition back to interpreted code.
This is why lazy deopt exists: eagerly decompiling all dependent functions whenever any property changes would be catastrophically expensive. Deferring the deopt to the next call amortizes the cost.
Deopt Loops: The Performance Cliff
This is the part that burns people in production. The worst case is a deopt loop: V8 optimizes a function, deopts it, re-optimizes it (with slightly different assumptions), deopts again, and repeats. Each cycle wastes compilation time and execution time.
function polymorphicAdd(a, b) {
return a + b;
}
// Phase 1: integers → optimize for SMI
for (let i = 0; i < 10000; i++) polymorphicAdd(i, i);
// Phase 2: strings → deopt, re-optimize for strings
for (let i = 0; i < 10000; i++) polymorphicAdd(`a${i}`, `b${i}`);
// Phase 3: back to integers → deopt again, re-optimize
for (let i = 0; i < 10000; i++) polymorphicAdd(i, i);
V8 has a deopt counter per function. After too many deopts (typically 5-10), V8 marks the function as "don't optimize" — it stays in Ignition/Sparkplug permanently. This is the performance cliff: the function is permanently relegated to interpreted execution.
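One way out of the loop above (a sketch; addInts and addStrings are illustrative names): split the polymorphic function into monomorphic variants so each call site only ever feeds one type:

```javascript
// Each variant only ever sees one operand type, so neither one
// accumulates conflicting type feedback or deopts.
function addInts(a, b) { return a + b; }    // number-only call sites
function addStrings(a, b) { return a + b; } // string-only call sites

let sum = 0;
for (let i = 0; i < 10000; i++) sum = addInts(sum, 1);

let s = "";
for (let i = 0; i < 3; i++) s = addStrings(s, "x");

console.log(sum, s); // 10000 xxx
```

The bodies are identical, but the feedback vectors are not shared: each function stays monomorphic and keeps its optimized code.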
Stale Closures: When Captured Assumptions Go Wrong
Now let's talk about a subtler problem. Closures capture variables from their enclosing scope. When TurboFan optimizes a closure, it may embed assumptions about the captured values. If those values change, the closure's optimized code becomes stale.
function createMultiplier(factor) {
// TurboFan may inline 'factor' as a constant if it's always the same value
return function multiply(x) {
return x * factor;
};
}
const double = createMultiplier(2);
// TurboFan optimizes multiply with factor=2 inlined
// Later: this creates a closure with factor=3
const triple = createMultiplier(3);
// If triple blindly reused code specialized for factor=2, it would produce wrong results.
// V8 prevents this (each closure carries its own context), but discarding the
// specialization and re-optimizing has a cost.
V8 handles this correctly — it won't produce wrong results. But the interaction between closures and optimization is subtle:
Closure Over Mutable Variables
function createCounter() {
let count = 0;
return {
increment() { count++; },
getCount() { return count; }
};
}
TurboFan cannot inline count as a constant because increment() mutates it. The closure variable lives in a Context object on the heap. Every access to count requires a context load — an extra pointer dereference. This is correct but slower than a local variable.
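To make the shared Context concrete: both returned closures read and write the same heap-allocated count slot, and each call to createCounter allocates a fresh one:

```javascript
// 'count' is context-allocated because two closures reference it.
// Both closures share one Context object per createCounter call.
function createCounter() {
  let count = 0;
  return {
    increment() { count++; },
    getCount() { return count; }
  };
}

const c1 = createCounter();
const c2 = createCounter(); // a separate Context
c1.increment();
c1.increment();
console.log(c1.getCount()); // 2
console.log(c2.getCount()); // 0, independent Context
```

This is the semantics you want from a closure, but every increment is a load and store through a heap object rather than a register.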
Closure Capturing Large Scopes
function processData(hugeArray) {
const processed = hugeArray.map(transform);
const summary = computeSummary(processed);
// This closure captures the entire scope — including hugeArray
return function getSummary() {
return summary;
};
}
The returned closure only uses summary, but JavaScript closures capture the entire lexical scope — hugeArray, processed, and summary are all kept alive. V8 performs scope analysis and can sometimes eliminate unused captures in optimized code, but the garbage collector still sees the full scope chain. The hugeArray won't be collected until getSummary is unreachable.
Fix: extract only what the closure needs:
function processData(hugeArray) {
const processed = hugeArray.map(transform);
const summary = computeSummary(processed);
return createGetter(summary); // hugeArray and processed are now GC-eligible
}
function createGetter(value) {
return function getSummary() { return value; };
}
Observing Deoptimization with --trace-deopt
V8 provides the --trace-deopt flag to see exactly when and why deoptimization happens:
node --trace-deopt your-script.js
Output:
[deoptimizing (DEOPT eager): begin ... reason: wrong map]
;;; deoptimize at: <source position>
;;; input: v8::internal::Map 0x...
;;; expected: v8::internal::Map 0x...
Key fields:
- DEOPT eager/lazy: whether the deopt is immediate or deferred
- reason: what assumption failed (wrong map, wrong type, not a SMI, division by zero, etc.)
- source position: where in your code the deopt was triggered
For more detail, combine with --trace-opt:
node --trace-opt --trace-deopt script.js 2>&1 | grep -E "(optimizing|deoptimizing)"
This shows the optimization/deoptimization cycle for each function.
Preventing Deoptimization
The good news? Most deoptimizations are completely avoidable once you know what to watch for.
1. Consistent Types
The single most important rule — and honestly, if you only take one thing from this entire chapter, make it this one. Functions should always receive the same types:
// BAD: type changes over time
function process(value) { return value * 2; }
process(42); // SMI
process(3.14); // HeapNumber → deopt
process("hello"); // String → deopt again
// GOOD: stable types
function processInt(n) { return n * 2; } // always numbers
function processStr(s) { return s.repeat(2); } // always strings
2. Avoid Late Property Addition
Adding properties after construction creates shape changes:
// BAD: shape changes after optimization
const obj = { x: 1 };
// ... 10,000 uses of obj.x optimized for this shape
obj.z = 3; // Shape change → invalidates all dependent optimized code
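A hedged fix (the class name is illustrative): declare every property at construction time, even optional ones, so the shape never transitions afterwards:

```javascript
// Every property is assigned in the constructor, so the object's
// hidden class is fixed the moment construction finishes.
class Point3 {
  constructor(x, y, z = 0) {
    this.x = x;
    this.y = y;
    this.z = z; // declared up front, even when often zero
  }
}

const p = new Point3(1, 2);
p.z = 3; // write to an existing property: no shape transition
console.log(p.x, p.y, p.z); // 1 2 3
```

Writes to existing properties are cheap; it is only *adding* a property that forces a Map transition.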
3. Don't Change Prototypes
Object.setPrototypeOf() is a nuclear option that invalidates all dependent optimized code:
// NEVER do this in performance-critical code
Object.setPrototypeOf(myObj, newProto); // Global deopt cascade
4. Avoid Polymorphic Functions
Each function should handle one type of input. Dispatch at the call site, not inside the hot function.
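Dispatching at the call site can be sketched like this (handle, doubleNumber, and doubleString are illustrative names): the typeof check lives in the caller, so each hot function sees exactly one input type:

```javascript
// Monomorphic workers: each only ever receives one type.
function doubleNumber(n) { return n * 2; }
function doubleString(s) { return s + s; }

// The polymorphism is confined to the dispatcher, which stays cheap.
function handle(value) {
  return typeof value === "number"
    ? doubleNumber(value)
    : doubleString(value);
}

console.log(handle(21));   // 42
console.log(handle("ab")); // "abab"
```

The dispatcher itself is polymorphic, but it is a single branch; the type-specialized work happens in functions whose feedback never degrades.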
Key Rules
1. Deoptimization discards optimized machine code and falls back to the interpreter. It's expensive: frame translation, register materialization, re-execution overhead.
2. Type changes are the most common trigger. A function optimized for integers that receives a string will deopt.
3. Eager deopt fires immediately at a failed type guard. Lazy deopt marks code as invalid and triggers on next entry.
4. Deopt loops (optimize → deopt → re-optimize → deopt) lead to permanent de-optimization — V8 gives up after ~5-10 deopts.
5. Closures capture the entire lexical scope. Unused variables in the parent scope are still retained until the closure dies.
6. Captured variables live in heap-allocated Context objects, making them slower to access than stack-local variables.
7. Use --trace-deopt and --trace-opt to observe the optimization lifecycle and identify deopt triggers.
8. Prevention: consistent types, no late property addition, no prototype changes, no polymorphic hot functions.