Monomorphic → Polymorphic → Megamorphic
The Property Access Problem
Think about what obj.x actually asks of a JavaScript engine. In C, accessing a struct field is a single memory offset read. Done. But JavaScript objects are dynamic — properties can be added, removed, or changed at any time. x might be on the object, on the prototype, it might be a getter, it might not exist at all.
Without optimization, every property access would require: look up the hidden class, find x in the descriptor array, determine the offset, read the value. That's multiple pointer chases for every single .x. Do that in a tight loop and you're toast.
V8 solves this with inline caches (ICs) — and honestly, this is the most important optimization technique in any JavaScript engine. Understanding ICs is how you go from "my code is slow and I don't know why" to "I know exactly why."
Inline Caches: Remembering the Fast Path
The idea is beautifully simple. An inline cache is a per-operation memory cell that remembers the result of a previous lookup. The first time V8 executes obj.x, it performs the full lookup, finds that obj has Map M2 and x is at offset 0. It then writes this into the IC: "if the Map is M2, read offset 0."
On subsequent executions, V8 checks: does the object's Map match M2? If yes, skip the lookup entirely — just read offset 0. One comparison, one memory read. Nearly as fast as a C struct access.
function getX(obj) {
return obj.x; // IC site: remembers shape → offset mapping
}
const point = { x: 1, y: 2 }; // Map M2
getX(point); // IC miss → full lookup → cache {M2: offset 0}
getX(point); // IC hit → direct read at offset 0
getX(point); // IC hit → direct read at offset 0
Think of an inline cache as a sticky note on your desk. The first time your coworker asks where the stapler is, you check every drawer. Then you write a sticky note: "Stapler → top drawer." Next time someone asks, you glance at the note and answer instantly. But if every person asking about a "stapler" keeps it in a different drawer, your sticky note system breaks down — eventually you just have to check every time.
The Three IC States
Here's where you need to pay attention, because this is the mental model that explains most JavaScript performance mysteries. Every inline cache site transitions through three states based on how many different object shapes it encounters:
Monomorphic (1 shape) — The Fast Path
The IC has seen exactly one hidden class. V8 generates a single type guard:
if (obj.Map === M2) return obj[offset 0] // ~1-2 CPU cycles
else → slow path (IC miss)
This is the fastest possible property access in JavaScript. The type guard is a single pointer comparison, and the property read is a direct memory load at a known offset. In TurboFan-optimized code, the guard is a single cmp instruction.
function sumX(arr) {
let total = 0;
for (const obj of arr) {
total += obj.x; // If every object has the same Map → monomorphic
}
return total;
}
// All objects share one shape → obj.x stays monomorphic
const points = Array.from({ length: 10000 }, () => ({ x: 1, y: 2 }));
sumX(points); // Blazing fast
Polymorphic (2-4 shapes) — The Acceptable Path
The IC has seen 2-4 different hidden classes. V8 generates a linear search through cached entries:
if (obj.Map === M2) return obj[offset 0]
if (obj.Map === M5) return obj[offset 1]
if (obj.Map === M8) return obj[offset 0]
else → slow path
Each additional shape adds another comparison. With 4 shapes, that's up to 4 comparisons before reaching the property — roughly 4-8x slower than monomorphic.
function sumX(arr) {
let total = 0;
for (const obj of arr) {
total += obj.x; // Sees 3 different Maps → polymorphic
}
return total;
}
// Three different constructors → three different Maps
const mixed = [
{ x: 1, y: 2 }, // Map A
{ x: 1, y: 2, z: 3 }, // Map B (extra property)
{ x: 1, name: 'p' }, // Map C (different properties)
];
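One way to recover the fast path here is to normalize the objects into a single shape before the hot loop. The sketch below assumes the hypothetical helper `toPoint`; any mapping that produces object literals with the same properties in the same order would work:

```javascript
// toPoint is a hypothetical normalizer: every object it returns is created
// from the same literal, so all results share one hidden class.
function toPoint(obj) {
  return { x: obj.x, y: obj.y ?? 0 };
}

function sumXNormalized(arr) {
  let total = 0;
  for (const obj of arr) total += obj.x; // sees one Map → stays monomorphic
  return total;
}

const mixed = [
  { x: 1, y: 2 },          // Map A
  { x: 1, y: 2, z: 3 },    // Map B
  { x: 1, name: 'p' },     // Map C
];
const normalized = mixed.map(toPoint); // pay the conversion cost once, outside the loop
sumXNormalized(normalized); // → 3
```

The normalization itself is polymorphic, but it runs once per object instead of once per property access inside the loop.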
Megamorphic (5+ shapes) — The Slow Path
And this is where things fall off a cliff. Once an IC has seen more than 4 different hidden classes (the exact threshold varies), V8 gives up on caching individual shapes and falls back to a global hash-based lookup called the megamorphic stub.
// Megamorphic: no per-shape caching
lookup(obj.Map, "x") → hash table search → offset → read value
This is dramatically slower:
- Monomorphic: ~1-2 ns per access
- Polymorphic (4 shapes): ~4-8 ns per access
- Megamorphic: ~20-50 ns per access
The megamorphic state is sticky — once an IC goes megamorphic, it stays megamorphic. V8 doesn't try to recover. Yeah, really. There's no going back.
function readName(obj) {
return obj.name; // Called with 20 different object shapes → megamorphic
}
// API responses with varying optional fields
readName({ name: 'A' });
readName({ name: 'B', email: '...' });
readName({ name: 'C', age: 30 });
readName({ name: 'D', email: '...', age: 30 });
readName({ name: 'E', role: 'admin' });
// ... IC is now megamorphic. Every call pays the hash lookup cost.
Megamorphic ICs don't just slow down one property access — they prevent TurboFan from optimizing the entire function effectively. TurboFan relies on IC feedback to specialize code. When an IC is megamorphic, TurboFan can't specialize — it generates generic, slow code for that operation and everything that depends on its result.
Measuring IC State
You can observe IC transitions using V8's --trace-ic flag:
node --trace-ic your-script.js 2>&1 | grep "LoadIC"
Output shows transitions:
[LoadIC at offset 42]: . -> 1 (monomorphic)
[LoadIC at offset 42]: 1 -> P (polymorphic)
[LoadIC at offset 42]: P -> N (megamorphic)
The transitions are one-directional: monomorphic → polymorphic → megamorphic. V8 never transitions backward. Once megamorphic, always megamorphic for that IC site.
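You can also observe the effect directly with a rough micro-benchmark. This sketch times the same read loop over a single-shape array and a many-shape array; absolute numbers vary by machine and V8 version, so treat the output as directional only:

```javascript
// Same IC site, two inputs: one shape vs. sixteen shapes.
function readX(objs) {
  let total = 0;
  for (const o of objs) total += o.x; // the o.x IC site under test
  return total;
}

// Monomorphic input: every object comes from the same literal.
const mono = Array.from({ length: 100000 }, () => ({ x: 1 }));

// Megamorphic input: a rotating extra property name yields 16 hidden classes.
const mega = Array.from({ length: 100000 }, (_, i) => {
  const o = { x: 1 };
  o['p' + (i % 16)] = 0;
  return o;
});

for (const [label, objs] of [['mono', mono], ['mega', mega]]) {
  const t0 = process.hrtime.bigint();
  let sum = 0;
  for (let i = 0; i < 50; i++) sum += readX(objs);
  const t1 = process.hrtime.bigint();
  console.log(label, (Number(t1 - t0) / 1e6).toFixed(1), 'ms', sum);
}
```

Run the two inputs in separate processes if you want clean numbers: once the `o.x` site has gone megamorphic on the `mega` array, the `mono` array no longer gets the fast path either.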
Practical Patterns: Staying Monomorphic
Now that you understand the problem, let's talk about how to stay on the fast path.
Pattern 1: Consistent Object Shapes
The simplest rule — and honestly the one that gives you the most bang for your buck: every object passed to a function should have the same hidden class.
// BAD: 2^3 = 8 possible shapes
function createUser(data) {
const user = { id: data.id };
if (data.name) user.name = data.name;
if (data.email) user.email = data.email;
if (data.avatar) user.avatar = data.avatar;
return user;
}
// GOOD: exactly 1 shape
function createUser(data) {
return {
id: data.id,
name: data.name ?? null,
email: data.email ?? null,
avatar: data.avatar ?? null,
};
}
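If you want to sanity-check that two objects can share a hidden class, a portable proxy is to compare their own-property names in insertion order — a necessary (though not sufficient) condition for map equality. The helper below is an assumption of this article, not a V8 API:

```javascript
// Necessary condition for two objects sharing a hidden class:
// same own-property names, in the same insertion order.
function sameKeyOrder(a, b) {
  const ka = Object.keys(a);
  const kb = Object.keys(b);
  return ka.length === kb.length && ka.every((k, i) => k === kb[i]);
}

const good1 = { id: 1, name: 'A', email: null, avatar: null };
const good2 = { id: 2, name: null, email: null, avatar: null };
sameKeyOrder(good1, good2); // → true: these can share a Map

const bad1 = { id: 1 };
const bad2 = { id: 2, name: 'B' };
sameKeyOrder(bad1, bad2); // → false: definitely different Maps
```

For a definitive answer, run node with `--allow-natives-syntax` and call the V8 intrinsic `%HaveSameMap(a, b)`.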
Pattern 2: Separate Hot Paths by Shape
If you must handle different shapes, use separate functions so each gets its own monomorphic IC:
// BAD: one function, multiple shapes → polymorphic/megamorphic
function getArea(shape) {
return shape.width * shape.height; // .width IC sees Circle, Rect, Triangle...
}
// GOOD: separate functions, each monomorphic
function getRectArea(rect) {
return rect.width * rect.height; // Only sees Rect → monomorphic
}
function getCircleArea(circle) {
return Math.PI * circle.radius ** 2; // Only sees Circle → monomorphic
}
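If callers can't know the concrete type up front, a tag-based dispatcher keeps the arrangement workable. This is a sketch assuming a hypothetical `kind` discriminant field; the dispatch site itself is polymorphic, but the hot arithmetic inside each helper stays monomorphic:

```javascript
function getRectArea(rect) {
  return rect.width * rect.height;      // only ever sees rect shapes
}

function getCircleArea(circle) {
  return Math.PI * circle.radius ** 2;  // only ever sees circle shapes
}

// The .kind read sees two shapes (polymorphic), but each helper's
// property-access ICs stay monomorphic.
function area(shape) {
  return shape.kind === 'rect' ? getRectArea(shape) : getCircleArea(shape);
}

area({ kind: 'rect', width: 2, height: 3 }); // → 6
area({ kind: 'circle', radius: 1 });         // → Math.PI
```

Trading one cheap polymorphic read for many monomorphic ones is usually a net win in hot code.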
Pattern 3: Beware Different Prototype Chains
Objects from different constructors always have different Maps, even if their properties are identical:
class Point2D { constructor(x, y) { this.x = x; this.y = y; } }
class Vector2D { constructor(x, y) { this.x = x; this.y = y; } }
function sumX(obj) { return obj.x; }
sumX(new Point2D(1, 2)); // Map A (Point2D prototype chain)
sumX(new Vector2D(1, 2)); // Map B (Vector2D prototype chain)
// sumX is now polymorphic — different prototype chains mean different Maps
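When the two types are genuinely interchangeable, one fix is to construct both through a single class (or a single object-literal site) so every instance shares one Map. A minimal sketch, assuming the types can share a representation:

```javascript
// One class, one construction site → one hidden class for all instances.
class Vec2 {
  constructor(x, y) {
    this.x = x;
    this.y = y;
  }
}

function sumX(obj) {
  return obj.x;
}

sumX(new Vec2(1, 2)); // Map A
sumX(new Vec2(3, 4)); // still Map A → sumX stays monomorphic
```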
IC State in TurboFan Optimization
This is where IC state stops being a micro-optimization concern and becomes an architecture concern. TurboFan reads IC feedback to generate specialized code, and the IC state directly determines code quality:
| IC State | TurboFan Output | Performance |
|---|---|---|
| Monomorphic | Single type guard + direct offset load | Peak — equivalent to static language |
| Polymorphic | Multi-way type guard + per-shape offset loads | Good — linear overhead per shape |
| Megamorphic | Generic property lookup call | Poor — hash table on every access |
| Uninitialized | Generic code (no specialization possible) | Poor — no feedback to optimize with |
When TurboFan encounters a megamorphic IC, it emits a call to V8's generic property lookup runtime function — essentially the same slow path as the interpreter. The function can't be fully optimized because V8 doesn't know what shapes to expect.
Consider a data pipeline where a single .value property access sees 15 different object shapes. That IC is megamorphic — 15 shapes far exceed the polymorphic threshold of ~4 — so every access pays the full hash-lookup cost instead of a direct offset read.
Fix options:
- Normalize shapes upstream: ensure all objects entering the pipeline have identical hidden classes (same properties, same order, null for optional fields)
- Split the pipeline: route different object types to different processing functions so each stays monomorphic
- Extract into plain objects: const item = { value: raw.value } creates a uniform shape regardless of the source object's shape
The normalized approach typically yields 5-15x throughput improvement for tight loops.
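The extraction option is worth seeing concretely. A minimal sketch, assuming the raw items carry arbitrary extra fields:

```javascript
// Hot loop: reads .value from every item.
function processItems(items) {
  let total = 0;
  for (const item of items) total += item.value; // monomorphic after extraction
  return total;
}

// Raw inputs with varying optional fields — 3 different hidden classes here.
const rawItems = [
  { value: 1, meta: 'a' },
  { value: 2, tags: [] },
  { value: 3, meta: 'b', extra: true },
];

// One literal site → every extracted object shares one hidden class.
const uniform = rawItems.map(raw => ({ value: raw.value }));
processItems(uniform); // → 6
```

The map() is a one-time cost proportional to the input size; the hot loop then runs at monomorphic speed no matter how messy the source shapes are.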
Key Rules
1. Inline caches (ICs) remember {hidden class → property offset} mappings. They make property access fast by avoiding full lookups on repeated shapes.
2. Monomorphic = 1 shape = fastest. Single pointer comparison + direct memory read.
3. Polymorphic = 2-4 shapes = linear search through cached entries. Acceptable but measurably slower.
4. Megamorphic = 5+ shapes = hash table fallback. Dramatically slower and prevents TurboFan optimization.
5. IC transitions are one-way: mono → poly → mega. Once megamorphic, no recovery.
6. Keep objects monomorphic: consistent shapes, always initialize all properties, use null for absent optional fields.
7. Different constructors (classes) produce different Maps even with identical properties — prototype chains differ.
8. Separate hot paths by shape: use different functions for different object types to give each its own monomorphic IC.