Inline Caches: Monomorphic, Polymorphic, Megamorphic


The Lookup That Should Be Slow but Isn't

Think about this for a second. JavaScript is dynamically typed. When you write obj.x, the engine has no idea at compile time where x lives in memory. It could be an own property, a prototype property, a getter, a Proxy trap, or missing entirely. A naive implementation would search for x on every single access.

And yet -- V8 accesses obj.x in a tight loop at essentially the same speed as accessing a C struct field. How?

function getX(obj) { return obj.x; }

const point = { x: 10, y: 20 };

// After a few calls, this runs as fast as a C struct field access
for (let i = 0; i < 1000000; i++) {
  getX(point);
}

The answer is inline caches (ICs). They're the most important performance mechanism in V8, and they exist at every property access, function call, and operator in your code.

What Is an Inline Cache, Really?

Mental Model

Imagine you work at a library's front desk. The first time someone asks for "Introduction to Algorithms," you search the entire catalog, walk to aisle 7, shelf 3, and find it. That took 5 minutes. You make a note: "Introduction to Algorithms -> Aisle 7, Shelf 3." The next time someone asks for the same book, you skip the search and go directly to the shelf. That's an inline cache — a sticky note at the access site that remembers where to find things.

Now imagine people start asking for different books. Two or three different titles? You keep a short list of sticky notes and check each one. That's polymorphic. But if 50 different people ask for 50 different books, your sticky note system breaks down and you go back to the full catalog search. That's megamorphic.

So how does this actually work in V8? At the bytecode level, every property access like obj.x has an associated IC slot. The IC records what hidden class (Map) it has seen and where the property was found:

// Bytecode for: return obj.x;
LdaNamedProperty obj, "x", [IC_slot_0]

That [IC_slot_0] is the inline cache. It starts empty (uninitialized) and evolves through states based on what objects it encounters.
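Because feedback is recorded per access site, two syntactically separate obj.x reads keep separate ICs. A small sketch of the consequence (the state labels in the comments describe what V8 tracks internally; they aren't observable from plain JavaScript):

```javascript
// Each syntactic access site gets its own feedback slot. Splitting call
// sites by shape keeps each IC monomorphic; sharing one site across
// shapes makes that single IC polymorphic.
function getFrom2D(o) { return o.x; }   // only ever sees shape {x, y}
function getFrom3D(o) { return o.x; }   // only ever sees shape {x, y, z}
function getShared(o) { return o.x; }   // sees both shapes

const p2 = { x: 1, y: 2 };
const p3 = { x: 3, y: 4, z: 5 };

getFrom2D(p2);                 // this site stays monomorphic
getFrom3D(p3);                 // this site stays monomorphic too
getShared(p2); getShared(p3);  // this one site becomes polymorphic
```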

The Four IC States

Execution Trace
  • Uninitialized: the IC has never been hit. The first access performs a full property lookup.
  • Monomorphic: the IC has seen exactly 1 hidden class. A single Map check plus a direct offset read. Fastest state.
  • Polymorphic: the IC has seen 2-4 hidden classes. A linear search through a short list of Map/offset pairs.
  • Megamorphic: the IC has seen 5+ hidden classes. Falls back to a generic hash-table lookup. Slowest state.
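The whole progression can be driven through a single shared access site; the annotations below are what V8 records internally, not something you can inspect from script:

```javascript
// One IC slot's lifecycle, driven by the shapes it encounters.
function getX(obj) { return obj.x; } // the IC lives at this access site

getX({ x: 1 });        // uninitialized -> monomorphic (shape {x})
getX({ x: 1, y: 2 });  // monomorphic -> polymorphic (2 shapes)
getX({ x: 1, a: 0 });  // 3 shapes
getX({ x: 1, b: 0 });  // 4 shapes, still polymorphic
getX({ x: 1, c: 0 });  // 5th shape -> megamorphic
```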

The performance difference between these states is dramatic -- we're talking orders of magnitude:

IC State          | Lookup Speed | What Happens
Monomorphic       | ~1 ns        | Check Map pointer, read at known offset. One comparison, one memory read.
Polymorphic (2-4) | ~5-15 ns     | Linear scan through 2-4 Map/offset pairs. Still fast, but 5-15x slower.
Megamorphic (5+)  | ~50-100 ns   | Generic lookup via the Map's descriptor array or hash table. 50-100x slower.

In a loop running 10 million iterations, the difference between monomorphic and megamorphic is the difference between 10ms and 1000ms.

Monomorphic: The Fast Path

This is where you want to be. When an IC sees only one hidden class, V8 generates a fast path that looks like:

// Pseudocode for monomorphic IC at getX(obj):
if (obj.map === CACHED_MAP) {
  return obj[CACHED_OFFSET];  // Direct memory read at known offset
}
// Slow path: full lookup

This is a single pointer comparison followed by an offset read, which compiles down to roughly 2-3 machine instructions.

class Point {
  constructor(x, y) { this.x = x; this.y = y; }
}

function sumX(points) {
  let total = 0;
  for (const p of points) {
    total += p.x; // IC sees one Map (Point), stays monomorphic
  }
  return total;
}

// All Points have the same Map — monomorphic, maximum speed
const points = Array.from({length: 100000}, (_, i) => new Point(i, 0));
sumX(points);

Polymorphic: The Middle Ground

When an IC encounters 2-4 different hidden classes, it becomes polymorphic. V8 builds a small lookup table:

// Pseudocode for polymorphic IC (2 shapes):
if (obj.map === MAP_A) return obj[OFFSET_A];
if (obj.map === MAP_B) return obj[OFFSET_B];
// Slow path

This is still fast — a few comparisons — but it's measurably slower than monomorphic in hot loops.

function getX(obj) { return obj.x; }

const point2D = { x: 1, y: 2 };
const point3D = { x: 1, y: 2, z: 3 };

// IC at getX becomes polymorphic (2 shapes)
getX(point2D);
getX(point3D);

Megamorphic: The Slow Path

And this is where things go off a cliff. Once an IC has seen 5 or more different hidden classes, V8 stops tracking individual shapes. The IC transitions to megamorphic and falls back to a generic lookup: a shared engine-wide cache keyed by hidden class and property name, with a full search of the object's descriptor array or dictionary on a miss.

function getX(obj) { return obj.x; }

// Five different shapes = megamorphic
getX({ x: 1 });
getX({ x: 1, a: 2 });
getX({ x: 1, b: 2 });
getX({ x: 1, c: 2 });
getX({ x: 1, d: 2 });

// This IC is now permanently megamorphic
// Even passing { x: 1 } again goes through the slow path
Common Trap

Once an IC goes megamorphic, it stays megamorphic for the lifetime of that function's compiled code. It doesn't "reset" if you start passing consistent shapes again. The only way to get back to monomorphic is if V8 deoptimizes and recompiles the function — which it may or may not do.

The Real-World Impact: Let the Numbers Speak

// Setup: 100,000 objects with identical shape
class UniformPoint { constructor(x) { this.x = x; } }
const uniform = Array.from({length: 100000}, (_, i) => new UniformPoint(i));

// Setup: 100,000 objects with 10 different shapes
const mixed = Array.from({length: 100000}, (_, i) => {
  const obj = { x: i };
  // Add a unique property based on i % 10 to create 10 different shapes
  obj['prop' + (i % 10)] = true;
  return obj;
});

function sumX(arr) {
  let sum = 0;
  for (let i = 0; i < arr.length; i++) sum += arr[i].x;
  return sum;
}

sumX(uniform); // ~0.3ms (monomorphic IC)
sumX(mixed);   // ~5ms   (megamorphic IC, ~16x slower)

Same number of objects, same .x access, 16x performance difference. Let that sink in. This is entirely caused by IC states.
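A minimal Node.js harness to reproduce a comparison like this. The absolute times are illustrative and vary by machine and V8 version; only the ratio matters, and measuring the uniform array first matters too, because the mixed array drives the shared IC megamorphic:

```javascript
// Benchmark sketch: same access, same element count, different IC states.
class UniformPoint { constructor(x) { this.x = x; } }
const uniform = Array.from({ length: 100000 }, (_, i) => new UniformPoint(i));

// 10 distinct hidden classes, created by adding one varying property.
const mixed = Array.from({ length: 100000 }, (_, i) => {
  const obj = { x: i };
  obj['prop' + (i % 10)] = true;
  return obj;
});

function sumX(arr) {
  let sum = 0;
  for (let i = 0; i < arr.length; i++) sum += arr[i].x;
  return sum;
}

function timeIt(label, arr) {
  const t0 = performance.now(); // global in Node 16+ and browsers
  for (let i = 0; i < 50; i++) sumX(arr);
  console.log(label, (performance.now() - t0).toFixed(1) + ' ms');
}

timeIt('uniform (monomorphic):', uniform); // run first, IC still clean
timeIt('mixed   (megamorphic):', mixed);   // pollutes the IC for sumX
```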

Production Scenario: The Event Handler Slowdown

This one's a classic. A frontend app has an event normalization layer:

function handleEvent(event) {
  const x = event.clientX;  // IC for clientX
  const y = event.clientY;  // IC for clientY
  updateCursor(x, y);
}

// Called with MouseEvent, PointerEvent, TouchEvent (wrapped),
// custom synthetic events from drag library, test mock events...

Five+ different event shapes hit the same event.clientX access site. The ICs go megamorphic. On a 60fps animation driven by pointer events, this adds 2-3ms per frame — enough to cause visible jank.

The fix: normalize the shape at the boundary:

function normalizeEvent(event) {
  // Create a consistent shape before passing downstream
  return { clientX: event.clientX, clientY: event.clientY, type: event.type };
}

function handleEvent(event) {
  const normalized = normalizeEvent(event);
  // IC here is monomorphic — all normalized objects have the same shape
  updateCursor(normalized.clientX, normalized.clientY);
}

The trick is elegant: normalizeEvent absorbs the megamorphic cost (it's called less frequently), while the hot handleEvent path stays monomorphic.

How TurboFan uses IC feedback for optimization

When TurboFan optimizes a function, it reads the IC state for every property access:

  • Monomorphic IC: TurboFan emits a single Map check guard + direct offset load. If the guard fails (wrong Map), it deoptimizes.
  • Polymorphic IC: TurboFan emits a cascading check for each known Map. More code, more branches, but still avoids generic lookup.
  • Megamorphic IC: TurboFan cannot specialize. It emits a generic property lookup call, which is essentially the same speed as the interpreter.

This is why monomorphic ICs are so critical for peak performance — they're the only state where TurboFan can generate truly optimal code.
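In the monomorphic case, the guarded load TurboFan emits can be pictured in the same pseudocode style as the earlier snippets (hiddenClassOf, EXPECTED_MAP, OFFSET_OF_X, and deoptimize are illustrative names, not real APIs):

```
// Pseudocode for TurboFan output built from a monomorphic IC:
function getX_optimized(obj) {
  if (hiddenClassOf(obj) !== EXPECTED_MAP) deoptimize(); // bail back to bytecode
  return loadAtOffset(obj, OFFSET_OF_X);                 // raw load, no lookup
}
```

The deoptimize branch is what makes the specialization safe: if an object with an unexpected shape ever arrives, execution falls back to the interpreter rather than reading garbage at a wrong offset.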

Common Mistakes

  • Passing objects with different shapes through the same function. Each unique shape pushes the IC toward polymorphic/megamorphic, degrading performance for all callers. Instead, normalize object shapes at boundaries or use separate functions for different types.
  • Assuming polymorphic is "fast enough" without measuring. Each additional shape adds a branch to the IC check, and in microsecond-scale loops this multiplies. In hot loops (100K+ iterations), even polymorphic (2-4 shapes) is measurably slower than monomorphic.
  • Using a generic utility function for objects of many different shapes. A single function called with 10 different object shapes means megamorphic ICs everywhere. Split hot-path utilities by shape, or normalize input shapes before the hot path.
  • Not considering IC state when mixing different class hierarchies in arrays. Iterating over an array with mixed shapes forces every property access IC into polymorphic or worse. Keep arrays homogeneous — all elements should have the same hidden class.


Key Rules

  1. Every property access in your code has an inline cache that tracks the hidden classes of objects passed through it.
  2. Monomorphic (1 shape) is 50-100x faster than megamorphic (5+ shapes). Keep hot code paths monomorphic.
  3. Megamorphic is permanent for that compilation — once an IC sees 5+ shapes, it doesn't recover without recompilation.
  4. Normalize object shapes at boundaries: convert diverse input shapes into a single consistent shape before hot paths.
  5. Arrays should be homogeneous. Mixing object shapes in arrays forces every access in every loop to be polymorphic or worse.
  6. This is the #1 JavaScript performance optimization: consistent object shapes through your hot paths.