
Writing V8-Friendly JavaScript

Intermediate · 11 min read

From Theory to Practice

Alright, you've learned the theory -- hidden classes, inline caches, element kinds, TurboFan's speculative optimization. Now let's turn all of that into patterns you can actually use. Every pattern here includes a concrete before/after with the reasoning.

And here's the thing: this is not about premature optimization. These patterns are free -- they don't make your code harder to read or maintain. They're simply the idiomatic way to write JavaScript that V8 handles well.

Pattern 1: Consistent Object Shapes

Mental Model

Think of V8 as a filing clerk who creates a new filing system for every unique combination of folders it encounters. Hand the clerk 100 folders with the same labels in the same order, and it creates one filing system and processes them instantly. Hand it 100 folders with randomly different labels, and it creates 100 separate systems and drowns in overhead. Consistent shapes = one system = fast.

Always Initialize All Properties

// Before: conditional properties create multiple hidden classes
function createUser(data) {
  const user = { id: data.id, name: data.name };
  if (data.email) user.email = data.email;
  if (data.avatar) user.avatar = data.avatar;
  return user;
}
// Creates up to 4 different shapes: {id,name}, {id,name,email},
// {id,name,avatar}, {id,name,email,avatar}

// After: always initialize all properties
function createUser(data) {
  return {
    id: data.id,
    name: data.name,
    email: data.email ?? null,
    avatar: data.avatar ?? null,
  };
}
// Always creates one shape: {id,name,email,avatar}

Impact: Functions processing arrays of these users go from polymorphic (4 shapes) to monomorphic (1 shape) inline caches. Measured improvement: 3-7x in tight loops.
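You can get a rough feel for this on your own machine with a micro-benchmark. The snippet below is an illustrative sketch, not from the original: it builds one array of uniform-shape users and one array that diverges into the four shapes above, then times the same summing loop over each. Separate copies of the loop are used so each call site gathers its own type feedback. Absolute numbers vary by machine and V8 version.

```javascript
// Uniform shape: every object is {id, name, email, avatar}.
function makeUniform(n) {
  const users = [];
  for (let i = 0; i < n; i++) {
    users.push({ id: i, name: "u" + i, email: null, avatar: null });
  }
  return users;
}

// Mixed shapes: conditional properties produce up to 4 hidden classes.
function makeMixed(n) {
  const users = [];
  for (let i = 0; i < n; i++) {
    const user = { id: i, name: "u" + i };
    if (i % 2) user.email = "e";   // shape diverges here
    if (i % 3) user.avatar = "a";
    users.push(user);
  }
  return users;
}

// Two identical copies so each u.id access site has independent feedback.
function sumIdsMono(users) {
  let sum = 0;
  for (const u of users) sum += u.id;
  return sum;
}
function sumIdsPoly(users) {
  let sum = 0;
  for (const u of users) sum += u.id;
  return sum;
}

const n = 100000;
const uniform = makeUniform(n);
const mixed = makeMixed(n);

let t = performance.now();
const a = sumIdsMono(uniform);
console.log("1 shape:  ", (performance.now() - t).toFixed(1), "ms");

t = performance.now();
const b = sumIdsPoly(mixed);
console.log("4 shapes: ", (performance.now() - t).toFixed(1), "ms");
```

Both loops compute the same sum; only the shape diversity at the property-access site differs.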

Use Classes for Repeated Object Creation

// Before: object literals with methods — each gets a new function instance
const points = [];
for (let i = 0; i < 100000; i++) {
  points.push({
    x: i, y: i,
    distanceTo(other) {
      return Math.sqrt((this.x - other.x) ** 2 + (this.y - other.y) ** 2);
    }
  });
}
// 100,000 separate function objects for distanceTo

// After: class with prototype method — one function shared by all
class Point {
  constructor(x, y) {
    this.x = x;
    this.y = y;
  }
  distanceTo(other) {
    return Math.sqrt((this.x - other.x) ** 2 + (this.y - other.y) ** 2);
  }
}
const points = [];
for (let i = 0; i < 100000; i++) {
  points.push(new Point(i, i));
}
// 1 function object on Point.prototype

Impact: ~100x less memory for methods. Consistent hidden class across all instances. Monomorphic property access guaranteed.
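The sharing is directly observable from plain JavaScript: literal objects each carry their own copy of the method, while class instances resolve the method to a single function on the prototype. A small sketch:

```javascript
// Literal objects: each object literal creates its own function object.
const litA = { x: 0, y: 0, distanceTo(o) { return Math.hypot(this.x - o.x, this.y - o.y); } };
const litB = { x: 3, y: 4, distanceTo(o) { return Math.hypot(this.x - o.x, this.y - o.y); } };
console.log(litA.distanceTo === litB.distanceTo); // false: two separate functions

// Class instances: one function shared via Point.prototype.
class Point {
  constructor(x, y) { this.x = x; this.y = y; }
  distanceTo(o) { return Math.hypot(this.x - o.x, this.y - o.y); }
}
const p = new Point(0, 0);
const q = new Point(3, 4);
console.log(p.distanceTo === q.distanceTo); // true: both resolve to the prototype method
console.log(p.distanceTo(q)); // 5
```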

Maintain Property Order

// Before: different code paths create same properties in different orders
function fromAPI(response) {
  return { name: response.name, id: response.id }; // Shape: {name, id}
}
function fromDB(row) {
  return { id: row.id, name: row.name }; // Shape: {id, name} — DIFFERENT!
}

// After: same property order everywhere
function fromAPI(response) {
  return { id: response.id, name: response.name }; // Shape: {id, name}
}
function fromDB(row) {
  return { id: row.id, name: row.name }; // Shape: {id, name} — SAME
}

Impact: All downstream functions see one shape instead of two. Inline caches stay monomorphic.

Pattern 2: Packed Arrays

You already know why this matters from the element kinds topic. Let's see the patterns.

Build with push(), Not new Array(n)

// Before: pre-allocated holey array
function range(n) {
  const arr = new Array(n); // HOLEY_SMI_ELEMENTS
  for (let i = 0; i < n; i++) arr[i] = i;
  return arr; // Still holey even though every slot is filled
}

// After: built with push — always packed
function range(n) {
  const arr = [];  // PACKED_SMI_ELEMENTS
  for (let i = 0; i < n; i++) arr.push(i);
  return arr; // Packed — no hole checks ever
}

Impact: ~10-15% faster iteration because V8 skips hole checks on every element access.

Keep Array Types Homogeneous

// Before: mixed types in array
const data = [1, 2, 3.14, "four", null, { five: 5 }];
// PACKED_ELEMENTS — slowest packed kind, no type specialization

// After: separate arrays by type
const numbers = [1, 2, 3.14]; // PACKED_DOUBLE_ELEMENTS
const labels = ["four"];       // PACKED_ELEMENTS (strings)
const metadata = [{ five: 5 }]; // PACKED_ELEMENTS (objects)

Impact: Number arrays with PACKED_DOUBLE_ELEMENTS are stored as flat double buffers — cache-friendly and fast to iterate.

Avoid Holes

// Before: accidental holes
const sparse = [1, 2, 3];
sparse[100] = 4; // Creates holes at indices 3-99
// HOLEY_SMI_ELEMENTS — every access checks for holes

// After: dense array
const dense = [1, 2, 3];
dense.push(4); // No holes — PACKED_SMI_ELEMENTS
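Holes are observable from plain JavaScript, which makes this easy to verify: a hole is a missing index, not an index holding undefined. A small sketch:

```javascript
const sparse = [1, 2, 3];
sparse[100] = 4;             // indices 3-99 are now holes
console.log(sparse.length);  // 101
console.log(3 in sparse);    // false: index 3 is a hole, not an undefined value
console.log(sparse[3]);      // undefined: the read falls through the hole

const dense = [1, 2, 3];
dense.push(4);
console.log(3 in dense);     // true: every index up to length - 1 exists
console.log(dense.length);   // 4
```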

Pattern 3: Monomorphic Call Sites

This is probably the single highest-impact pattern on this entire page.

Normalize Shapes at Boundaries

// Before: function called with objects from many sources
function processEvent(event) {
  return event.type + ": " + event.target; // Megamorphic IC
}
// Called with MouseEvent, KeyboardEvent, CustomEvent, mock events...

// After: normalize at the boundary
function normalizeEvent(raw) {
  return { type: raw.type, target: raw.target };
}

function processEvent(event) {
  return event.type + ": " + event.target; // Monomorphic IC
}

// Call site: always normalize first
processEvent(normalizeEvent(mouseEvent));
processEvent(normalizeEvent(keyEvent));

Impact: Normalizing shapes before hot paths keeps inline caches monomorphic. The normalization cost is paid once; the benefit is multiplied by every subsequent access.

Separate Cold and Hot Functions

// Before: one function handles both startup parsing and runtime processing
function parseValue(value) {
  // Called with every type at startup (feedback pollution)
  // Called with only strings at runtime (but feedback is already wide)
  if (typeof value === 'string') return value.trim();
  if (typeof value === 'number') return String(value);
  if (typeof value === 'object') return JSON.stringify(value);
  return String(value);
}

// After: separate functions with clean feedback
function parseConfigValue(value) {
  // Cold path: diverse types, V8 won't optimize heavily
  if (typeof value === 'string') return value.trim();
  if (typeof value === 'number') return String(value);
  if (typeof value === 'object') return JSON.stringify(value);
  return String(value);
}

function parseUserInput(value) {
  // Hot path: always strings, TurboFan optimizes for strings
  return value.trim();
}

Impact: The hot path function builds clean type feedback, enabling TurboFan to generate optimal string-handling code.

Pattern 4: Type-Stable Hot Paths

Keep Types Consistent in Loops

// Before: type changes mid-loop
function accumulate(items) {
  let result = 0; // Starts as Smi
  for (const item of items) {
    result += item.value; // If item.value is ever a float, result becomes HeapNumber
    if (result > 1000000) result = result.toFixed(2); // Now it's a string! Deopt.
  }
  return result;
}

// After: consistent types throughout
function accumulate(items) {
  let result = 0;
  for (const item of items) {
    result += item.value;
  }
  return result > 1000000 ? result.toFixed(2) : String(result);
  // Type change happens after the hot loop, not inside it
}

Impact: The loop stays in Smi/Number arithmetic throughout. TurboFan generates pure numeric machine code without deoptimization guards for string conversion.

Avoid Null/Undefined in Arithmetic Paths

// Before: null sneaks into arithmetic
function average(values) {
  let sum = 0;
  for (const v of values) {
    sum += v; // null coerces to 0 (silently wrong average); undefined makes sum NaN
  }
  return sum / values.length;
}

// After: guard before arithmetic
function average(values) {
  let sum = 0;
  let count = 0;
  for (const v of values) {
    if (v != null) {
      sum += v;
      count++;
    }
  }
  return count > 0 ? sum / count : 0;
}
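The two failure modes differ, which is worth seeing concretely: in arithmetic, null coerces to 0 and silently skews the result, while undefined poisons the sum with NaN. The guard `v != null` catches both. A small sketch repeating the guarded version:

```javascript
console.log(5 + null);                    // 5: null coerces to 0, the bad value vanishes silently
console.log(Number.isNaN(5 + undefined)); // true: undefined turns the sum into NaN

// The guarded version: != null filters both null AND undefined.
function average(values) {
  let sum = 0;
  let count = 0;
  for (const v of values) {
    if (v != null) {
      sum += v;
      count++;
    }
  }
  return count > 0 ? sum / count : 0;
}
console.log(average([2, null, 4, undefined])); // 3
```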

Pattern 5: Avoid Dictionary Mode

Never Use delete

// Before: delete transitions to dictionary mode
function cleanResponse(response) {
  delete response._internal;
  delete response._metadata;
  return response; // Dictionary mode — all access is slower
}

// After: construct a new object
function cleanResponse(response) {
  const { _internal, _metadata, ...clean } = response;
  return clean; // Fresh object in fast mode
}

// Or, if you need to keep the same reference:
function cleanResponse(response) {
  response._internal = undefined;
  response._metadata = undefined;
  return response; // Still fast mode
}

Use Map for Dynamic Key Collections

// Before: using plain objects as dynamic dictionaries
const cache = {};
function addToCache(key, value) {
  cache[key] = value; // Hundreds of unique keys -> likely dictionary mode
}

// After: use Map — designed for dynamic keys
const cache = new Map();
function addToCache(key, value) {
  cache.set(key, value); // Map handles dynamic keys efficiently
}

Impact: Map is internally optimized for frequent additions and deletions. It won't transition to a "slow mode" — it's already using hash tables by design, with V8-optimized implementations.
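Because Map tolerates deletion without any slow-mode penalty, it also suits bounded caches where entries are evicted constantly — a worst case for a plain object. A minimal sketch (the BoundedCache name and LRU eviction policy are illustrative, not from the original); it relies on Map preserving insertion order, so the first key is always the oldest:

```javascript
// Minimal bounded cache: evicts the least-recently-used entry.
class BoundedCache {
  constructor(maxSize) {
    this.maxSize = maxSize;
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key);      // re-insert to mark as most recently used
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    this.map.delete(key);      // refresh position if the key already exists
    this.map.set(key, value);
    if (this.map.size > this.maxSize) {
      this.map.delete(this.map.keys().next().value); // evict the oldest key
    }
  }
}

const cache = new BoundedCache(2);
cache.set("a", 1);
cache.set("b", 2);
cache.get("a");              // "a" is now most recently used
cache.set("c", 3);           // evicts "b", the least recently used
console.log(cache.get("b")); // undefined
console.log(cache.get("a")); // 1
```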

Production Scenario: The Shape-Consistent Refactor

A data pipeline processes 500K events per second. Each event is a plain object created from JSON responses:

// Before: events from different sources had different shapes
// Source A: { type, timestamp, data }
// Source B: { type, ts, payload }
// Source C: { eventType, time, body }
// 3+ shapes through every processing function = megamorphic ICs everywhere

// After: normalize at ingestion
function normalizeEvent(raw, source) {
  return {
    type: raw.type ?? raw.eventType,
    timestamp: raw.timestamp ?? raw.ts ?? raw.time,
    data: raw.data ?? raw.payload ?? raw.body ?? null,
    source,
  };
}

// All downstream processing sees one shape
function processEvent(event) {
  // Monomorphic IC for every property access
  updateMetrics(event.type, event.timestamp);
  storeData(event.data);
}

The team measured a 4x throughput improvement -- from 500K events/sec to 2M events/sec -- by adding a single normalization step at the ingestion boundary. One function, 4x faster.

Common Mistakes

What developers do: Applying these patterns to cold code (runs < 100 times)
Why it's wrong: V8 only optimizes hot code. Restructuring cold code for V8 friendliness wastes developer time with zero benefit.
What to do instead: Focus on hot loops and frequently-called functions. Cold code is never optimized by TurboFan anyway.

What developers do: Using TypeScript types and thinking V8 will benefit
Why it's wrong: TypeScript helps developer experience, but the types are erased at compile time — V8 never sees them. Runtime type stability is what matters.
What to do instead: Keep actual runtime values consistent; V8 only sees runtime types.

What developers do: Using object spread for config merging in hot paths: { ...defaults, ...overrides }
Why it's wrong: Spread allocates a new object per call, and the resulting shape depends on which overrides are present.
What to do instead: For hot paths, use a fixed-shape builder function that always produces the same properties.

What developers do: Optimizing without measuring, guessing which code is hot
Why it's wrong: Developers are notoriously bad at guessing which code is hot. Let the profiler tell you.
What to do instead: Profile first with --prof or Chrome DevTools. Optimize only measured hot spots.
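For real profiling, Node's --prof flag and Chrome DevTools are the right tools. For quick A/B comparisons between two variants, a tiny timing harness is often enough. The bench helper below is an illustrative sketch, not from the original; absolute timings vary by machine and V8 version:

```javascript
// Tiny benchmark helper: runs fn repeatedly and reports elapsed milliseconds.
// Good enough to compare two variants; use --prof or DevTools for real profiling.
function bench(label, fn, iterations = 1e6) {
  // Warm up first so V8 has a chance to optimize before we measure.
  for (let i = 0; i < 1000; i++) fn(i);
  const start = performance.now();
  for (let i = 0; i < iterations; i++) fn(i);
  const elapsed = performance.now() - start;
  console.log(`${label}: ${elapsed.toFixed(1)} ms`);
  return elapsed;
}

const elapsed = bench("square", (i) => i * i);
```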


Key Rules

  1. Initialize all object properties upfront, even optional ones (use null). Conditional property addition creates shape divergence.
  2. Use classes for objects created in bulk. Prototype methods are shared (1 copy); literal methods are duplicated (N copies).
  3. Build arrays with push() or Array.from(), never new Array(n). Avoid holes. Keep types homogeneous.
  4. Normalize object shapes at ingestion boundaries. One shape through the hot path = monomorphic everywhere.
  5. Never use delete. Set to undefined, destructure, or construct a new object.
  6. Split cold-path (diverse types) and hot-path (consistent types) logic into separate functions.
  7. Profile before optimizing. These patterns are free for new code, but don't refactor working code without measurements.