
Dedicated Workers: Lifecycle & Messaging

Advanced · 20 min read

The Interview Question That Exposes Everything

Here's a question that trips up even senior engineers:

const worker = new Worker('worker.js');
worker.postMessage({ type: 'START', data: largeArray });
worker.postMessage({ type: 'STOP' });
console.log('Both messages sent');

Does the worker process START before STOP? Always? What if largeArray has 10 million items — does the second postMessage wait for the first one to finish serializing?

The answer reveals whether you understand the structured clone algorithm, the worker message queue, and a subtle gotcha: each postMessage call triggers a synchronous structured clone on the calling thread before the message is enqueued. If largeArray is huge, that first postMessage blocks the main thread during serialization — and the second call waits behind it. The worker will always receive START before STOP (messages are ordered), but your main thread is frozen during the clone.

Mental Model

Think of a Worker as a coworker in a separate office. You communicate by putting notes in their inbox (message queue). Each note is a photocopy of your document (postMessage uses structured clone — a deep copy). The photocopying happens at your desk (main thread) before the note goes in the inbox. If you photocopy a 500-page document, you're stuck at the copier. Your coworker processes notes in order, one at a time. They can't walk over and look at your screen (no DOM access) — everything goes through the inbox.

Creating a Worker

The Worker constructor takes a URL to a script file. The browser fetches and executes this script in a new thread:

// main.js
const worker = new Worker('/workers/data-processor.js');

The worker script runs in a completely separate global scope — DedicatedWorkerGlobalScope instead of Window. No document, no window, no DOM. But you get self, fetch, indexedDB, caches, crypto, WebSocket, setTimeout/setInterval, and importScripts.

Module Workers

In Chrome 80+, Firefox 114+, and Safari 15+, you can use ES modules in workers:

const worker = new Worker('/workers/data-processor.js', {
  type: 'module'
});

Module workers give you import/export syntax, strict mode by default, and top-level await. They also avoid polluting the global scope — something importScripts (the classic approach) does by design.

// worker.js (module worker)
import { processChunk } from './utils/processing.js';
import { validate } from './utils/validation.js';

self.onmessage = (event) => {
  const validated = validate(event.data);
  const result = processChunk(validated);
  self.postMessage(result);
};
Module Worker Browser Support

Module workers work in Chrome 80+, Firefox 114+, and Safari 15+. If you need to support older browsers, use classic workers with importScripts(). Most modern bundlers (Vite, webpack 5+) handle worker bundling and can output classic workers even from module source.
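
If you do need to branch between module and classic workers at runtime, one common detection trick is to observe whether the browser reads the type option at all. Here is a hedged sketch (supportsModuleWorkers is a hypothetical helper name, not a standard API):

```javascript
// Feature-detect module worker support. Engines that understand
// { type: 'module' } read the `type` option during construction,
// so a getter on the options object tells us whether it was consulted.
function supportsModuleWorkers() {
  let supported = false;
  const options = {
    get type() {
      supported = true;
      return 'module';
    },
  };
  try {
    // 'data:,' is an empty script, so no network fetch is needed.
    new Worker('data:,', options).terminate();
  } catch {
    // Some browsers disallow data: workers; we only care whether
    // `type` was read before any error was thrown.
  }
  return supported;
}
```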

Quiz
What is the main advantage of module workers over classic workers?

The Messaging Protocol

Communication between the main thread and a worker happens through postMessage and the message event. Let's trace the full lifecycle:

// main.js
const worker = new Worker('/workers/math.js', { type: 'module' });

worker.onmessage = (event) => {
  console.log('Result:', event.data);
};

worker.onerror = (event) => {
  console.error('Worker error:', event.message);
};

worker.postMessage({ operation: 'fibonacci', n: 40 });
// workers/math.js
function fibonacci(n) {
  if (n <= 1) return n;
  return fibonacci(n - 1) + fibonacci(n - 2);
}

self.onmessage = (event) => {
  const { operation, n } = event.data;
  if (operation === 'fibonacci') {
    const result = fibonacci(n);
    self.postMessage({ result, operation });
  }
};
Execution Trace

  1. Worker creation: new Worker('/workers/math.js') spawns a new OS thread; the browser fetches and parses the script.
  2. Register handler: worker.onmessage = ... registers a main-thread handler for messages from the worker.
  3. Send message: worker.postMessage({...}) structured-clones the object on the main thread and enqueues it in the worker's message queue.
  4. Worker receives: self.onmessage fires; the worker thread dequeues the message and calls the handler with the deserialized data.
  5. Worker computes: fibonacci(40) runs entirely on the worker thread, leaving the main thread free to handle UI.
  6. Worker responds: self.postMessage({result}) structured-clones the result and enqueues it on the main thread's message queue.
  7. Main receives: worker.onmessage fires; the main thread dequeues the message and runs the handler with the result.

Message Ordering Guarantees

Messages between a worker and its parent are always delivered in order. If you send A then B, the worker receives A before B. This is guaranteed by the spec — the underlying message port uses a FIFO queue.

worker.postMessage('first');
worker.postMessage('second');
worker.postMessage('third');
// Worker ALWAYS receives: 'first', 'second', 'third' — in that order

However, if you have multiple workers, there's no ordering guarantee between them.
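
Ordering alone doesn't tell you which response belongs to which request once several are in flight. A common pattern is to tag each message with an id and match responses back to pending promises. A minimal sketch, assuming the worker echoes the id it received (createRequester is a hypothetical helper):

```javascript
// Correlate requests and responses over postMessage by id.
// Assumes the worker replies with { id, result } for each { id, payload }.
function createRequester(worker) {
  let nextId = 0;
  const pending = new Map(); // id -> resolve callback

  worker.addEventListener('message', (event) => {
    const { id, result } = event.data;
    const resolve = pending.get(id);
    if (resolve) {
      pending.delete(id);
      resolve(result);
    }
  });

  return (payload) =>
    new Promise((resolve) => {
      const id = nextId++;
      pending.set(id, resolve);
      worker.postMessage({ id, payload });
    });
}
```

With this wrapper, each call returns a promise that settles with its own response, even when multiple requests overlap or resolve out of order.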

Quiz
You send three messages to a worker in sequence using worker.postMessage. The worker processes each and sends a response. In what order does the main thread receive the responses?

The Structured Clone Cost

This is the part most tutorials skip, and it's the most important performance consideration. Every postMessage call triggers the structured clone algorithm — a deep copy of the data being sent.

// This creates a FULL COPY of the object on each postMessage
const data = {
  users: new Array(100_000).fill(null).map((_, i) => ({
    id: i,
    name: `User ${i}`,
    scores: [Math.random(), Math.random(), Math.random()],
  })),
};

performance.mark('clone-start');
worker.postMessage(data); // Structured clone happens HERE, on the main thread
performance.mark('clone-end');
performance.measure('clone-cost', 'clone-start', 'clone-end');
// For 100K objects with nested arrays: ~50-150ms depending on device

Structured clone handles most JavaScript types — objects, arrays, Maps, Sets, Dates, RegExps, Blobs, ArrayBuffers, even cyclic references. But it cannot clone:

  • Functions (throws DataCloneError)
  • DOM nodes
  • Property descriptors, getters/setters
  • Prototype chains (you get plain objects back)
  • Symbols
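
You can observe these limits directly with the global structuredClone() function (available in modern browsers and Node 17+), which runs the same algorithm postMessage uses:

```javascript
// Structured clone keeps own data properties but strips the prototype
// chain, and it refuses to clone functions.
class Point {
  constructor(x, y) {
    this.x = x;
    this.y = y;
  }
  magnitude() {
    return Math.hypot(this.x, this.y);
  }
}

const original = new Point(3, 4);
const cloned = structuredClone(original);

console.log(cloned.x, cloned.y);        // 3 4 — own data properties survive
console.log(cloned instanceof Point);   // false — prototype chain is gone
console.log(typeof cloned.magnitude);   // undefined — methods are lost

try {
  structuredClone({ handler: () => {} });
} catch (error) {
  console.log(error.name);              // DataCloneError — functions can't be cloned
}
```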
Common Trap

The structured clone happens synchronously on the calling thread. When the main thread calls worker.postMessage(bigData), the main thread is blocked during serialization. This means a 100ms clone operation blocks the main thread for 100ms — defeating the purpose of using a Worker in the first place. The fix is transferable objects (next topic) or keeping messages small.

Quiz
What happens to the main thread when you call worker.postMessage with a 50MB nested object?

Error Handling

Worker errors surface in two ways:

// 1. The 'error' event — uncaught exceptions in the worker
worker.onerror = (event) => {
  console.error(`Worker error in ${event.filename}:${event.lineno}`);
  console.error(event.message);
  event.preventDefault(); // Prevents the error from propagating to window.onerror
};

// 2. The 'messageerror' event — deserialization failure
worker.onmessageerror = (event) => {
  console.error('Failed to deserialize worker message');
};

In the worker itself, you can catch errors before they bubble:

// worker.js
self.onmessage = (event) => {
  try {
    const result = riskyOperation(event.data);
    self.postMessage({ status: 'success', result });
  } catch (error) {
    self.postMessage({
      status: 'error',
      message: error.message,
      stack: error.stack,
    });
  }
};

The second pattern (try/catch inside the worker, sending error data via postMessage) is more reliable in production because it gives you structured error information. The onerror event only provides message, filename, and lineno — no stack trace, no custom error data.

Worker Termination

Workers can be terminated from either side:

// From main thread — immediate, non-graceful
worker.terminate();

// From worker — also immediate
self.close();

worker.terminate() kills the worker instantly. Any in-progress computation is abandoned. Any pending messages in the worker's queue are discarded. This is fine for "cancel this operation" but dangerous if the worker holds resources (IndexedDB transactions, open streams).

For graceful shutdown, use a message protocol:

// main.js
worker.postMessage({ type: 'SHUTDOWN' });
worker.onmessage = (event) => {
  if (event.data.type === 'SHUTDOWN_COMPLETE') {
    worker.terminate();
  }
};

// worker.js
self.onmessage = (event) => {
  if (event.data.type === 'SHUTDOWN') {
    cleanupResources();
    self.postMessage({ type: 'SHUTDOWN_COMPLETE' });
  }
};
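
The two approaches combine naturally: ask for a graceful shutdown first, and hard-kill only if the worker doesn't confirm in time. A sketch, assuming the worker implements the SHUTDOWN protocol above (shutdownWorker is a hypothetical helper):

```javascript
// Graceful shutdown with a hard-kill fallback. Assumes the worker
// replies with { type: 'SHUTDOWN_COMPLETE' } once cleanup is done.
function shutdownWorker(worker, timeoutMs = 2000) {
  return new Promise((resolve) => {
    const timer = setTimeout(() => {
      worker.terminate(); // worker never confirmed — hard kill
      resolve('forced');
    }, timeoutMs);

    worker.addEventListener('message', function onComplete(event) {
      if (event.data.type === 'SHUTDOWN_COMPLETE') {
        clearTimeout(timer);
        worker.removeEventListener('message', onComplete);
        worker.terminate(); // safe now — cleanup already ran
        resolve('graceful');
      }
    });

    worker.postMessage({ type: 'SHUTDOWN' });
  });
}
```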
Quiz
What happens to a pending IndexedDB transaction inside a worker when you call worker.terminate()?

Comlink: Ergonomic Worker Communication

Raw postMessage gets tedious fast. Every operation needs a message type, a handler, and serialization logic. Comlink (1.1 kB, from Google Chrome Labs) turns a worker into a transparent proxy — you call methods on it like a local object:

// worker.js
import { expose } from 'comlink';

const api = {
  fibonacci(n) {
    if (n <= 1) return n;
    return api.fibonacci(n - 1) + api.fibonacci(n - 2);
  },
  async processData(records) {
    return records.map(expensiveTransform);
  },
};

expose(api);
// main.js
import { wrap } from 'comlink';

const worker = new Worker('/workers/math.js', { type: 'module' });
const api = wrap(worker);

const result = await api.fibonacci(40);
console.log(result);

Comlink uses Proxy and MessageChannel under the hood. Every method call becomes a postMessage round-trip, so it adds a small overhead per call. But for coarse-grained operations (one call that does substantial work), the ergonomic benefit is enormous.

Comlink Transfer Support

Comlink supports transferable objects via Comlink.transfer(). Wrap any value with Comlink.transfer(value, [transferables]) to avoid the structured clone cost for ArrayBuffers and other transferable types.

Worker Pooling Pattern

Creating a worker takes 5-20ms (script fetch, parse, compile). If you're spawning workers for short tasks, the startup cost dominates. A worker pool keeps workers alive and dispatches tasks to them:

class WorkerPool {
  #workers;
  #queue;
  #available;

  constructor(url, size = navigator.hardwareConcurrency || 4) {
    this.#workers = Array.from({ length: size }, () => new Worker(url, { type: 'module' }));
    this.#available = [...this.#workers];
    this.#queue = [];
  }

  exec(data) {
    return new Promise((resolve, reject) => {
      const task = { data, resolve, reject };
      const worker = this.#available.pop();
      if (worker) {
        this.#dispatch(worker, task);
      } else {
        this.#queue.push(task);
      }
    });
  }

  #dispatch(worker, task) {
    worker.onmessage = (event) => {
      task.resolve(event.data);
      const next = this.#queue.shift();
      if (next) {
        this.#dispatch(worker, next);
      } else {
        this.#available.push(worker);
      }
    };
    worker.onerror = (event) => {
      task.reject(new Error(event.message));
      const next = this.#queue.shift();
      if (next) {
        this.#dispatch(worker, next);
      } else {
        this.#available.push(worker);
      }
    };
    worker.postMessage(task.data);
  }

  terminate() {
    this.#workers.forEach((w) => w.terminate());
  }
}

Usage:

const pool = new WorkerPool('/workers/data-processor.js', 4);

const results = await Promise.all(
  chunks.map((chunk) => pool.exec(chunk))
);
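
To fan work out across the pool like this, you first need to split the input into per-worker slices. A tiny hypothetical helper, not part of the pool itself:

```javascript
// Split an array into at most `chunkCount` contiguous chunks so each
// pool worker receives one slice of the work.
function chunkArray(items, chunkCount) {
  const size = Math.ceil(items.length / chunkCount);
  const chunks = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

// chunkArray([1, 2, 3, 4, 5], 2) → [[1, 2, 3], [4, 5]]
```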
Quiz
Why use a worker pool instead of creating a new worker for each task?
Common Mistakes

  • What developers do: create a new Worker for every user action (click, keystroke).
    Why it hurts: worker creation costs 5-20ms per instance. Creating and destroying workers per action adds latency that offsets the parallelism benefit.
    What they should do: create workers at app startup or lazily on first use, then reuse them via a pool.

  • What developers do: send large objects through postMessage without considering clone cost.
    Why it hurts: structured clone is synchronous on the main thread. A 50MB object clone can block the main thread for hundreds of milliseconds — worse than just doing the computation on the main thread.
    What they should do: use transferable objects for ArrayBuffers, or restructure data to minimize what needs to be cloned.

  • What developers do: use worker.terminate() for routine cleanup.
    Why it hurts: terminate() kills the worker instantly, aborting any in-progress IndexedDB transactions, fetch requests, or cleanup logic.
    What they should do: send a shutdown message and wait for the worker to confirm cleanup is complete; reserve terminate() for hard cancellation when graceful shutdown fails or times out.

  • What developers do: leave worker errors unhandled.
    Why it hurts: unhandled worker errors can fail silently, and the onerror event provides only limited information (no stack trace, no custom error data).
    What they should do: listen for the error event, and wrap worker logic in try/catch so structured error details reach the main thread via postMessage.

Challenge: Build a Cancellable Worker

Try to solve it before peeking at the answer.
// Build a pattern where the main thread can cancel an in-progress
// worker computation. The worker should check for cancellation
// periodically and abort early if requested.
//
// Constraints:
// - The worker processes an array of 1 million items
// - Processing takes ~2 seconds total
// - The user can click "Cancel" at any time
// - After cancellation, the worker should stop within 100ms
// - The main thread should receive partial results

// main.js
const worker = new Worker('/workers/processor.js');

function startProcessing(data) {
  // Your code: send data and handle results
}

function cancelProcessing() {
  // Your code: signal the worker to stop
}

// worker.js
self.onmessage = (event) => {
  // Your code: process data with cancellation support
};
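
One possible approach (a sketch, not the official answer): keep the hot loop in a pure function that processes the array in chunks, polls a cancellation flag, and yields to the worker's event loop between chunks so a CANCEL message can actually be delivered mid-run. processWithCancellation is a hypothetical helper; the browser-only worker glue is sketched in comments.

```javascript
// Core loop: process in chunks, check for cancellation between chunks,
// and yield so queued messages (like CANCEL) get a chance to run.
async function processWithCancellation(items, transform, shouldCancel, chunkSize = 10_000) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    if (shouldCancel()) {
      return { done: false, results }; // partial results on cancel
    }
    const end = Math.min(i + chunkSize, items.length);
    for (let j = i; j < end; j++) {
      results.push(transform(items[j]));
    }
    // Yield to the event loop; without this, a CANCEL message would
    // sit in the queue until the entire array was processed.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
  return { done: true, results };
}

// worker.js glue (browser only):
// let cancelled = false;
// self.onmessage = async (event) => {
//   if (event.data.type === 'CANCEL') { cancelled = true; return; }
//   const outcome = await processWithCancellation(
//     event.data.items, expensiveTransform, () => cancelled);
//   self.postMessage(outcome.done
//     ? { type: 'DONE', results: outcome.results }
//     : { type: 'CANCELLED', partial: outcome.results });
// };
```

With 10,000-item chunks over 1 million items, the worker checks the flag roughly every 20ms of a 2-second run, comfortably inside the 100ms cancellation budget, and the partial results array is sent back on cancel.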

Key Rules

  1. Workers run in a separate OS thread with their own global scope (DedicatedWorkerGlobalScope). No DOM, no window, no document.
  2. postMessage triggers synchronous structured clone on the calling thread. Large objects block the sender. Keep messages small or use transferables.
  3. Messages between a worker and its parent are always delivered in FIFO order. Multiple workers have no ordering guarantees between them.
  4. Module workers (type: 'module') support import/export, strict mode, and top-level await. Supported in all modern browsers.
  5. Pool workers for repeated tasks. Worker creation costs 5-20ms — amortize it across many operations, not one per user action.