WebAssembly Integration
What WebAssembly Actually Is
WebAssembly is not a programming language. It's a binary instruction format — a compact, efficient bytecode that browsers execute at near-native speed. You write code in C, C++, Rust, Go, or any language with a WASM compiler, and the browser runs it alongside JavaScript.
The key insight: WASM does not replace JavaScript. It complements it. JavaScript handles the UI, DOM, events, and orchestration. WASM handles the heavy lifting — image codecs, physics engines, crypto, video processing, the compute-intensive inner loops that JavaScript is too slow for.
Figma's rendering engine is WASM (compiled from C++). Photoshop on the web is WASM. Google Earth is WASM. AutoCAD is WASM. These are not toy demos — they are production apps processing gigabytes of data at native speed in the browser.
Think of WebAssembly as a foreign consultant you bring into your JavaScript team. They speak a different language (binary bytecode) and have their own workspace (linear memory). Communication happens through a defined interface (imports/exports) — you pass numbers back and forth, and they do the heavy computation. They cannot touch your office supplies (the DOM) directly, but they are extremely fast at their job.
How WASM Works
The Compilation Flow
Loading a WASM Module
```js
const response = await fetch('/computation.wasm');
const bytes = await response.arrayBuffer();
const { instance } = await WebAssembly.instantiate(bytes, importObject);

const result = instance.exports.fibonacci(40);
```
Or using the streaming API (preferred — starts compiling while downloading):
```js
const { instance } = await WebAssembly.instantiateStreaming(
  fetch('/computation.wasm'),
  importObject
);
```
instantiateStreaming is faster because the browser compiles the WASM bytecode as it arrives over the network, instead of waiting for the full download. Always prefer it when serving WASM with Content-Type: application/wasm.
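If the server sends the wrong Content-Type, instantiateStreaming rejects. A common pattern is a small loader that tries the streaming path first and falls back to buffer-based instantiation; this is a sketch (the URL and import object are placeholders, not part of any real API):

```javascript
// Try streaming compilation first; fall back to ArrayBuffer-based
// instantiation if the server's Content-Type is not application/wasm.
async function loadWasm(url, importObject = {}) {
  try {
    return await WebAssembly.instantiateStreaming(fetch(url), importObject);
  } catch (err) {
    // Streaming failed (e.g. wrong MIME type): download fully, then compile.
    const response = await fetch(url);
    const bytes = await response.arrayBuffer();
    return WebAssembly.instantiate(bytes, importObject);
  }
}
```

Both paths resolve to the same `{ module, instance }` shape, so callers don't need to know which one ran.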
The Memory Model: Linear Memory
WASM modules operate on linear memory — a contiguous ArrayBuffer that both WASM and JavaScript can access:
```js
const memory = new WebAssembly.Memory({ initial: 256 }); // 256 pages = 16MB

const { instance } = await WebAssembly.instantiate(bytes, {
  env: { memory },
});

const buffer = new Uint8Array(memory.buffer);
buffer[0] = 42;
```
Each page is 64KB. Linear memory can grow (up to a maximum you specify) but never shrink. WASM reads and writes to this buffer using pointer arithmetic — just like C.
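Growth has a subtle consequence worth knowing: growing detaches the old ArrayBuffer, so any typed-array view created before the grow goes stale and must be re-created. A small sketch of the behavior:

```javascript
// One page is 64KB. Start at 1 page and allow growth up to 4 pages.
const memory = new WebAssembly.Memory({ initial: 1, maximum: 4 });
const view = new Uint8Array(memory.buffer);
console.log(memory.buffer.byteLength); // 65536 (1 page)

const prevPages = memory.grow(1); // returns the previous size in pages: 1

// Growing detaches the old ArrayBuffer, so stale views read as empty.
console.log(view.byteLength);          // 0 (detached)
console.log(memory.buffer.byteLength); // 131072 (2 pages)
```

This is why glue code from Emscripten and wasm-bindgen re-acquires its heap views after any call that might grow memory.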
Passing Complex Data
WASM functions can only take and return numbers (i32, i64, f32, f64). To pass strings, arrays, or objects, you write them into linear memory and pass the pointer:
```js
function passStringToWasm(instance, str) {
  const encoder = new TextEncoder();
  const bytes = encoder.encode(str);
  const ptr = instance.exports.alloc(bytes.length + 1);
  const memory = new Uint8Array(instance.exports.memory.buffer);
  memory.set(bytes, ptr);
  memory[ptr + bytes.length] = 0; // null terminator
  return ptr;
}
```
This manual memory management is tedious. Toolchains like Emscripten and wasm-bindgen generate glue code to handle it.
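The reverse direction works the same way: to read a C-style string out of linear memory, scan for the null terminator and decode. A sketch, assuming the module exports its memory as above:

```javascript
// Decode a null-terminated UTF-8 string stored at `ptr` in linear memory.
function readStringFromWasm(instance, ptr) {
  const memory = new Uint8Array(instance.exports.memory.buffer);
  let end = ptr;
  while (memory[end] !== 0) end++; // find the null terminator
  return new TextDecoder().decode(memory.subarray(ptr, end));
}
```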
JS-WASM Interop
Exports: WASM Functions Callable from JS
```c
// In C
#include <emscripten.h>

EMSCRIPTEN_KEEPALIVE
int add(int a, int b) {
  return a + b;
}
```

Called from JavaScript:

```js
const result = instance.exports.add(3, 4); // 7
```
Imports: JS Functions Callable from WASM
```js
const importObject = {
  env: {
    log: (value) => console.log('From WASM:', value),
    getTime: () => performance.now(),
  },
};

const { instance } = await WebAssembly.instantiate(bytes, importObject);
```
The WASM module declares which imports it needs. If you don't provide them, instantiation fails.
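You can inspect what a module expects before instantiating it, via WebAssembly.Module.imports (and its counterpart WebAssembly.Module.exports). A sketch using a minimal empty module; a real module would return entries like { module: 'env', name: 'log', kind: 'function' }:

```javascript
// The 8-byte header "\0asm" + version 1 is a complete (empty) module.
const bytes = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);
const mod = new WebAssembly.Module(bytes);

// Each entry describes one required import: { module, name, kind }.
console.log(WebAssembly.Module.imports(mod)); // [] (nothing required)
console.log(WebAssembly.Module.exports(mod)); // []
```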
Emscripten: C/C++ to WASM
Emscripten is the mature toolchain for compiling C/C++ to WASM. It provides a complete POSIX-like environment:
```sh
emcc physics.c -o physics.js -s WASM=1 -O3 \
  -s EXPORTED_FUNCTIONS='["_simulate", "_malloc", "_free"]' \
  -s EXPORTED_RUNTIME_METHODS='["ccall", "cwrap"]'
```
Emscripten generates two files: a .wasm binary and a .js glue file that handles loading, memory management, and API bridging.
```js
const simulate = Module.cwrap('simulate', 'number', ['number', 'number']);
const result = simulate(1000, 0.016);
```
cwrap creates a JavaScript wrapper around the WASM function with proper type conversion.
What Emscripten Provides
- File system emulation (the FS module)
- OpenGL to WebGL translation
- pthreads to Web Workers + SharedArrayBuffer
- SDL to Canvas/WebAudio
- Memory management (malloc/free)
- Exception handling
wasm-bindgen: Rust to WASM
Rust has first-class WASM support. wasm-bindgen generates high-level bindings that feel native:
```rust
use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub fn fibonacci(n: u32) -> u64 {
    match n {
        0 => 0,
        1 => 1,
        _ => {
            let mut a: u64 = 0;
            let mut b: u64 = 1;
            for _ in 2..=n {
                let temp = a + b;
                a = b;
                b = temp;
            }
            b
        }
    }
}

#[wasm_bindgen]
pub fn process_image(data: &[u8], width: u32, height: u32) -> Vec<u8> {
    // Image processing in Rust — returns result to JS
    data.iter().map(|&p| 255 - p).collect()
}
```
```sh
wasm-pack build --target web
```

```js
import init, { fibonacci, process_image } from './pkg/my_crate.js';

await init();
console.log(fibonacci(40)); // instant
```
wasm-bindgen handles string conversion, vector passing, error propagation, and even lets you call DOM APIs from Rust (through web-sys and js-sys crates).
Performance: Expectations vs Reality
When WASM Is Faster Than JavaScript
- CPU-bound computation: parsing, encoding/decoding, hashing, compression
- Predictable performance: no GC pauses, no JIT warmup, no deoptimization
- Tight loops over typed data: image pixels, audio samples, physics vectors
- Existing C/C++/Rust codebases: port rather than rewrite
When WASM Is NOT Faster
- DOM manipulation: WASM cannot touch the DOM. Every DOM call goes through JS, adding overhead.
- Simple operations: for trivial functions, the JS-WASM boundary crossing negates any speed gain.
- I/O-bound work: network requests, file reads — the bottleneck is not computation.
- Code that V8 already optimizes well: modern JS engines are incredibly fast for idiomatic JS. A simple for loop over numbers can match WASM speed.
The Boundary Cost
Every call between JS and WASM has overhead. A function that takes 1 microsecond to execute but is called 1 million times per frame will spend more time crossing the boundary than computing. Minimize boundary crossings — do the work in one big WASM call, not a million small ones.
```js
// Bad: 1M boundary crossings per frame
for (let i = 0; i < 1000000; i++) {
  result[i] = instance.exports.processPixel(pixels[i]);
}

// Good: 1 boundary crossing per frame
instance.exports.processAllPixels(pixelPtr, 1000000);
```
The WasmGC proposal
WebAssembly 3.0 (standardized September 2025) includes WasmGC — garbage collection built into the WASM runtime. Before WasmGC, languages like Java, Kotlin, and Dart had to ship their own GC implementation inside the WASM module, bloating binary sizes by megabytes. With WasmGC, these languages use the host's optimized GC directly. Kotlin/Wasm and Dart/Wasm already use WasmGC in production, producing dramatically smaller binaries.
Real-World Use Cases
| Product | What WASM Does |
|---|---|
| Figma | Rendering engine (C++ via Emscripten) |
| Photoshop Web | Image processing, filters, layer compositing |
| Google Earth | 3D terrain rendering |
| AutoCAD | CAD engine |
| Squoosh | Image compression (codecs compiled to WASM) |
| FFmpeg.wasm | Video encoding/decoding in the browser |
| SQLite WASM | Full SQL database client-side |
WASM + Workers: The Full Picture
For maximum performance, run WASM in a Web Worker to keep the main thread free:
```js
const worker = new Worker('/wasm-worker.js');

worker.postMessage({ type: 'process', data: imageBuffer }, [imageBuffer]);
worker.onmessage = (event) => {
  displayResult(event.data.result);
};
```
Inside the worker:
```js
let wasmInstance;

async function init() {
  const { instance } = await WebAssembly.instantiateStreaming(
    fetch('/image-processor.wasm')
  );
  wasmInstance = instance;
}

self.onmessage = async (event) => {
  if (!wasmInstance) await init();
  const { type, data } = event.data;
  const result = processInWasm(wasmInstance, data);
  self.postMessage({ result }, [result]);
};
```
WASM + Worker + Transferable objects = heavy computation off the main thread with zero-copy data passing.
| What developers do | What they should do |
|---|---|
| **Thinking WASM is always faster than JavaScript.** Modern JS engines (V8, SpiderMonkey) produce highly optimized machine code for hot loops. WASM's advantage is in avoiding GC pauses, JIT warmup, and deoptimization, not in raw instruction throughput for simple operations. | WASM excels at CPU-bound computation and predictable performance. For DOM work, I/O, and simple logic, JavaScript is as fast or faster. |
| **Calling WASM functions in a tight loop from JavaScript.** Each JS-to-WASM call has fixed overhead (type checking, context switching). At 1M calls per frame, this overhead dominates. | Batch work into a single WASM call: pass all data in one call and let WASM loop internally. |
| **Using WebAssembly.instantiate instead of instantiateStreaming.** instantiate waits for the full download before compiling. instantiateStreaming compiles as bytes arrive, cutting load time significantly for large modules; it requires Content-Type: application/wasm. | Use instantiateStreaming with fetch() for parallel download and compilation. |
1. WASM is a binary instruction format, not a language — compile from C/C++/Rust/Go to run at near-native speed in the browser
2. WASM communicates with JS through numeric exports/imports and shared linear memory (ArrayBuffer)
3. Use instantiateStreaming for parallel download + compilation of WASM modules
4. Minimize JS-WASM boundary crossings — batch work into single calls
5. Run WASM in Web Workers for CPU-heavy tasks to keep the main thread responsive