Edge Rendering
Render Where Your Users Are
Traditional servers live in one or a few data centers. A user in Tokyo requesting a page from a server in Virginia adds ~150ms of network latency — just for the round trip. Edge rendering moves the server closer to the user by executing code on CDN edge nodes distributed globally.
Instead of one server in Virginia, you have 200+ micro-servers worldwide. The user in Tokyo hits an edge node in Tokyo. The user in Berlin hits an edge node in Frankfurt. The round-trip drops from 150ms to 10ms.
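Those round-trip numbers can be sanity-checked with a back-of-the-envelope model. The constants below are illustrative assumptions, not measurements: light in fiber covers roughly 200 km per millisecond, and real routes run longer than the great-circle distance.

```typescript
// Back-of-the-envelope RTT model. Assumptions (illustrative, not measured):
// light in fiber covers ~200 km per ms, and real routes run ~1.4x longer
// than the great-circle distance due to detours and queuing.
const FIBER_KM_PER_MS = 200
const ROUTE_OVERHEAD = 1.4

function estimateRttMs(distanceKm: number): number {
  // Round trip = 2x one-way propagation, inflated by the routing factor.
  return (2 * distanceKm * ROUTE_OVERHEAD) / FIBER_KM_PER_MS
}

console.log(Math.round(estimateRttMs(11_000))) // Tokyo -> Virginia (~11,000 km): 154
console.log(Math.round(estimateRttMs(500)))    // Tokyo -> nearby edge (~500 km): 7
```

The model lands in the same ballpark as the figures above: physics alone puts a cross-Pacific round trip over 100ms, which no amount of server tuning can remove.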
Think of pizza delivery. Traditional SSR is one centralized pizza kitchen downtown — everyone orders from the same place, and delivery time depends on how far away you live. Edge rendering is a franchise model with kitchens in every neighborhood. Your pizza is made at the closest location, so delivery is fast regardless of where you are. The catch: each franchise kitchen is smaller and can't make every menu item (API limitations).
V8 Isolates: Not Containers, Not VMs
Edge runtimes don't use containers or VMs. They use V8 isolates — lightweight execution environments within Google's V8 JavaScript engine.
A V8 isolate is a sandboxed instance of V8 with its own heap and execution context. Thousands of isolates can share a single process, each completely isolated from the others.
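Node's built-in `vm` module is not a true isolate (its contexts share the host heap and permissions), but it gives a feel for the model: many sandboxed execution contexts, each with its own globals, inside a single process. A rough sketch:

```typescript
import vm from 'node:vm'

// Two contexts in one process, each with its own globals,
// the way isolates each get their own heap.
const a = vm.createContext({ counter: 0 })
const b = vm.createContext({ counter: 0 })

vm.runInContext('counter += 1', a)
vm.runInContext('counter += 10', b)

console.log(a.counter, b.counter) // 1 10: neither context sees the other's state
```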
```
Traditional Server:
  VM or Container → Node.js process → your code
  Cold start:  500ms - 5s
  Memory:      128MB - 1GB per instance

Edge Runtime (V8 Isolate):
  Single V8 process → thousands of isolates → your code
  Cold start:  <5ms
  Memory:      ~128KB per isolate
```
The cold start difference is dramatic. A container-based serverless function (AWS Lambda) can take 500ms-5s to cold start. A V8 isolate cold starts in under 5 milliseconds. For SSR on the edge, this means the first request to a region has nearly zero overhead.
What Edge Runtimes Can and Cannot Do
Edge runtimes are not Node.js. They implement a subset of Web APIs — closer to a Service Worker than a server.
What you have:
- `fetch()`: HTTP requests
- `Request` / `Response`: standard web APIs
- `URL`, `URLSearchParams`
- `TextEncoder` / `TextDecoder`
- `crypto.subtle`: the Web Crypto API
- `ReadableStream` / `WritableStream`
- `setTimeout` / `setInterval` (with limits)
- `Headers`, `FormData`, `Blob`
- WebAssembly
What you don't have:
- `fs`: no file system access
- `child_process`: no spawning processes
- `net`, `dgram`: no raw sockets
- `node:buffer` module imports (though the global `Buffer` is available on platforms like the Vercel Edge Runtime)
- Most Node.js built-in modules
- Native addons (C/C++ modules)
- Long-running connections (WebSocket limitations vary)
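Because the gap is in available globals rather than syntax, code that must run in both environments sometimes feature-detects the runtime before touching Node-only APIs. A minimal sketch; checking `process.versions.node` is a common heuristic, not an official API, and the fallback config URL is hypothetical:

```typescript
// Heuristic: Node.js exposes process.versions.node; edge runtimes generally don't.
function isNodeRuntime(): boolean {
  return typeof process !== 'undefined' &&
    typeof process.versions?.node === 'string'
}

async function readConfig(): Promise<string> {
  if (isNodeRuntime()) {
    // Full Node.js: the file system is available.
    const { readFile } = await import('node:fs/promises')
    return readFile('./config.json', 'utf-8')
  }
  // Edge: no fs, so fetch the config over HTTP instead.
  const res = await fetch('https://example.com/config.json')
  return res.text()
}
```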
```ts
export const runtime = 'edge'

export default async function handler(request: Request) {
  const data = await fetch('https://api.example.com/data')
  const json = await data.json()

  return new Response(JSON.stringify(json), {
    headers: { 'Content-Type': 'application/json' },
  })
}
```
This works on the edge because it only uses fetch, Request, Response, and JSON — all standard web APIs.
```ts
import { readFile } from 'fs/promises'

export const runtime = 'edge'

export default async function handler() {
  const data = await readFile('./data.json', 'utf-8')
  return new Response(data)
}
```
This fails on the edge. No fs module. If your code needs the file system, you need the Node.js runtime.
Many popular npm packages depend on Node.js APIs internally. A package might look like a pure JavaScript utility but call Buffer.from() or process.env deep in its dependency tree. Always test your edge functions with the actual edge runtime — what works in Node.js development might crash at the edge. Next.js will warn you during build if an edge-incompatible module is detected.
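Short of deploying, you can at least smoke-test a handler locally using only the globals that Node 18+ and edge runtimes share (`Request`, `Response`, `URL`); Vercel's `edge-runtime` package offers closer emulation. A minimal sketch with a hypothetical handler:

```typescript
// An edge-style handler: Web APIs only, no Node.js built-ins.
async function handler(request: Request): Promise<Response> {
  const name = new URL(request.url).searchParams.get('name') ?? 'world'
  return new Response(JSON.stringify({ greeting: `hello, ${name}` }), {
    headers: { 'Content-Type': 'application/json' },
  })
}

// Node 18+ ships the same Request/Response globals, so this runs locally.
handler(new Request('https://example.com/api?name=edge')).then(async (res) => {
  console.log(res.status, await res.json()) // 200 { greeting: 'hello, edge' }
})
```

This catches accidental use of Node globals in your own code; transitive dependencies still need a run against the real edge runtime.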
Edge Use Cases
1. Middleware: Auth, Redirects, A/B Testing
The highest-value edge use case. Middleware runs before any rendering, making decisions at the edge:
```ts
import { NextResponse } from 'next/server'
import type { NextRequest } from 'next/server'

export function middleware(request: NextRequest) {
  const country = request.geo?.country || 'US'

  if (country === 'DE') {
    return NextResponse.redirect(new URL('/de', request.url))
  }

  const bucket = request.cookies.get('ab-test')?.value
  if (!bucket) {
    const newBucket = Math.random() < 0.5 ? 'control' : 'variant'
    const response = NextResponse.next()
    response.cookies.set('ab-test', newBucket)
    return response
  }

  return NextResponse.next()
}
```
This middleware runs on every request, globally, in under 5ms. It handles geo-based redirects and A/B test bucketing before the page even starts rendering.
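Because it runs on every request by default, Next.js lets you scope middleware with a `matcher` in the exported `config` so that static assets and framework internals skip it entirely. The exclusion pattern below follows the Next.js docs; adapt the list to your own routes:

```typescript
export const config = {
  // Run middleware on everything except API routes, static assets, and the favicon.
  matcher: ['/((?!api|_next/static|_next/image|favicon.ico).*)'],
}
```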
2. Edge SSR: Render Close to the User
For dynamic pages where TTFB matters, edge SSR renders HTML at the nearest edge node:
```tsx
export const runtime = 'edge'

export default async function Page() {
  const data = await fetch('https://api.example.com/products', {
    next: { revalidate: 60 },
  })
  const products = await data.json()

  return (
    <main>
      <h1>Products</h1>
      {products.map((p) => (
        <ProductCard key={p.id} {...p} />
      ))}
    </main>
  )
}
```
The page renders at the edge, fetching data from the API. If the API is also globally distributed, the total latency is minimal.
3. Personalization at the Edge
Combine edge rendering with cookies or headers for lightweight personalization without a full origin server round trip:
```tsx
import { headers } from 'next/headers'

export const runtime = 'edge'

export default async function Page() {
  const headersList = await headers()
  const locale = headersList.get('accept-language')?.split(',')[0] || 'en'
  const content = await fetch(`https://cms.example.com/home?locale=${locale}`)
  return <HomePage content={await content.json()} />
}
```
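Note that taking `split(',')[0]` ignores the quality weights browsers send in `Accept-Language`. A slightly more careful parser (still a sketch, with no validation of language tags) sorts entries by `q`:

```typescript
// Parse an Accept-Language header, honoring q-weights.
// Entries default to q=1; higher q means more preferred.
function preferredLocales(header: string): string[] {
  return header
    .split(',')
    .map((part) => {
      const [tag, ...params] = part.trim().split(';')
      const qParam = params.find((p) => p.trim().startsWith('q='))
      return { tag: tag.trim(), q: qParam ? parseFloat(qParam.trim().slice(2)) : 1 }
    })
    .filter((e) => e.tag && e.q > 0)
    .sort((a, b) => b.q - a.q)
    .map((e) => e.tag)
}

console.log(preferredLocales('de;q=0.8, en-US, fr;q=0.9'))
// → [ 'en-US', 'fr', 'de' ]
```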
The Edge Platform Landscape
| Platform | Runtime | Cold Start | Strengths | Limitations |
|---|---|---|---|---|
| Cloudflare Workers | V8 isolates (workerd) | Under 5ms | Largest edge network (300+ cities), Workers KV, Durable Objects, R2 storage | No Node.js APIs, 128MB memory, CPU time limits |
| Vercel Edge Functions | V8 isolates (Edge Runtime) | Under 5ms | Deep Next.js integration, seamless deployment, streaming SSR | Dependent on Vercel infrastructure, limited debugging tools |
| Deno Deploy | V8 isolates (Deno runtime) | Under 5ms | TypeScript native, web-standard APIs, built-in KV store | Smaller edge network, newer ecosystem |
| AWS Lambda@Edge | Node.js containers | 100ms-5s | AWS ecosystem integration, CloudFront distribution | Container cold starts, higher latency, complex deployment |
| Fastly Compute | WebAssembly (Wasm) | Under 1ms | Sub-millisecond cold start, language-agnostic via Wasm | Different programming model, smaller developer community |
When to Use Edge vs Node.js
Edge and PPR: the perfect combination
PPR and edge rendering complement each other beautifully. The static shell is cached on the CDN edge — served to users worldwide in under 50ms. The dynamic parts stream from the origin server (which has database access and full Node.js capabilities). The edge node concatenates the cached shell with the streamed dynamic content into a single response. Users get instant static content from the edge and fresh dynamic content from the origin, all in one seamless response.
| What developers do | What they should do |
|---|---|
| Put all pages on the edge for better performance. Edge runtimes have API limitations, and a page that queries a regional database gains nothing from edge rendering: the data fetch still goes to one region. | Only use edge for pages that benefit from global distribution and don't need Node.js APIs. |
| Assume edge cold starts are similar to Lambda cold starts. The technologies are fundamentally different: isolates share a V8 process, while containers boot an entire OS and runtime. Don't apply Lambda mental models to edge functions. | Expect V8 isolate cold starts under 5ms, versus 100ms-5s for container-based cold starts. |
| Test edge functions only in Node.js development mode. Node.js has many APIs that edge runtimes lack, so code that works perfectly in dev can crash on the edge because of a transitive dependency on Buffer, fs, or process. | Test with the actual edge runtime to catch API compatibility issues. |
| Use edge SSR when the data source is in a single region. Edge SSR adds a network hop from the edge to your regional database; a server in the same region as the database is often faster for data-heavy pages. | Co-locate your server with your data source, or use CDN caching instead. |
1. Edge runtimes use V8 isolates: lightweight, near-zero cold start, but limited to Web APIs (no Node.js modules).
2. Edge rendering reduces latency by moving compute close to users. Best for middleware, auth, A/B tests, and pages with globally distributed data.
3. If your data lives in one region, edge SSR adds latency (an edge-to-origin hop). Co-locate the server with the data instead.
4. Middleware is the highest-value edge use case: it runs on every request, globally, in milliseconds.
5. Always test edge functions with the actual edge runtime; Node.js dev mode hides API compatibility issues.
6. PPR + edge is the ideal combination: static shell from edge cache, dynamic parts streamed from origin.