
Latency, Bandwidth, and the Speed of Light

Intermediate · 14 min read

The Number One Lie in Web Performance

"Just get faster internet." This is what most people think when a website feels slow. More bandwidth. Bigger pipe. The truth? For modern web applications, bandwidth almost never matters. Latency almost always does.

A 100 Mbps connection and a 1 Gbps connection will load the same webpage in roughly the same time. The HTML is 50KB. The CSS is 30KB compressed. The critical JS is 80KB. Even at 100 Mbps, those transfer in under 15ms. But if the server is 5000km away, the speed of light guarantees at least 50ms of latency per round trip — and with DNS, TCP, TLS, and HTTP, you're looking at 4-6 round trips before the first useful pixel appears. That's 200-300ms of pure waiting, regardless of bandwidth.

Bandwidth is for throughput. Latency is for responsiveness. And for web apps, responsiveness is everything.

The Mental Model

Bandwidth is the width of a highway — how many cars can drive side by side. Latency is the length of the highway — how long it takes any single car to reach the destination. Making the highway wider (more bandwidth) helps when you're shipping truckloads of cargo (large file downloads, video streaming). But if you're sending a series of short messages back and forth (DNS, TCP, TLS, small HTTP responses), what matters is how long each message takes to arrive. A 20-lane highway across the continent isn't faster than a 2-lane road to the server next door.

The Speed of Light: The Hard Floor

Light in a vacuum travels at 299,792 km/s. But we don't have vacuum cables. Light in fiber optic cable travels at roughly 200,000 km/s (about 2/3 the speed of light, due to the refractive index of glass).

Let's do the math for a round trip:

| Route | Distance | One-way latency | Round-trip (RTT) |
|---|---|---|---|
| San Francisco → London | 8,600 km | 43ms | 86ms |
| San Francisco → Tokyo | 8,300 km | 42ms | 84ms |
| New York → Sydney | 16,000 km | 80ms | 160ms |
| London → Singapore | 10,800 km | 54ms | 108ms |
| Same city | ~50 km | 0.25ms | 0.5ms |

These are theoretical minimums: the speed of light through fiber, with no routing, switching, or processing delays. Real-world RTTs are 1.5-3x higher because packets pass through many routers and switches, and cables rarely follow a straight path.
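These theoretical minimums are easy to reproduce. A small sketch (it models only propagation through fiber, none of the routing or processing overhead):

```javascript
// Minimum round-trip time through fiber: propagation delay only.
const FIBER_KM_PER_S = 200000; // light in glass, roughly 2/3 of c

function minRttMs(distanceKm) {
  // out and back, converted to milliseconds
  return (2 * distanceKm / FIBER_KM_PER_S) * 1000;
}

console.log(minRttMs(8600));  // San Francisco -> London: ~86ms
console.log(minRttMs(16000)); // New York -> Sydney: ~160ms
```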

Actual measured RTTs (typical):

| Route | Measured RTT |
|---|---|
| Same region (e.g., US East to US East) | 5-20ms |
| Cross-continent (US to Europe) | 80-120ms |
| Cross-ocean (US to Asia) | 150-250ms |
| Mobile network (same region) | 50-100ms |
| Satellite internet | 500-700ms |

Quiz
Your origin server is in US-East. A user in Tokyo (measured RTT: 180ms) loads your page. The page requires 4 round trips (DNS, TCP, TLS, first HTTP response). What is the minimum latency before the first byte arrives?

Latency vs Bandwidth: When Each Matters

Bandwidth Matters For:

  • Video streaming — Netflix needs sustained throughput (5+ Mbps for HD, 25+ for 4K)
  • Large file downloads — transferring a 2GB file over 10 Mbps takes 27 minutes. Over 100 Mbps: 2.7 minutes
  • Backup and sync — cloud storage uploads are bandwidth-limited
  • Bulk data transfer — API responses returning megabytes of data

Latency Matters For:

  • Web page loads — dominated by round trips (DNS, TCP, TLS, HTTP) not data volume
  • API calls — a 2KB JSON response needs under 1ms of transfer but 50-200ms of round trip
  • Interactive applications — every user action triggers a round trip to the server
  • Real-time features — chat, collaboration, gaming need sub-100ms latency
  • Search — each keystroke might trigger a request; 200ms latency makes autocomplete feel broken

For typical web applications, the bottleneck is almost always latency, not bandwidth. The critical resources (HTML, CSS, key JS) are small. The problem is the number of sequential round trips required to fetch them.

The Math That Proves It

Consider loading a page with 200KB of critical resources (HTML + CSS + JS, compressed):

| Connection bandwidth | Transfer time | Latency (4 RTTs at 100ms) | Total |
|---|---|---|---|
| 10 Mbps | 160ms | 400ms | 560ms |
| 100 Mbps | 16ms | 400ms | 416ms |
| 1 Gbps | 1.6ms | 400ms | 402ms |

Going from 10 Mbps to 1 Gbps saves 158ms. Reducing latency from 100ms to 20ms RTT saves 320ms (4 x 80ms). Latency reduction has 2x more impact, even with the slowest broadband connection.
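These totals follow from a simple additive model (an assumption: real browsers overlap transfer with some of the latency, so this is an upper-bound sketch):

```javascript
// Total load time = serialization time + sequential round-trip latency.
function totalLoadMs(sizeKB, bandwidthMbps, roundTrips, rttMs) {
  // sizeKB * 8 = kilobits; Mbps = kilobits per ms, so the division yields ms
  const transferMs = (sizeKB * 8) / bandwidthMbps;
  return transferMs + roundTrips * rttMs;
}

console.log(totalLoadMs(200, 10, 4, 100));   // 560ms: slow link, still latency-dominated
console.log(totalLoadMs(200, 1000, 4, 100)); // ~402ms: 100x the bandwidth saves only ~158ms
console.log(totalLoadMs(200, 1000, 4, 20));  // ~82ms: cutting RTT is what actually helps
```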

And on mobile with high latency (200ms RTT):

| Connection | Transfer time | Latency (4 RTTs) | Total |
|---|---|---|---|
| 4G (20 Mbps, ~200ms RTT) | 80ms | 800ms | 880ms |
| 5G (200 Mbps, ~50ms RTT) | 8ms | 200ms | 208ms |

The 5G speed boost comes largely from lower latency, not just higher bandwidth.

Quiz
Which change would most improve the load time of a typical web page?

Practical Strategies to Fight Latency

You can't speed up light. But you can reduce the distance it travels and the number of trips it makes.

1. CDNs: Reduce Distance

Put your content close to users. A CDN with edge nodes in 200+ cities means the nearest server is typically within 50km — reducing RTT from 100ms+ to under 10ms.

<!-- Serve static assets from CDN edge -->
<link rel="stylesheet" href="https://cdn.example.com/style.a8f3e2.css">

2. Reduce Round Trips

Every sequential round trip is latency you can't hide:

  • DNS prefetch + preconnect — start DNS/TCP/TLS early, in parallel with HTML parsing
  • HTTP/2 multiplexing — one connection for all requests, no connection queuing
  • HTTP/3 (QUIC) — 1 RTT handshake instead of 2 (TCP + TLS)
  • Inline critical CSS — eliminate the round trip to fetch CSS
  • Server-side rendering — send ready HTML, no round trip for client-side data fetching

3. Preconnect and Prefetch

Tell the browser to start connections and fetch resources before they're needed:

<!-- Complete DNS + TCP + TLS for critical origins -->
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>

<!-- Resolve DNS for less-critical origins -->
<link rel="dns-prefetch" href="https://analytics.example.com">

<!-- Fetch resources the next page will need -->
<link rel="prefetch" href="/next-page-bundle.js">

<!-- Preload resources the current page needs ASAP -->
<link rel="preload" href="/critical-font.woff2" as="font" type="font/woff2" crossorigin>

4. Connection Reuse

Avoid paying the handshake cost multiple times:

  • HTTP/2 — single connection per origin, all requests multiplexed
  • Keep-alive — connections persist across requests (HTTP/1.1 default)
  • Connection pooling — browsers maintain a pool of connections to recently visited origins

5. Edge Computing

Move computation closer to the user. Instead of a user in Tokyo hitting a server in Virginia:

Without edge:  Tokyo user → Virginia server (180ms RTT) → response
With edge:     Tokyo user → Tokyo edge (5ms RTT) → response (for cacheable/edge-computable data)

Edge functions (Cloudflare Workers, Vercel Edge Functions) can handle authentication, personalization, A/B testing, and API routing with sub-10ms latency.
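The pattern can be sketched in a few lines (illustrative names only; `handleAtEdge` and `edgeCache` are not a real edge-runtime API): the edge node answers from local state when it can, and pays the long origin round trip only on a miss.

```javascript
// Illustrative edge-node logic (hypothetical names, not a real runtime API):
// answer from the nearby edge cache when possible; otherwise pay the long
// round trip to the origin once and cache the result.
const edgeCache = new Map();

function handleAtEdge(path, fetchFromOrigin) {
  if (edgeCache.has(path)) {
    return { body: edgeCache.get(path), source: 'edge' }; // ~5ms from the user
  }
  const body = fetchFromOrigin(path); // ~180ms round trip to the origin
  edgeCache.set(path, body);
  return { body, source: 'origin' };
}
```

Real edge runtimes add eviction, TTLs, and per-request isolation, but the latency win is the same: repeat requests never cross the ocean.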

6. Reduce Payload Size (Latency * Bandwidth Interaction)

TCP slow start means the first data round trip carries only ~14KB. Keeping critical resources small reduces the number of round trips needed to transfer them:

50KB compressed → 3 slow start round trips
14KB compressed → 1 slow start round trip

That's 2 fewer round trips. At 100ms RTT, that's 200ms saved.
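Counting slow-start rounds can be sketched as follows, under the simplifying assumption that the congestion window starts at ~14KB and doubles every round trip:

```javascript
// Rounds needed to deliver a payload under idealized TCP slow start:
// the congestion window starts at ~14KB and doubles each round trip.
function slowStartRoundTrips(payloadKB, initialWindowKB = 14) {
  let deliveredKB = 0;
  let windowKB = initialWindowKB;
  let rounds = 0;
  while (deliveredKB < payloadKB) {
    deliveredKB += windowKB; // send a full window this round
    windowKB *= 2;           // window doubles after each successful round
    rounds += 1;
  }
  return rounds;
}

console.log(slowStartRoundTrips(14)); // 1 round: fits in the initial window
console.log(slowStartRoundTrips(50)); // 3 rounds: 14KB + 28KB + 56KB
```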

Common Trap

Compressing resources reduces bandwidth requirements but doesn't eliminate latency. A 1KB response still takes one full round trip. The minimum latency for any HTTP request is 1 RTT (the request travels to the server and the response travels back). Compression helps most when it brings resources under the 14KB TCP slow start threshold.

Measuring Latency

Browser DevTools

Chrome DevTools Network tab shows per-request timing breakdown:

  • Queueing — waiting for an available connection
  • DNS Lookup — DNS resolution time
  • Initial Connection — TCP handshake time
  • SSL — TLS handshake time
  • Waiting (TTFB) — Time to First Byte — includes server processing + round trip
  • Content Download — actual data transfer time

TTFB is the most telling metric. If TTFB is high but Content Download is fast, the problem is latency and/or server processing, not bandwidth.

Key Performance APIs

const timing = performance.getEntriesByType('navigation')[0];

console.log('DNS:', timing.domainLookupEnd - timing.domainLookupStart);
console.log('TCP:', timing.connectEnd - timing.connectStart);
console.log('TLS:', timing.secureConnectionStart > 0
  ? timing.connectEnd - timing.secureConnectionStart : 'N/A');
console.log('TTFB:', timing.responseStart - timing.requestStart);
console.log('Download:', timing.responseEnd - timing.responseStart);

Why satellite internet has terrible latency

Geostationary satellites orbit at 35,786 km altitude. A signal travels up to the satellite and back down: 2 x 35,786 = 71,572 km one way. Round trip: 143,144 km. At the speed of light: ~477ms minimum RTT. Add processing delays and you get 500-700ms RTT. This is why satellite internet (traditional geostationary like HughesNet and Viasat) is terrible for web browsing despite decent bandwidth (25-100 Mbps). SpaceX's Starlink uses Low Earth Orbit (550km altitude), achieving 20-40ms latency — dramatically better because the signal travels a much shorter distance.
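The altitude arithmetic generalizes. A sketch (radio travels at roughly c in vacuum; processing and terrestrial routing excluded):

```javascript
// Minimum RTT for a bent-pipe satellite link: user -> satellite -> ground
// station, then back. Excludes all processing and terrestrial routing.
const C_KM_PER_S = 299792; // speed of light in vacuum

function satelliteMinRttMs(altitudeKm) {
  const oneWayKm = 2 * altitudeKm; // up to the satellite, down to the ground
  return (2 * oneWayKm / C_KM_PER_S) * 1000;
}

console.log(satelliteMinRttMs(35786)); // geostationary: ~477ms
console.log(satelliteMinRttMs(550));   // LEO (Starlink-class): ~7ms before overhead
```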

Quiz
Two users both have 100 Mbps connections. User A has 20ms RTT to the server. User B has 200ms RTT. Loading the same page (4 round trips needed), approximately how much slower is User B's experience?

Common Mistakes

| What developers do | Why it's a problem | What they should do |
|---|---|---|
| Thinking faster internet (more bandwidth) will make websites load faster | Typical web pages have small critical resources (under 500KB compressed). At 10 Mbps, transfer takes ~400ms; at 100 Mbps, ~40ms. But 4 round trips at 100ms each add 400ms regardless of bandwidth. Latency dominates. | Treat page loads as latency-bound, not bandwidth-bound: reducing RTT has far more impact than increasing bandwidth. |
| Ignoring the number of sequential round trips in page load | Each sequential round trip adds a full RTT to load time. On high-latency connections (mobile, cross-ocean), each RTT costs 100-200ms; four unnecessary round trips can add 400-800ms. | Count and minimize sequential round trips: DNS, TCP, TLS, redirects, blocking resources. |
| Using preload/prefetch/preconnect for everything | Each preconnect opens a TCP+TLS connection (CPU and memory cost). Prefetching resources the user never visits wastes bandwidth, and over-preloading contends with critical current-page resources. | Preconnect to 2-4 critical origins, prefetch only resources likely needed next, preload only current-page critical resources. |

Key Takeaways

Key Rules
  1. Light in fiber travels at ~200,000 km/s. Cross-ocean round trips have a hard floor of 80-160ms that no optimization can reduce.
  2. Web page loads are latency-bound, not bandwidth-bound. Reducing RTT matters far more than increasing bandwidth for typical web apps.
  3. Each round trip (DNS, TCP, TLS, HTTP request) adds a full RTT of latency. Minimize sequential round trips by using CDNs, HTTP/2, preconnect, and inline critical resources.
  4. CDNs reduce the distance to the user, converting 100ms+ RTTs to under 10ms. This is the single most impactful latency optimization.
  5. TCP slow start limits the first data transfer to ~14KB. Keep critical resources under this threshold to fit in the first round trip.