HTTP/3, QUIC, and Zero-RTT

Intermediate · 14 min read

Why Rebuild the Entire Transport Layer?

HTTP/2 was a massive improvement over HTTP/1.1. Multiplexing, header compression, stream priorities — all great. But it had one fatal flaw it couldn't fix: TCP.

When a single TCP packet gets lost, every HTTP/2 stream on that connection freezes until the retransmission arrives. On a mobile network with 2% packet loss, this happens constantly. The irony: HTTP/2's single-connection design made this worse than HTTP/1.1's six parallel connections, where a lost packet stalled only one of the six.

The HTTP/3 team looked at this problem and realized: you can't fix TCP. It's baked into every operating system kernel, every router, every middlebox on the internet. Changing TCP requires decades of OS updates and hardware refreshes.

So they built something new on top of UDP. They called it QUIC.

The Mental Model

TCP is like a single conveyor belt carrying packages for multiple customers. If one package jams the belt, everything behind it stops, even packages for other customers. QUIC gives each customer their own independent conveyor belt. If your package jams your belt, everyone else's packages keep moving. And the security checkpoint (TLS) is built right into the belt system instead of being a separate step.

QUIC: Not Just "UDP With Reliability"

QUIC (originally "Quick UDP Internet Connections" in Google's experimental version; the IETF standard treats QUIC as a name, not an acronym) runs on top of UDP, but it's not raw UDP. It implements everything TCP does — reliable delivery, congestion control, flow control — plus things TCP can't do.

What QUIC Does Differently

1. Independent stream multiplexing — QUIC streams are truly independent at the transport layer. A lost packet on stream 3 only blocks stream 3. Streams 1, 5, and 7 keep flowing. This is the single biggest win over HTTP/2.

2. Built-in TLS 1.3 — encryption isn't optional and isn't a separate layer. QUIC integrates TLS 1.3 directly into the handshake, so the transport handshake and the crypto handshake happen simultaneously.

3. Connection migration — TCP connections are identified by a 4-tuple (source IP, source port, destination IP, destination port). Change your WiFi network? New IP address? TCP connection dies. QUIC uses a connection ID that survives network changes. Switch from WiFi to cellular and your streams keep going.

4. Userspace implementation — TCP lives in the OS kernel. Changing it requires OS updates that take years to deploy. QUIC runs in userspace (the application), so updates ship as fast as browser updates — every 4-6 weeks.
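The head-of-line blocking difference in point 1 can be shown with a toy model (not a real QUIC implementation): packets for several streams arrive, one is lost, and we compare what each transport can deliver. All names here are illustrative.

```python
# Toy model contrasting TCP's single ordered byte stream with QUIC's
# per-stream ordering. Packet 2 (stream B) is "lost" in transit.

packets = [  # (sequence_number, stream_id, payload)
    (1, "A", "a1"), (2, "B", "b1"), (3, "A", "a2"), (4, "C", "c1"),
]
lost = {2}  # sequence numbers dropped by the network

def tcp_delivered(packets, lost):
    """TCP: one ordered byte stream. Delivery stops at the first gap."""
    out = []
    for seq, stream, data in sorted(packets):
        if seq in lost:
            break  # the gap blocks EVERYTHING behind it, all streams
        out.append((stream, data))
    return out

def quic_delivered(packets, lost):
    """QUIC: ordering is per stream, so a loss only stalls its own stream."""
    out, blocked = [], set()
    for seq, stream, data in sorted(packets):
        if seq in lost:
            blocked.add(stream)  # only this stream waits for the retransmit
        elif stream not in blocked:
            out.append((stream, data))
    return out

print(tcp_delivered(packets, lost))   # [('A', 'a1')] -- A and C stalled too
print(quic_delivered(packets, lost))  # [('A', 'a1'), ('A', 'a2'), ('C', 'c1')]
```

With the same single loss, the TCP model delivers one item while the QUIC model delivers everything except the lost stream's data.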

Quiz
What is QUIC's primary advantage over TCP for HTTP/2-style multiplexing?

The Handshake: 1-RTT and 0-RTT

With TCP + TLS 1.2, a new secure connection costs 3 RTTs. TCP + TLS 1.3 brought it down to 2 RTTs. QUIC does it in 1 RTT for new connections.

New Connection: 1-RTT

QUIC combines the transport handshake and TLS 1.3 handshake into a single round trip:

Execution Trace

1. Client Initial: client sends the QUIC handshake, TLS ClientHello, and key share in one packet (transport and crypto combined).
2. Server Response: server sends handshake completion and TLS parameters, and can start sending encrypted data (1 RTT total).
3. Data flows: both sides have encryption keys; application data flows immediately (saves 1 RTT vs TCP + TLS 1.3).

TCP + TLS 1.3:   1 RTT (TCP) + 1 RTT (TLS) = 2 RTTs
QUIC:            1 RTT (combined)          = 1 RTT
Savings:         1 RTT (50-150ms on typical connections)
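The RTT comparison above is simple enough to turn into back-of-envelope arithmetic. This sketch (hypothetical numbers, 150ms RTT as an example mobile link) just multiplies RTT by the handshake round trips each stack needs before application data can flow:

```python
def setup_time_ms(rtt_ms, handshake_rtts):
    """Time spent on handshakes before the first byte of app data."""
    return rtt_ms * handshake_rtts

rtt = 150  # example: a typical mobile connection, in milliseconds
for name, rtts in [("TCP + TLS 1.2", 3), ("TCP + TLS 1.3", 2),
                   ("QUIC (new)", 1), ("QUIC (0-RTT)", 0)]:
    print(f"{name:14} {setup_time_ms(rtt, rtts)} ms")
# TCP + TLS 1.2  450 ms
# TCP + TLS 1.3  300 ms
# QUIC (new)     150 ms
# QUIC (0-RTT)   0 ms
```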

Returning Connection: 0-RTT

If you've connected to a server before, QUIC can resume with zero round trips. The client sends encrypted application data in the very first packet using a previously established PSK (Pre-Shared Key):

First visit:     1 RTT  (QUIC handshake)
Return visit:    0 RTT  (data in first packet)

The browser sends the request and the key material simultaneously. The server can start processing the request and sending the response immediately.

Common Trap

0-RTT data has the same replay attack vulnerability as TLS 1.3's 0-RTT. An attacker who captures the first packet can replay it, causing the server to process the request again. QUIC 0-RTT should only carry idempotent requests (GET). Most implementations reject 0-RTT for POST, PUT, DELETE. Cloudflare and other CDNs handle this at the edge, but if you're building server-side QUIC handling, you must implement replay protection.
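One way a server-side gate for 0-RTT data can look is sketched below. This is a minimal illustration, not a production design: it assumes each early-data packet carries a unique nonce derived from the session ticket, and it uses an in-memory set where a real deployment would need a time-bounded, distributed store.

```python
SAFE_METHODS = {"GET", "HEAD", "OPTIONS"}
seen_nonces = set()  # toy replay cache; production needs expiry + sharing

def accept_0rtt(method, ticket_nonce):
    """Accept 0-RTT early data only for safe methods and unseen nonces."""
    if method not in SAFE_METHODS:
        return False  # force unsafe methods to wait for the 1-RTT handshake
    if ticket_nonce in seen_nonces:
        return False  # same nonce seen before: likely a replayed packet
    seen_nonces.add(ticket_nonce)
    return True

print(accept_0rtt("GET", "nonce-1"))   # True: safe method, first sighting
print(accept_0rtt("GET", "nonce-1"))   # False: replay rejected
print(accept_0rtt("POST", "nonce-2"))  # False: unsafe method, needs 1-RTT
```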

Quiz
How many round trips does a brand-new QUIC connection require before application data can flow?

Connection Migration

This is the feature mobile users benefit from most, yet hardly anyone talks about it.

With TCP, a connection is identified by 4 values: source IP, source port, destination IP, destination port. If any of these change, the connection breaks. Reconnecting means a new TCP handshake and TLS handshake.

When does your IP change?

  • Switching from WiFi to cellular
  • Moving between WiFi access points
  • Cellular tower handoff
  • VPN connecting/disconnecting

With TCP, each of these kills your connection. With QUIC, the connection survives because QUIC uses a connection ID — a random identifier that both sides remember. When your IP changes, QUIC migrates the connection to the new path seamlessly.

TCP:   (192.168.1.5:43210 ↔ 93.184.216.34:443) = connection identity
       IP changes → connection dead → 2 RTT reconnection

QUIC:  Connection ID: 0x8a3f... = connection identity
       IP changes → same connection, new path → 0 RTT migration
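The lookup difference above can be shown with two toy dispatch tables. This is only an illustration of the keying scheme: real QUIC connection IDs are negotiated during the handshake and can rotate for privacy, which this sketch ignores.

```python
# Why a 4-tuple key breaks on an IP change but a connection-ID key survives.

tcp_conns  = {("192.168.1.5", 43210, "93.184.216.34", 443): "session-1"}
quic_conns = {"0x8a3f": "session-1"}

# Client moves from WiFi to cellular: new source IP and source port.
new_tuple = ("10.0.0.7", 51000, "93.184.216.34", 443)

print(tcp_conns.get(new_tuple))    # None: no matching connection, must reconnect
print(quic_conns.get("0x8a3f"))    # 'session-1': same connection, new path
```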

For a user on a train browsing your web app, connection migration means they don't get a spinner every time the cellular tower changes.

Browser and Server Support

HTTP/3 has wide adoption as of 2026:

Platform         HTTP/3 Support
Chrome           Since Chrome 87 (2020)
Firefox          Since Firefox 88 (2021)
Safari           Since Safari 16 (2022)
Edge             Since Edge 87 (follows Chrome)
Node.js          Experimental (via undici/quiche)
Cloudflare       Full support (enabled by default)
AWS CloudFront   Supported
Nginx            Supported since 1.25.0
Caddy            Built-in QUIC support
Graceful fallback is automatic

Browsers discover HTTP/3 support via the Alt-Svc header in an HTTP/2 response. The first visit to a site uses HTTP/2. If the server advertises QUIC support, subsequent connections use HTTP/3. If QUIC is blocked (some corporate firewalls block UDP), the browser falls back to HTTP/2 transparently. Users never see an error.

How HTTP/3 Negotiation Works

1. Browser connects via TCP + TLS (HTTP/2)
2. Server responds with: Alt-Svc: h3=":443"; ma=86400
3. Browser notes: "This server supports HTTP/3 on port 443"
4. Next request: Browser tries QUIC (UDP) to same server
5. If QUIC works → switch to HTTP/3
   If QUIC blocked → stay on HTTP/2

The Alt-Svc header's ma (max-age) value tells the browser how long to remember this preference. The browser races QUIC and TCP connections in parallel, using whichever succeeds first.
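A minimal parser for the header shown in step 2 looks like this. It handles only the simple `h3=":443"; ma=86400` shape from the example; the full Alt-Svc grammar in RFC 7838 is richer (multiple alternatives, quoted parameters), and 86400 seconds (24 hours) is the spec's fallback when `ma` is absent.

```python
import re

def parse_alt_svc(header):
    """Parse a simple Alt-Svc value like: h3=":443"; ma=86400"""
    m = re.match(r'(?P<proto>[\w-]+)="(?P<host>[^"]*):(?P<port>\d+)"', header)
    if not m:
        return None
    ma = re.search(r'ma=(\d+)', header)  # max-age: how long to remember it
    return {
        "protocol": m["proto"],
        "port": int(m["port"]),
        "max_age_s": int(ma.group(1)) if ma else 86400,  # RFC 7838 default
    }

print(parse_alt_svc('h3=":443"; ma=86400'))
# {'protocol': 'h3', 'port': 443, 'max_age_s': 86400}
```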

What HTTP/3 Keeps from HTTP/2

HTTP/3 isn't a complete rewrite of HTTP semantics. The HTTP layer is very similar to HTTP/2:

  • Same methods, headers, status codes — GET, POST, 200, 404 — all identical
  • Same binary framing concept — HEADERS and DATA frames, just encoded for QUIC
  • Multiplexed streams — same concept, but implemented at the QUIC transport layer
  • Header compression — QPACK replaces HPACK (QPACK handles out-of-order delivery)

The main difference is what happens below the HTTP layer. QUIC replaces TCP and integrates TLS. HTTP/3 frames ride on QUIC streams instead of TCP byte streams.

QPACK vs HPACK header compression

HTTP/2's HPACK compression requires headers to be decoded in order (because the dynamic table updates are sequential). This works on TCP's ordered byte stream. QUIC streams can arrive out of order, so HPACK would break. QPACK solves this by using a separate unidirectional stream for dynamic table updates. The encoder and decoder synchronize via this stream. QPACK achieves similar compression ratios to HPACK (80-90%) while being compatible with QUIC's out-of-order delivery.
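The ordering problem described above can be demonstrated with a toy decoder. This is not the real HPACK or QPACK wire format; it just models the core issue: header blocks that insert into a shared dynamic table and then reference entries by index only decode correctly if the inserts arrive in order.

```python
def decode(blocks):
    """Decode header blocks against a shared dynamic table, in arrival order."""
    table, headers = [], []
    for ops in blocks:
        for op, arg in ops:
            if op == "insert":
                table.append(arg)      # sequential dynamic-table update
            elif op == "ref":
                headers.append(table[arg])  # reference an entry by index
    return headers

block1 = [("insert", ("x-user", "alice")), ("ref", 0)]
block2 = [("insert", ("x-trace", "abc")), ("ref", 1)]

print(decode([block1, block2]))
# In order: [('x-user', 'alice'), ('x-trace', 'abc')]

try:
    decode([block2, block1])  # blocks swapped in transit, as QUIC allows
except IndexError:
    print("out-of-order delivery: reference to a table entry not yet inserted")
```

This is why QPACK moves dynamic-table updates onto their own ordered unidirectional stream: header blocks on other streams can then arrive in any order without dereferencing entries that don't exist yet.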

Quiz
Why does HTTP/3 use QPACK instead of HPACK for header compression?

Performance Impact in Practice

The benefits of HTTP/3 are most visible on:

High-latency connections — the 1 RTT savings (vs 2 for HTTP/2) is most impactful when RTT is high. At 200ms RTT, saving 1 RTT saves 200ms on every new connection.

Lossy networks — mobile connections with 1-5% packet loss see the biggest improvement from independent stream multiplexing. No more TCP HOL blocking stalling all streams.

Connection transitions — mobile users moving between networks benefit from connection migration. No reconnection penalty.

Return visits — 0-RTT resumption means returning users get instant data transfer.

On low-latency, low-loss wired connections, the difference between HTTP/2 and HTTP/3 is minimal.

Common Mistakes

Mistake: Thinking HTTP/3 requires you to rewrite your application code.
Fix: HTTP/3 changes how data moves over the wire, not what the data contains. Enabling HTTP/3 is a server/CDN configuration change, not an application code change. Your APIs, routes, and request/response formats stay exactly the same; your HTTP semantics (methods, headers, status codes) are identical.

Mistake: Assuming all networks support QUIC (UDP port 443).
Fix: QUIC runs on UDP, which some enterprise networks and corporate firewalls block as a security measure. Browsers handle this automatically by falling back to HTTP/2 over TCP, but you must keep HTTP/2 support active. Never make HTTP/3 the only option.

Mistake: Using 0-RTT for non-idempotent requests like POST with side effects.
Fix: 0-RTT data can be replayed by an attacker. If the replayed request is a money transfer or state mutation, it executes twice. Limit 0-RTT to GET and other safe, idempotent requests, or implement robust replay detection on the server.

Key Takeaways

Key Rules
  1. QUIC runs on UDP and provides independent stream multiplexing. A lost packet only blocks its own stream, eliminating TCP's head-of-line blocking.
  2. QUIC integrates TLS 1.3, reducing new connections to 1 RTT (vs 2 RTTs for TCP + TLS 1.3). Return visits can use 0-RTT.
  3. Connection migration uses connection IDs instead of IP addresses. Switching networks doesn't kill the connection.
  4. HTTP/3 negotiation is automatic via Alt-Svc headers. Browsers fall back to HTTP/2 if UDP is blocked.
  5. HTTP/3 benefits are most visible on high-latency, lossy, and mobile networks. On fast wired connections, the difference is minimal.