
Raise default HTTP/2 receive windows and batch HTTP/2 receive-window refills#481

Open
ericmj wants to merge 4 commits into main from ericmj/http2-larger-default-windows

Conversation


ericmj commented Apr 13, 2026

Builds on top of #480; merge that PR first, then rebase this one on top.

Raise default HTTP/2 receive windows

Default connection receive window is now 16 MB (was 65_535, the RFC
7540 §6.9.2 default), sent via a WINDOW_UPDATE on stream 0 as part of
the connection preface. Default stream receive window is now 4 MB (was
65_535), advertised via SETTINGS_INITIAL_WINDOW_SIZE in the same
preface. Both settable via the new :connection_window_size option
and the existing :client_settings option.

Window size / RTT sets a hard cap on per-stream throughput. At the
previous 65_535-byte stream window:

  Path (typical RTT)       | 65 KB    | 4 MB     | 16 MB
  -------------------------|----------|----------|----------
  LAN (1 ms)               | 62 MB/s  | 4 GB/s   | 16 GB/s
  Region (20 ms)           | 3.1 MB/s | 200 MB/s | 800 MB/s
  Cross-country (70 ms)    | 0.9 MB/s | 57 MB/s  | 229 MB/s
  Transatlantic (100 ms)   | 0.6 MB/s | 40 MB/s  | 160 MB/s
  Transpacific (130 ms)    | 0.5 MB/s | 31 MB/s  | 123 MB/s
  Antipodal (230 ms)       | 0.3 MB/s | 17 MB/s  | 70 MB/s

Any caller talking to a server more than a few milliseconds away was
bottlenecked well below their link bandwidth without knowing why. 4 MB
per stream saturates gigabit anywhere on earth; 16 MB at the connection
level lets four streams run in parallel at full rate before the shared
pool binds.
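The table rows follow directly from the window-size-over-RTT cap; a quick arithmetic sketch (the table's MB/s figures are binary, i.e. MiB/s):

```elixir
# Per-stream throughput cap = receive window / round-trip time.
# E.g. the old 65_535-byte window over a 20 ms regional RTT caps out
# at ~3.1 MiB/s; a 4 MiB window over a 100 ms transatlantic RTT at 40 MiB/s.
cap_mib_s = fn window_bytes, rtt_seconds ->
  window_bytes / rtt_seconds / (1024 * 1024)
end

cap_mib_s.(65_535, 0.020)          # Region row, 65 KB column: ~3.1
cap_mib_s.(4 * 1024 * 1024, 0.100) # Transatlantic row, 4 MB column: 40.0
```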

For comparison, Go's net/http2 uses 1 GB / 4 MB (conn/stream) and gun
uses 8 MB / 8 MB. 16 MB / 4 MB is roughly in the same family, with the
ratio chosen so conn is not the bottleneck for typical parallel use.

Callers who want the old behaviour can pass `connection_window_size: 65_535`
and `client_settings: [initial_window_size: 65_535]` to `connect/4`.
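For example, restoring the old behaviour (host and port are placeholders; the options are the ones described above):

```elixir
# Placeholder endpoint; opts are the connection-window options from this PR.
{:ok, conn} =
  Mint.HTTP.connect(:https, "example.com", 443,
    protocols: [:http2],
    connection_window_size: 65_535,
    client_settings: [initial_window_size: 65_535]
  )
```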

Batch HTTP/2 receive-window refills

Previously refill_client_windows/3 sent a WINDOW_UPDATE on both the
connection and the stream after every DATA frame, with the increment
set to the frame's byte size. That kept the advertised window pinned
at its peak but tied outbound WINDOW_UPDATE traffic one-to-one with
inbound DATA frames.

An adversarial server can exploit that ratio. By sending many small
DATA frames — in the limit, one byte of body per frame — it can force
the client to emit one 13-byte WINDOW_UPDATE per frame. At high frame
rates that's a small but real client-side amplification: a flood of
outbound control frames driven entirely by the peer.

This change gates refills on a threshold. The client tracks the
current remaining window for the connection and each stream and only
sends a WINDOW_UPDATE once that remaining drops to
:receive_window_update_threshold bytes. The update then tops the window
straight back up to its configured peak. One frame per
receive_window_size - receive_window_update_threshold bytes consumed, not
per DATA frame. The default threshold is 160_000 bytes, matching
gun's connection_window_update_threshold — roughly 10× the default
16 KB max frame size, leaving the server a safety margin before the
window would starve it.
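The gating above can be sketched roughly like this (a minimal illustration, not Mint's actual internals; the module and function names are invented):

```elixir
# Illustrative sketch of threshold-gated window refills. Tracks the
# remaining receive window after a DATA frame and decides whether to
# emit a WINDOW_UPDATE that tops the window back up to `peak`.
defmodule RefillSketch do
  # Returns {:refill, increment, new_remaining} or {:noop, remaining}.
  def on_data(remaining, frame_bytes, peak, threshold) do
    remaining = remaining - frame_bytes

    if remaining <= threshold do
      # Refill: advertise enough to restore the configured peak.
      {:refill, peak - remaining, peak}
    else
      # Still above the threshold: no outbound frame.
      {:noop, remaining}
    end
  end
end
```

With a 4 MiB peak and the 160_000-byte default threshold, a refill fires roughly once per `peak - threshold` bytes consumed rather than once per DATA frame.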

Behaviour-wise:

  • With the new 4 MB / 16 MB default windows, the client sends
    roughly one stream-level WINDOW_UPDATE per ~3.84 MB consumed
    (previously ~250 per 4 MB), and one connection-level update per
    ~15.84 MB (previously ~1000 per 16 MB).
  • Callers that explicitly set the stream or connection window down
    to the 65_535 spec default get the old behaviour — one refill per
    frame — because remaining is always below the default 160_000
    threshold.
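The cadence in the first bullet is just the configured peak minus the threshold:

```elixir
# Bytes consumed between WINDOW_UPDATEs = configured peak - threshold.
bytes_per_update = fn peak, threshold -> peak - threshold end

bytes_per_update.(4 * 1024 * 1024, 160_000)  # stream: 4_034_304 (the ~3.84 MB above)
bytes_per_update.(16 * 1024 * 1024, 160_000) # connection: 16_617_216 (~15.84 MB)
```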

The threshold is tunable via the new :receive_window_update_threshold
option to Mint.HTTP.connect/4.

Closes #432.

ericmj added 4 commits April 12, 2026 15:47
Advertises a larger HTTP/2 receive window (connection-level or
per-stream) by sending a WINDOW_UPDATE frame. Needed because RFC 7540
makes the connection-level initial window tunable only via
WINDOW_UPDATE — not SETTINGS — leaving the spec default of 64 KB as
the only reachable value without an API like this.

In hex's `mix deps.get` — many parallel multi-MB tarball downloads
sharing one HTTP/2 connection — raising the connection window from
64 KB to 8 MB via this function drops 10 runs from 32.7s to 29.2s
(10.8%), matching their HTTP/1 pool.

Deliberately asymmetric with get_window_size/2 (which returns the
client *send* window). Docstrings on both carry warning callouts
spelling out send-vs-receive so callers don't assume they round-trip.

Target is :connection or {:request, ref}; grow-only (shrink attempts
return {:error, conn, %HTTPError{reason: :window_size_too_small}});
new_size validated against 1..2^31-1. Tracks the advertised peak on
new receive_window_size fields on the connection and stream.
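A hedged sketch of that grow-only validation (the module, function, and the `:invalid_window_size` reason are hypothetical; only `:window_size_too_small` and the 1..2^31-1 range appear in the PR):

```elixir
# Hypothetical sketch of grow-only window validation; the real check
# lives inside Mint and is not named in this PR.
defmodule GrowOnlySketch do
  @max_window_size 2_147_483_647 # 2^31 - 1, the HTTP/2 flow-control cap

  # Out-of-range sizes are rejected (reason name here is illustrative).
  def validate(_current, new_size) when new_size not in 1..@max_window_size,
    do: {:error, :invalid_window_size}

  # Shrink attempts are rejected with the reason from the PR.
  def validate(current, new_size) when new_size < current,
    do: {:error, :window_size_too_small}

  def validate(_current, _new_size), do: :ok
end
```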

The connection and stream structs tracked a `window_size` field for
the client's outbound (send) window and a separately-named
`receive_window_size` field for the inbound window. Renaming the
former to `send_window_size` makes the pair symmetric and removes a
long-standing source of confusion about which direction a bare
`window_size` refers to.

Default connection receive window is now 16 MB (was 65_535), sent via
a WINDOW_UPDATE on stream 0 as part of the connection preface. Default
stream receive window is now 4 MB (was 65_535), advertised via
SETTINGS_INITIAL_WINDOW_SIZE in the same preface. Both settable via
the new `:connection_window_size` option and the existing
`:client_settings` option.

Window size / RTT sets a hard cap on per-stream throughput. At the
previous 65_535-byte stream window:

  Path (typical RTT)       | 65 KB    | 4 MB     | 16 MB
  -------------------------|----------|----------|----------
  LAN (1 ms)               | 62 MB/s  | 4 GB/s   | 16 GB/s
  Region (20 ms)           | 3.1 MB/s | 200 MB/s | 800 MB/s
  Cross-country (70 ms)    | 0.9 MB/s | 57 MB/s  | 229 MB/s
  Transatlantic (100 ms)   | 0.6 MB/s | 40 MB/s  | 160 MB/s
  Transpacific (130 ms)    | 0.5 MB/s | 31 MB/s  | 123 MB/s
  Antipodal (230 ms)       | 0.3 MB/s | 17 MB/s  | 70 MB/s

Any caller talking to a server more than a few milliseconds away was
bottlenecked well below their link bandwidth without knowing why. 4 MB
per stream saturates gigabit anywhere on earth; 16 MB at the connection
level lets four streams run in parallel at full rate before the shared
pool binds.

Callers who want the old behaviour can pass `connection_window_size:
65_535` and `client_settings: [initial_window_size: 65_535]` to
`connect/4`.

Previously `refill_client_windows/3` sent a WINDOW_UPDATE on both the
connection and the stream after every DATA frame, with the increment
set to the frame's byte size. That kept the advertised window pinned
at its peak but tied outbound WINDOW_UPDATE traffic one-to-one with
inbound DATA frames.

An adversarial server can exploit that ratio. By sending many small
DATA frames — in the limit, one byte of body per frame — it can force
the client to emit one 13-byte WINDOW_UPDATE per frame. At high frame
rates that's a small but real client-side amplification: a flood of
outbound control frames driven entirely by the peer.

This change gates refills on a threshold. The client tracks the
current remaining window for the connection and each stream and only
sends a WINDOW_UPDATE once that remaining drops to
`:receive_window_update_threshold` bytes. The update then tops the
window straight back up to its configured peak. One frame per
`receive_window_size - receive_window_update_threshold` bytes
consumed, not per DATA frame. The default threshold is 160_000 bytes
— roughly 10× the default 16 KB max frame size, leaving the server a
safety margin before the window would starve it.

Behaviour-wise:

  * With the 4 MB / 16 MB default windows, the client sends roughly
    one stream-level WINDOW_UPDATE per ~3.84 MB consumed (previously
    ~250 per 4 MB), and one connection-level update per ~15.84 MB
    (previously ~1000 per 16 MB).
  * Callers that explicitly set the stream or connection window down
    to the 65_535 spec default get the old behaviour — one refill per
    frame — because remaining is always below the default 160_000
    threshold.

The threshold is tunable via the new `:receive_window_update_threshold`
option to `Mint.HTTP.connect/4`.


Development

Successfully merging this pull request may close these issues.

Avoid unnecessary WINDOW_UPDATE frames in HTTP/2 client
