Unbounded decompression chain in HTTP responses on Node.js Fetch API via Content-Encoding leads to resource exhaustion

Disclosed: 2026-02-23 16:02:30 by illia-v to nodejs
Vulnerability Details
**Summary:** Unbounded number of links in the decompression chain for HTTP responses in the Node.js Fetch API.

**Description:** The Fetch API supports chained HTTP encoding algorithms for response content according to RFC 9110 (e.g., `Content-Encoding: gzip, br`). However, the number of links in the decompression chain is unbounded, and the default `maxHeaderSize` allows a malicious server to insert thousands of compression steps, leading to high CPU usage and excessive memory allocation.

## Steps To Reproduce

Run the scripts from the attachments:

1. `BYTES=50000 LAYERS=5000 node server.js` to serve 50 KB of raw data compressed with Brotli 5000 times
2. `node client.js` to make a request whose response content will be decompressed automatically

The following output shows how a response with a modest 60892 bytes of compressed data can load the requesting machine for minutes. Increasing the number of bytes and the Brotli level is expected to make the resource exhaustion even more severe.

```
$ BYTES=50000 LAYERS=5000 node server.js
Brotli stack server listening on http://localhost:3000/ (layers=5000, raw bytes=50000)
Raw content sample: Uint8Array(5) [ 4, 68, 220, 90, 43 ]
Served / (raw bytes=50000, encoded bytes=60892, layers=5000)
```

```
$ /usr/bin/time -v node client.js
Status: 200
Content-Encoding chain links: 5000
Decoded body sample: Uint8Array(5) [ 4, 68, 220, 90, 43 ]
	Command being timed: "node client.js"
	User time (seconds): 248.31
	System time (seconds): 229.12
	Percent of CPU this job got: 186%
	Elapsed (wall clock) time (h:mm:ss or m:ss): 4:15.58
	...
	Maximum resident set size (kbytes): 712332
	...
	Voluntary context switches: 31856426
	Involuntary context switches: 997
```

## Impact

CWE-770 Allocation of Resources Without Limits or Throttling

## Supporting Material/References

Similar issues for urllib3 and curl:

- https://github.com/advisories/GHSA-gm62-xv2j-4w53
- https://curl.se/docs/CVE-2022-32206.html

Both libraries hard-coded the maximum allowed number of encodings for a response to 5.

I disclose that I was the reporter and the remediation developer for the urllib3 issue. I would like to work on the patch for Node.js, potentially to qualify for [a patch reward](https://bughunters.google.com/open-source-security/patch-rewards) from Google.

Here is the initial patch, which can be supplemented with unit tests if the report is accepted:

```diff
index a0dd75df7a5..2e5edef764d 100644
--- a/deps/undici/src/lib/web/fetch/index.js
+++ b/deps/undici/src/lib/web/fetch/index.js
@@ -2128,6 +2128,10 @@ async function httpNetworkFetch (
         // "All content-coding values are case-insensitive..."
         /** @type {string[]} */
         const codings = contentEncoding ? contentEncoding.toLowerCase().split(',') : []
+        if (codings.length > 5) {
+          reject(new Error(`too many content-encodings ${codings.length}, maximum allowed is 5.`))
+          return true
+        }
         for (let i = codings.length - 1; i >= 0; --i) {
           const coding = codings[i].trim()
           // https://www.rfc-editor.org/rfc/rfc9112.html#section-7.2
```
Report Stats
  • Report ID: 3456148
  • State: Closed
  • Substate: resolved
  • Upvotes: 2