Chunky requests

3 Jul 2013 at 2:20PM in Software

Why have webservers been so slow to accept chunked requests?


HTTP is, in general, a good protocol.

That is to say it’s not awful — in my experience of protocols, not being awful is a major achievement. Perhaps I’ve been unfairly biased by dealings in the past with the likes of SNMP, LDAP and almost any P2P file sharing protocol you can mention[1], but it does seem like they’ve all got some major annoyance somewhere along the line.

As you may already be aware, the current version of HTTP is 1.1 and this has been in use almost ubiquitously for over a decade. One of the handy features that was introduced in 1.1 over the older version 1.0 was chunked encoding. If you’re already familiar with it, skip the next three paragraphs.

HTTP requests and responses consist of a set of headers, which define information about the request or response, and then optionally a body. In the case of a response, the body is fairly obviously the file being requested, which could be HTML, image data or anything else. In the case of a request, the body is often omitted when performing a simple GET to download a file, but when doing a POST or PUT to upload data the body of the request typically contains the data being uploaded.

In HTTP, as in any protocol, the receiver of a message must be able to determine where the message ends. For a message with no body this is easy, as the headers follow a defined format and are terminated with a blank line. When a body is present, however, it can potentially contain any data so it’s not possible to specify a fixed terminator. Instead, the length of the body can be specified by adding a Content-Length header to the message — this indicates the number of bytes in the body, so once the receiving end has that many bytes of body data it knows the message is finished.
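
For illustration, a request framed with Content-Length might look something like this on the wire (the host, path and payload here are invented for the example; the body “hello world” is exactly 11 bytes):

```
POST /upload HTTP/1.1
Host: example.com
Content-Type: text/plain
Content-Length: 11

hello world
```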

Sending a Content-Length isn’t always convenient, however — for example, many web pages these days are dynamically generated by server-side applications and hence the size of the response isn’t necessarily known in advance. The response can be buffered up locally until it’s complete and then its size determined, a technique often called store and forward. However, this consumes additional memory on the sending side and increases the user-visible latency of the response, since it prevents a browser from fetching other resources referenced by the page in parallel with the remainder of the page itself. As of HTTP/1.1, therefore, a piece-wise method of encoding data known as chunked encoding was added. In this scheme, body data is split into variable-sized chunks and each individual chunk has a short header indicating its size followed by the data for that chunk. This means that only the size of each chunk need be known in advance and the sending side can use whatever chunk size is convenient[2].
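
To make the chunk format concrete, here’s a rough sketch in Python of how a sender might chunk-encode a body. The function name and default chunk size are just my own choices for illustration, not something from any particular library:

```python
def chunk_encode(data, chunk_size=4096):
    """Yield 'data' wrapped in HTTP/1.1 chunked transfer encoding."""
    for offset in range(0, len(data), chunk_size):
        chunk = data[offset:offset + chunk_size]
        # Each chunk is its length in hex, CRLF, the data, then CRLF.
        yield "%x\r\n%s\r\n" % (len(chunk), chunk)
    # A zero-length chunk marks the end of the body.
    yield "0\r\n\r\n"
```

So chunk_encode("Wikipedia", 4) would yield "4\r\nWiki\r\n", "4\r\npedi\r\n", "1\r\na\r\n" and finally "0\r\n\r\n"; the receiver can reassemble the body without anyone having declared its total size up front.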

So, chunked encoding is great — well, as long as it’s supported by both ends, that is. If you look at §3.6.1 of the HTTP RFC, however, it’s mandatory to support it — the key phrase is:

All HTTP/1.1 applications MUST be able to receive and decode the “chunked” transfer-coding […]

So, it’s safe to assume that every client, server and library supports it, right? Well, not quite, as it turns out.

In general, support for chunked encoding of responses is pretty good. Of course, there will always be the odd homebrew library here and there that doesn’t even care about RFC-compliance, but the major HTTP clients, servers and libraries all do a reasonable job of it.

Chunk-encoded requests, on the other hand, are a totally different kettle of fish[3]. For reasons I’ve never quite understood, support for chunk-encoded requests has always been patchy, despite the fact that a POST or PUT request can quite feasibly be as large as any response — for example, when uploading a large file. Sure, there isn’t the same latency argument, but you still don’t want to force the client to buffer up the whole request before sending it just for the sake of lazy programmers.

For example, the popular nginx webserver didn’t support chunk-encoded requests in its core until release 1.3.9, a little more than seven months ago — admittedly there was a plugin to do it in earlier versions. Another example I came across recently was that Python’s httplib module doesn’t support chunked requests at all, even if the user is willing to do the chunking — this doesn’t seem to have changed in the latest version at the time of writing. As it happens you can still do it yourself, as I recently explained to someone in a Stack Overflow answer, but you have to take care not to provide enough information for httplib to add its own Content-Length header — providing both that and chunked encoding is a no-no[5], although the chunked encoding should take precedence according to the RFC.
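
For the curious, the approach goes roughly like the sketch below, assuming Python 2’s httplib (the hostname, path and helper name are mine, purely for illustration). The trick is to use putrequest() and putheader() rather than request(), so httplib never sees the body and hence never works out a Content-Length for it:

```python
import httplib

def send_chunked_post(host, path, body_pieces):
    """Send a POST whose body is an iterable of strings, chunk-encoding
    each piece by hand since httplib won't do it for us."""
    conn = httplib.HTTPConnection(host)
    # Build the request line and headers ourselves so httplib never
    # sees a body and hence never adds its own Content-Length header.
    conn.putrequest('POST', path)
    conn.putheader('Transfer-Encoding', 'chunked')
    conn.putheader('Content-Type', 'application/octet-stream')
    conn.endheaders()
    for piece in body_pieces:
        if piece:
            # Length in hex, CRLF, data, CRLF for each chunk.
            conn.send('%x\r\n%s\r\n' % (len(piece), piece))
    # A zero-length chunk terminates the request body.
    conn.send('0\r\n\r\n')
    return conn.getresponse()
```

Called with something like send_chunked_post('example.com', '/upload', open('big.file')), it streams the file a piece at a time without ever holding the whole thing in memory.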

What really puzzles me is how such a fundamental (and mandatory!) part of the RFC can have been ignored for requests for so long. It’s almost as if these people throw their software together based on real-world use-cases and not by poring endlessly over the intricate details of the standards documents and shunning any involvement with third party implementations. I mean, what’s all this “real world” nonsense? Frankly, I think it’s simply despicable.

But on a more serious note, while I can entirely understand how people might think this sort of thing isn’t too important (and don’t even get me started on the lack of proper support for “100 Continue”[6]), it makes it a really serious pain when you want to write properly robust code which won’t consume huge amounts of memory even when it doesn’t know the size of a request in advance. If it were a tricky feature I could understand it, but I don’t reckon it can take more than 20 minutes to support, including unit tests. Heck, that Stack Overflow answer I wrote contains a pretty complete implementation and that took me about 5 minutes, albeit lacking tests.

So please, the next time you’re working on an HTTP client library, just take a few minutes to implement chunked requests properly. Your coding soul will shine that little bit brighter for it. Now, about that “100 Continue” support… OK, I’d better not push my luck.


  1. The notable exception being BitTorrent which stands head and shoulders above its peers. Ahaha. Ahem. 

  2. Although sending data in chunks that are too small can cause excessive overhead as this blog post illustrates. 

  3. Trust me, you don’t want a kettle of fish on your other hand, especially if it’s just boiled. Come to think of it, who cooks fish in a kettle, anyway?[4]

  4. Well, OK, I’m pedantic enough to note that kettle originally derives from ketill which is the Norse word for “cauldron” and it didn’t refer to the sort of closed vessel we now think of as a “kettle” when the phrase originated. I’m always spoiling my own fun. 

  5. See §4.4 of the HTTP RFC, item 3.

  6. Used at least by the Amazon S3 REST API.
