Solving “upstream sent too big header while reading response header from upstream” on Nginx

If you’ve used Nginx, you may have come across this error.

2019/04/12 06:43:24 [error] 11365#0: *257 upstream sent too big header while reading response header from upstream,
client: 127.0.0.1,
server: hiroki-example.com,
request: "POST /payment HTTP/1.1",
upstream: "fastcgi://127.0.0.1:9000",
host: "hiroki-example.com", referrer: "http://hiroki-example.com/payment"

I’ve seen this many times, but had never really tried to understand it deeply. In this post, I’ll explain what it indicates and how to solve it.

The solution

First of all, look at the value of upstream in the error message.

If it indicates “fastcgi”, chances are that the app interacting with Nginx via FastCGI sent back a response header that was too big for Nginx’s buffer, for example because of large cookies or framework-generated headers. (This is the case for the error message above.) The solution is to increase the buffers in nginx.conf.

e.g.

http {
    fastcgi_buffers 16 16k; 
    fastcgi_buffer_size 32k;
}
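For context, these directives can also be placed closer to where the FastCGI app is wired up, in a server or location block. Below is a hedged sketch assuming a typical PHP-FPM setup; the PHP location pattern and the include are assumptions on my part, but the fastcgi_pass address matches the one in the error message.

```nginx
server {
    server_name hiroki-example.com;

    location ~ \.php$ {
        # Pass the request to the FastCGI app from the error message.
        fastcgi_pass 127.0.0.1:9000;
        include fastcgi_params;

        # Allow larger response headers from the app:
        # one 32k buffer for the response header,
        # plus 16 buffers of 16k each for the response body.
        fastcgi_buffer_size 32k;
        fastcgi_buffers 16 16k;
    }
}
```

Note that fastcgi_buffer_size is the buffer that matters for this particular error, since it holds the first part of the response, which contains the header.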

If the app is proxied with “proxy_pass” in its location directive (i.e. the client sends a request to a server that Nginx proxies; I’ll show an example later), you can increase the proxy buffers instead.

e.g.

http {
    proxy_buffer_size 128k; 
    proxy_buffers 4 256k; 
    proxy_busy_buffers_size 256k;
}
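One caveat: Nginx validates the relationship between these three directives at startup, so they can’t be tuned independently. The values below are the same illustrative ones as above, with comments sketching the constraints as I understand them from the Nginx documentation.

```nginx
# proxy_busy_buffers_size must be at least as large as the bigger of
# proxy_buffer_size and a single buffer from proxy_buffers, and it must
# be smaller than the total size of proxy_buffers minus one buffer.
proxy_buffer_size       128k;   # buffer for the response header
proxy_buffers           4 256k; # 4 buffers of 256k each for the body
proxy_busy_buffers_size 256k;   # >= 256k, and < (4 - 1) * 256k
```

If the constraints are violated, Nginx refuses to start and reports the invalid combination, so it’s worth running a config test before reloading.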

Alternatively, you can disable proxy buffering entirely (Nginx will then stream the response to the client as it arrives).

http {
    proxy_buffering off;
}

These solutions can be found in various answers online, but the posters didn’t really explain what was happening behind the scenes. So I wrote this article to record how Nginx works.

What is “upstream” in the first place?

According to the docs, upstream is a directive used to define groups of servers that can be referenced by certain other directives.

Those directives are the following six:

  • proxy_pass
  • fastcgi_pass
  • uwsgi_pass
  • scgi_pass
  • memcached_pass
  • grpc_pass
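To make the “groups of servers” idea concrete, here is a hedged sketch of an upstream block referenced by proxy_pass; the server addresses are made up for illustration.

```nginx
http {
    # A named group of backend servers.
    upstream backend {
        server 10.0.0.1:8080;
        server 10.0.0.2:8080;
    }

    server {
        listen 80;

        location / {
            # proxy_pass can reference the group by its name;
            # Nginx then distributes requests across its servers.
            proxy_pass http://backend;
        }
    }
}
```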

At first, I didn’t understand anything from this explanation. I thought, “A group of servers? But I’m using only one server (i.e. my local machine)!” It turns out that Nginx is designed to deal with multiple servers. Why is that?

How does Nginx deal with multiple servers?

To understand why and how Nginx can interact with multiple servers, we first need to understand how Nginx receives requests from the client (and how it sends the resources back to the client).

Nginx is designed to handle requests concurrently and distribute them across multiple servers. In other words, Nginx proxies requests to servers.

(Diagram: Nginx acting as a load balancer between clients and servers, from https://www.booleanworld.com/configure-nginx-load-balancer/)

As you can see from the diagram, Nginx (shown as “Load balancer”) receives requests from multiple clients and passes them to the appropriate servers.

Those servers are not limited to your own app on a single machine. For example, you can pass requests to external websites with “proxy_pass”. Let’s assume that you’ve configured Nginx in the following way.

http {
    server {
        listen 8888;

        location / {
            return 200 "Blah";
        }

        location /youtube {
            proxy_pass 'https://www.youtube.com/';
        }
    }
}

When you visit “http://localhost:8888/youtube”, you’ll see www.youtube.com, even though you’re accessing localhost! Whereas if you visit “http://localhost:8888/”, your browser should just show “Blah”. That response comes from the server Nginx itself is running on.

Now we know that Nginx can pass requests to multiple servers and relay their responses back to the client, like a middleman. This model is called a reverse proxy, which is a type of proxy server.

What is a proxy server and why do we use it?

A proxy server is either a computer or a software system which exists between the client and the endpoint.

(Diagram: a proxy server sitting between the client and the endpoint, from https://www.youtube.com/watch?v=qU0PVSJCKcs)

Why do we use a proxy server? Because it can enhance security and performance. Specifically, proxy servers can…

  • Hide the client’s IP address (the destination server can’t identify it)
  • Block malicious traffic
  • Log activity
  • Improve performance by caching responses

Ok, then what is a reverse proxy?

A reverse proxy is a type of proxy server that retrieves resources from one or more servers on behalf of a client… just like Nginx does!

These resources are then returned to the client as if they originated from the web server itself. (We saw this in the YouTube example above.)

Where is the upstream?

Ok, now I can see how Nginx passes requests from the client to the server (and resources from the server back to the client).

Let’s review the error message. It says that…

  • upstream sent too big header while reading response header from upstream
  • upstream: “fastcgi://127.0.0.1:9000”

Now we know that upstream defines the servers which are referenced by several directives, including fastcgi_pass. My app is using fastcgi_pass, so the upstream here is the FastCGI app at 127.0.0.1:9000. In other words, the error indicates that this app sent back a response header that was too big for the buffer Nginx had allocated, which is exactly why increasing the buffer size solves it.
