If I set up Tomcat and stream a static file from it, I've noticed that if the client "pauses" (stops receiving) on that socket for anything > 20s, then Tomcat appears to sever the connection.
The only thing that affected this "inactivity timeout" appears to be the
<Connector port="8080" ... connectionTimeout="30000" />
setting.
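For reference, a fuller sketch of where that lives in conf/server.xml (the port, protocol, and redirectPort shown are just the stock defaults, not something specific to this issue); the value is in milliseconds, and raising it was the only knob I found that moves the write timeout along with it:

```xml
<!-- conf/server.xml: connectionTimeout is in milliseconds.
     Raising it also raises the (undocumented) write timeout,
     since the latter defaults to this value. -->
<Connector port="8080"
           protocol="HTTP/1.1"
           connectionTimeout="30000"
           redirectPort="8443" />
```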
And it only kicks in when Tomcat is actively trying to send data onto the wire but can't, either because the client is refusing it or because the connection has been lost. If the servlet is just busy doing CPU work in the background, and its writes do make it onto the wire (i.e. are received by the client or buffered by the kernel), there's no problem: the connection can stay open well past connectionTimeout, so it's not that.
My hunch is that Tomcat has a "built-in" (undocumented? not separately configurable?) write timeout, which defaults to the connectionTimeout value, e.g. (from the Tomcat source, randomly selected):
java/org/apache/tomcat/util/net/NioEndpoint.java
625: ka.setWriteTimeout(getConnectionTimeout());
Now whether this is "bad" or not is open to interpretation. In practice, this "severing" of the connection by Tomcat occurs after either the TCP channel has been disrupted somehow (enough to stop the transfer) or the client is blocking on receiving the bytes, FWIW...
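This isn't Tomcat's actual code path, but here's a minimal standalone sketch (class and method names are mine) of the mechanism that makes a write timeout necessary in the first place: with a non-blocking socket, once the peer stops reading and the kernel's send and receive buffers fill up, write() starts returning 0, and the endpoint has to decide how long to wait for the socket to become writable again. That wait is what setWriteTimeout bounds.

```java
import java.net.InetSocketAddress;
import java.net.Socket;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class WriteStall {

    // Writes to a peer that never reads, until the kernel's socket buffers
    // fill up. Returns the number of bytes accepted before write() reported
    // 0 (i.e. "not currently writable").
    static long writeUntilStall() throws Exception {
        try (ServerSocketChannel server = ServerSocketChannel.open()) {
            server.bind(new InetSocketAddress("127.0.0.1", 0));
            int port = ((InetSocketAddress) server.getLocalAddress()).getPort();
            try (Socket client = new Socket("127.0.0.1", port); // never reads
                 SocketChannel conn = server.accept()) {
                conn.configureBlocking(false); // like Tomcat's NIO connector
                ByteBuffer buf = ByteBuffer.allocate(64 * 1024);
                long total = 0;
                int n;
                do {
                    buf.clear();         // pretend we have fresh data each round
                    n = conn.write(buf); // returns 0 once the buffers are full
                    total += n;
                } while (n > 0);
                return total;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("bytes accepted before stall: " + writeUntilStall());
    }
}
```

At the point where the loop exits, a blocking server has no portable way to time the wait out at the socket level (SO_TIMEOUT only covers reads), which is presumably why Tomcat tracks a write timeout itself per connection and falls back to connectionTimeout for its value.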
FWIW, the connectionTimeout setting affects many things:
The total amount of time it takes to receive an HTTP GET request.
The total amount of time between receipt of TCP packets on a POST or PUT request.
The total amount of time between ACKs on transmissions of TCP packets in responses.
and now apparently also a writeTimeout.
End result: we had a flaky network, so these are "expected" timeouts/severed connections (via a config setting with a different name, LOL).