When writing data to a web server, my tests show that HttpWebRequest.ReadWriteTimeout is ignored, contrary to the MSDN documentation. For example, if I set ReadWriteTimeout to 1 (= 1 msec), the write should fail with a timeout almost immediately, yet it never does.
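Here is a minimal sketch of the kind of test I am running (the URL is a placeholder; any server that drains the request body slowly will reproduce it):

using System;
using System.IO;
using System.Net;

class ReadWriteTimeoutRepro
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create("http://example.com/upload"); // placeholder URL
        request.Method = "POST";
        request.ReadWriteTimeout = 1; // 1 msec: per MSDN, reads and writes on the stream should time out

        using (Stream stream = request.GetRequestStream())
        {
            byte[] buffer = new byte[4 * 1024 * 1024];
            // Expected: an IOException/WebException after roughly 1 msec.
            // Observed: the write proceeds as if no timeout were set.
            stream.Write(buffer, 0, buffer.Length);
        }
    }
}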
There appears to be a bug where the write timeout, when set on the Stream instance returned by BeginGetRequestStream(), is not propagated down to the native socket. I will be filing a bug to make sure this issue is corrected in a future release of the .NET Framework.
In the meantime, here is a workaround. Note that it uses reflection to set an internal field, so it may stop working in a future framework version.
// Requires: using System; using System.IO; using System.Reflection;

private static bool s_requestStreamWriteTimeoutWorkaroundFailed;

private static void SetRequestStreamWriteTimeout(Stream requestStream, int timeout)
{
    // Work around a framework bug where the request stream write timeout doesn't make it
    // to the socket. The "m_Chunked" field indicates we are performing chunked reads. Since
    // this stream is being used for writes, the value of this field is irrelevant except
    // that setting it to true causes the Eof property on the ConnectStream object to
    // evaluate to false. The code responsible for setting the socket option short-circuits
    // when it sees Eof is true, and does not set the flag. If Eof is false, the write
    // timeout propagates to the native socket correctly.
    if (!s_requestStreamWriteTimeoutWorkaroundFailed)
    {
        try
        {
            Type connectStreamType = requestStream.GetType();
            FieldInfo fieldInfo = connectStreamType.GetField(
                "m_Chunked", BindingFlags.NonPublic | BindingFlags.Instance);
            if (fieldInfo != null)
            {
                fieldInfo.SetValue(requestStream, true);
            }
            else
            {
                // The internal field wasn't found (e.g. a different framework build);
                // don't keep retrying on every call.
                s_requestStreamWriteTimeoutWorkaroundFailed = true;
            }
        }
        catch (Exception)
        {
            // Reflection failed (e.g. insufficient permissions); fall back to the plain
            // WriteTimeout assignment below and stop attempting the workaround.
            s_requestStreamWriteTimeoutWorkaroundFailed = true;
        }
    }
    requestStream.WriteTimeout = timeout;
}
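For example, a sketch of how it might be wired in; the URL and payload here are placeholders, and Encoding comes from System.Text:

HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://example.com/upload");
request.Method = "POST";

using (Stream requestStream = request.GetRequestStream())
{
    // Apply the workaround before the first write so the timeout reaches the socket.
    SetRequestStreamWriteTimeout(requestStream, 5000); // 5-second write timeout

    byte[] payload = Encoding.UTF8.GetBytes("payload");
    requestStream.Write(payload, 0, payload.Length);
}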