I'm working on writing a program to download very large files (~2GB) from a server. I've written the program to be able to resume partially finished downloads, but sometimes the connection stalls mid-download and the read blocks indefinitely, so I need a reliable way to time out and resume.
See http://thushw.blogspot.com/2010/10/java-urlconnection-provides-no-fail.html for code that handles this situation.
Edited: actually, setting a socket timeout (in milliseconds) using setSoTimeout (as suggested in the comment from Joop Eggen) is probably better.
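Something like this minimal sketch (the host, port, and 30-second timeout are placeholders):

import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class SoTimeoutDemo {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("example.com", 80)) {
            socket.setSoTimeout(30000); // any blocking read now waits at most 30 s
            InputStream in = socket.getInputStream();
            byte[] buf = new byte[8192];
            try {
                in.read(buf); // throws SocketTimeoutException if no data arrives in time
            } catch (SocketTimeoutException e) {
                // stalled read detected; close and resume from the current offset
            }
        }
    }
}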
Have you tried calling .setReadTimeout(int timeout) on your URLConnection?
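If not, here is a minimal sketch of how it fits into a resumable download loop (the URL and the timeout values are placeholders):

import java.io.InputStream;
import java.net.SocketTimeoutException;
import java.net.URL;
import java.net.URLConnection;

public class ReadTimeoutDemo {
    public static void main(String[] args) throws Exception {
        URLConnection conn = new URL("http://example.com/big.file").openConnection();
        conn.setConnectTimeout(15000); // fail fast if the server is unreachable
        conn.setReadTimeout(30000);    // a read that stalls longer than 30 s throws
        try (InputStream in = conn.getInputStream()) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                // append buf[0..n) to the partially downloaded file
            }
        } catch (SocketTimeoutException e) {
            // the download stalled; record the offset and resume with a Range request
        }
    }
}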
-- EDIT
See the answer from @DNA for a neat solution:
In short, you can spawn a parallel thread that .disconnect()s the URLConnection (after letting that second thread sleep for your timeout in milliseconds), thus triggering an IOException that gets you out of the stalled read.
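A rough sketch of that idea (the URL and the 30-second budget are placeholders; a real watchdog would be re-armed after every successful read):

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class WatchdogDemo {
    public static void main(String[] args) throws Exception {
        final HttpURLConnection conn =
                (HttpURLConnection) new URL("http://example.com/big.file").openConnection();
        // Watchdog: after 30 s, forcibly disconnect, which makes the blocked
        // read below fail with an IOException instead of hanging forever.
        Thread watchdog = new Thread(new Runnable() {
            public void run() {
                try {
                    Thread.sleep(30000);
                    conn.disconnect();
                } catch (InterruptedException e) {
                    // the download finished in time; nothing to do
                }
            }
        });
        watchdog.setDaemon(true);
        watchdog.start();
        try (InputStream in = conn.getInputStream()) {
            byte[] buf = new byte[8192];
            while (in.read(buf) != -1) {
                // consume the stream
            }
        }
        watchdog.interrupt(); // finished before the deadline; cancel the watchdog
    }
}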
The only reliable way I found is to obtain the Socket from a SocketChannel, which implements InterruptibleChannel, and interrupt the stuck I/O thread. (BTW, you don't have to use asynchronous NIO calls with interruptible channels; blocking I/O works fine, you just get a really nice and uniform way of kicking stuck exchanges.)
However, it looks like URLConnection does not allow you to hook up a custom socket factory.
Maybe you should investigate Apache HttpClient.
EDIT
Here is how you create an interruptible Socket:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.SocketAddress;
import java.nio.channels.SocketChannel;

public class InterruptibleSocketFactory {
    // Returns a Socket backed by a SocketChannel; SocketChannel implements
    // InterruptibleChannel, so I/O blocked on it can be aborted by
    // interrupting the thread.
    public static Socket open(String serverAddress, int servicePort) throws IOException {
        final SocketAddress remoteAddr = new InetSocketAddress(serverAddress, servicePort);
        final SocketChannel socketChannel = SocketChannel.open();
        socketChannel.connect(remoteAddr);
        // Here the java.net.Socket view of the channel is obtained
        return socketChannel.socket();
    }
}
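To actually kick a stuck exchange, interrupt the thread that is blocked on the channel; the blocking read then aborts with a ClosedByInterruptException. A minimal sketch (the host, port, and 30-second budget are placeholders):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedByInterruptException;
import java.nio.channels.SocketChannel;

public class InterruptDemo {
    public static void main(String[] args) throws Exception {
        final SocketChannel ch =
                SocketChannel.open(new InetSocketAddress("example.com", 80));
        Thread ioThread = new Thread(new Runnable() {
            public void run() {
                try {
                    ch.read(ByteBuffer.allocate(8192)); // blocking, but interruptible
                } catch (ClosedByInterruptException e) {
                    // we were interrupted: the channel is closed, the read aborted
                } catch (IOException e) {
                    // some other I/O failure
                }
            }
        });
        ioThread.start();
        Thread.sleep(30000);  // timeout budget
        ioThread.interrupt(); // closes the channel and unblocks the read
        ioThread.join();
    }
}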
I don't have an HttpClient sample, but I know that you can customize socket initialization.
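As a rough sketch of what that could look like against the Apache HttpClient 4.x API (the override just hands out channel-backed, and therefore interruptible, sockets; only plain http is registered here):

import java.io.IOException;
import java.net.Socket;
import java.nio.channels.SocketChannel;

import org.apache.http.config.Registry;
import org.apache.http.config.RegistryBuilder;
import org.apache.http.conn.socket.ConnectionSocketFactory;
import org.apache.http.conn.socket.PlainConnectionSocketFactory;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.apache.http.protocol.HttpContext;

public class ChannelBackedHttpClient {
    public static CloseableHttpClient build() {
        // Socket factory that creates channel-backed sockets
        ConnectionSocketFactory plain = new PlainConnectionSocketFactory() {
            @Override
            public Socket createSocket(HttpContext context) throws IOException {
                return SocketChannel.open().socket();
            }
        };
        Registry<ConnectionSocketFactory> registry =
                RegistryBuilder.<ConnectionSocketFactory>create()
                        .register("http", plain)
                        .build();
        return HttpClients.custom()
                .setConnectionManager(new PoolingHttpClientConnectionManager(registry))
                .build();
    }
}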