I have a command-line PHP script that runs a wget request for each member of an array, using foreach. The wget request can sometimes take a long time, so I want to be able to set a time limit so the script doesn't hang on a single request.
Try using the wget command-line argument --timeout in addition to set_time_limit().
Keep in mind that set_time_limit(15) restarts the timeout counter from zero, so for your purpose don't call it inside the loop; call it once, before the loop.
from man wget:
--timeout=seconds
Set the network timeout to seconds seconds. This is equivalent to specifying --dns-timeout, --connect-timeout, and --read-timeout, all at the same time.
When interacting with the network, Wget can check for timeout and abort the operation if it takes too long. This prevents anomalies like hanging reads and infinite connects. The only timeout enabled by default is a 900-second read timeout. Setting a timeout to 0 disables it altogether. Unless you know what you are doing, it is best not to change the default time-out settings.
All timeout-related options accept decimal values, as well as subsecond values. For example, 0.1 seconds is a legal (though unwise) choice of timeout. Subsecond timeouts are useful for checking server response times or for testing network latency.
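A minimal sketch of that combination (the $urls array, the 15-second wget timeout, and the overall budget are placeholders, not taken from your question):

```php
<?php
// Set one overall budget for the whole script; calling set_time_limit()
// inside the loop would restart the counter on every iteration.
set_time_limit(120);

// Placeholder list of URLs.
$urls = ['http://example.com/a', 'http://example.com/b'];

foreach ($urls as $url) {
    // --timeout covers DNS, connect, and read timeouts; --tries=1 stops
    // retries from multiplying the wait.
    system('wget --timeout=15 --tries=1 ' . escapeshellarg($url), $exitCode);

    if ($exitCode !== 0) {
        echo "wget failed or timed out for $url (exit code $exitCode)\n";
    }
}
```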
EDIT: OK, I see what you're doing now. What you should probably do is use proc_open instead of system, and use the time() function to check the elapsed time, calling proc_terminate if wget takes too long.
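Roughly along these lines (the helper name, the 15-second limit, and the $urls list are just illustrative):

```php
<?php
// Illustrative helper: run wget under proc_open and kill it if it exceeds
// $maxSeconds. Output is sent to /dev/null so a full pipe can't block wget.
function run_wget_with_limit($url, $maxSeconds = 15)
{
    $descriptors = [
        1 => ['file', '/dev/null', 'w'],  // stdout
        2 => ['file', '/dev/null', 'w'],  // stderr
    ];

    $proc = proc_open('wget --tries=1 ' . escapeshellarg($url), $descriptors, $pipes);
    if (!is_resource($proc)) {
        return false;
    }

    $start = time();
    while (true) {
        $status = proc_get_status($proc);
        if (!$status['running']) {
            break;                    // wget finished on its own
        }
        if (time() - $start >= $maxSeconds) {
            proc_terminate($proc);    // taking too long: send SIGTERM
            break;
        }
        usleep(200000);               // poll every 0.2 s
    }

    return proc_close($proc);
}

// Placeholder usage.
$urls = ['http://example.com/a', 'http://example.com/b'];
foreach ($urls as $url) {
    run_wget_with_limit($url, 15);
}
```

proc_terminate() sends SIGTERM by default; if you need something stronger you can pass a different signal as its second argument.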