I need to download several files with wget
and measure download speed.
e.g. I download with
wget -O /dev/null http://ftp.bit.nl/pub/Open
Why can't you just do this:
wget ... 2>&1 | perl -ne '/^Downloaded.*?\((.*?)\)/ and print $1'
Here's a suggestion: you can make use of wget's
--limit-rate=amount
option. For example,
--limit-rate=400k
will limit the retrieval rate to 400 KB/s. Then it's easier for you to
calculate the total speed, and it saves you time and mental anguish trying to regex it.
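To illustrate the arithmetic this enables (the file sizes below are made up for the example): with each transfer capped at 400 KB/s, the total time is roughly the total size divided by the cap.

```shell
# Hypothetical sizes (in KB) of the files to be fetched; with
# --limit-rate=400k each download runs at close to 400 KB/s, so the
# overall time is approximately total size / cap.
rate_kb=400
total_kb=0
for size_kb in 8000 12000 20000; do
  total_kb=$((total_kb + size_kb))
done
echo "estimated total seconds: $((total_kb / rate_kb))"
# prints: estimated total seconds: 100
```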
Update: a grep-like version using sed:
wget ... 2>&1 | sed -n '$,$s/.*(\(.*\)).*/\1/p'
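To see what the filter does without a live download, here it is fed a canned two-line tail of wget's stderr (an assumed sample; the wording varies by wget version). The `$` address makes sed substitute and print only the last, summary line:

```shell
# Two assumed lines from the end of wget's stderr; the $-addressed
# substitution (with -n and /p) prints only the final summary line.
printf '%s\n' \
  '2024-01-01 12:00:00 (3.13 MB/s) - /dev/null saved' \
  'Downloaded: 3 files, 2.5M in 0.8s (3.13 MB/s)' |
sed -n '$,$s/.*(\(.*\)).*/\1/p'
# prints: 3.13 MB/s
```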
Old version:
I thought it's easier to divide the file size by the download time after the download. ;-)
(/usr/bin/time -p wget ... 2>&1 >/dev/null; ls -l newfile) | \
awk '
NR==1 {t=$2};
NR==4 {printf("rate=%f bytes/second\n", $5/t)}
'
The first awk line stores the elapsed real time from the "real xx.xx" line in variable t.
The second awk line divides the file size (column 5 of ls -l) by that time and outputs this as the rate.
This works when only 1 file is being downloaded.
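The awk part can be checked without a network by feeding it the four lines it expects: three from `time -p`, then one `ls -l` line. The timing and byte count below are invented for the demonstration:

```shell
# Simulated pipeline input: `time -p` writes real/user/sys (lines 1-3),
# then `ls -l newfile` contributes line 4, whose column 5 is the size.
printf '%s\n' \
  'real 2.00' \
  'user 0.10' \
  'sys 0.05' \
  '-rw-r--r-- 1 user user 1048576 Jan  1 12:00 newfile' |
awk '
  NR==1 {t=$2};
  NR==4 {printf("rate=%f bytes/second\n", $5/t)}
'
# prints: rate=524288.000000 bytes/second
```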
I started using sed
to get the speed from wget, but I found it irritating so I switched to grep.
This is my command:
wget ... 2>&1 | grep -o "[0-9.]\+ [KM]*B/s"
The -o
option makes grep print only the matching part. The pattern matches one or more digits or dots, then a space, then optionally K
or M
before B/s
. That will return 423 KB/s
(for example).
To grep for just the units, use grep -o "[KM]*B/s"
, and for just the number use grep -o "[0-9]\+"
.
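Run against a canned completion line (an assumed sample of wget's stderr format), the main pattern and the units-only variant behave like this:

```shell
# Assumed sample of the line wget prints when a download finishes.
line='2024-01-01 12:00:00 (423 KB/s) - saved [4096/4096]'

# Full speed: digits/dots, a space, optional K or M, then B/s.
printf '%s\n' "$line" | grep -o "[0-9.]\+ [KM]*B/s"
# prints: 423 KB/s

# Units only.
printf '%s\n' "$line" | grep -o "[KM]*B/s"
# prints: KB/s
```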
This worked for me, using your wget -O /dev/null <resource>
. The regex I used was \([0-9.]\+ [KM]B/s\)
, but note that I had to redirect stderr
onto stdout
, so the command was:
wget -O /dev/null http://example.com/index.html 2>&1 | grep '\([0-9.]\+ [KM]B/s\)'
This matches values like 923 KB/s
and 1.4 MB/s
grep just finds matches. To get the value(s) you can use sed instead:
wget -O /dev/null http://example.com/index.html 2>&1 |
sed -e 's|^.*(\([0-9.]\+ [KM]B/s\)).*$|\1|'
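As with the grep version, this can be sanity-checked against a canned line (an assumed sample; the real format depends on the wget version). The substitution replaces the whole line with just the captured speed:

```shell
# Assumed sample of wget's completion line on stderr; the substitution
# keeps only the group captured between the parentheses.
printf '%s\n' '2024-01-01 12:00:00 (1.4 MB/s) - /dev/null saved [4096]' |
sed -e 's|^.*(\([0-9.]\+ [KM]B/s\)).*$|\1|'
# prints: 1.4 MB/s
```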