I have the code:
#!/usr/bin/perl
use strict;
use WWW::Mechanize;
my $url = 'http://divxsubtitles.net/page_subtitleinformation.php?ID=111292';
my $m = WWW::Mechanize->new();
$m->get( $url );
I tried your code and it returned a wall of HTML in which the only http:// references were:
http://www.w3c.org
http://ad.z5x.net
http://divxsubtitles.net
http://feeds2read.net
http://ad.z5x.net
http://www.google-analytics.com
http://cls.assoc-amazon.com
using the code:
my $content = $m->response->content();
while ( $content =~ m{(http://[^/" \t\n\r]+)}g ) {
print( "$1\n" );
}
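Note that the character class above excludes /, so it only ever captures the scheme and host, never a full URL with a path. There's also no need to hand-roll a regex at all: WWW::Mechanize already extracts the links for you. A minimal sketch, assuming $m holds the fetched page as above:

# Print the absolute URL of every link Mechanize found on the page.
for my $link ( $m->links() ) {
    print( $link->url_abs(), "\n" );
}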
So my comments to you are:
1. Add use strict; to your code; you are programming for failure if you don't.
2. Read the output HTML and determine what to do next. You haven't done that yet, so you've asked an incomplete question: until you identify the URL you want to download, you're asking somebody else to write the program for you. (See the sketch below for one way to narrow down the candidates.)
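For example, you can ask Mechanize for just the links whose URL matches a pattern. This is only a sketch, not a drop-in answer: qr/download/i is my guess at what the subtitle link might contain, so adjust it after reading the HTML yourself.

# find_all_links() accepts criteria such as url_regex.
my @candidates = $m->find_all_links( url_regex => qr/download/i );
for my $link ( @candidates ) {
    printf( "%s => %s\n", $link->text() || '', $link->url_abs() );
}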
Once you've identified the URL you want to download, it is a simple matter of getting it and then writing the response content to a file, e.g.
# Three-arg open with a lexical filehandle is safer than a bareword FOUT.
open( my $fout, '>', 'output.bin' )
    or die( "Could not create file: $!" );
binmode( $fout ); # required for binary files on Windows
print { $fout } $m->response->content();
close( $fout );
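If you don't need to inspect the response yourself, WWW::Mechanize also provides save_content(), which writes the current page's content to disk and handles binmode for you (check that your installed version has it). Assuming $download_url is the URL you identified in the HTML (a hypothetical name here):

$m->get( $download_url );          # fetch the file
$m->save_content( 'output.bin' );  # write it to disk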