www-mechanize

Force www. and https in nginx.conf (SSL)

孤街醉人 submitted on 2020-06-09 12:08:43
Question: After purchasing an SSL certificate I have been trying to force all pages to secure https and to www. https://www.example.com is working and secure, but only if I type it in exactly; www.example.com and example.com still point to http. We use nginx as a proxy and need to put the rewrite there. I have SSH / root access via PuTTY, and I have opened nginx.conf. Now what? Do I input the nginx commands on this page, starting where the cursor is? Any command lines first?
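For reference, a minimal nginx sketch of the canonical-host redirect being asked about (the certificate paths and hostnames are placeholders; adapt them to the real setup):

    # Catch plain-HTTP requests for both hostnames and redirect
    # them to the canonical https://www host.
    server {
        listen 80;
        server_name example.com www.example.com;
        return 301 https://www.example.com$request_uri;
    }

    # Catch HTTPS requests for the bare domain and redirect to www.
    server {
        listen 443 ssl;
        server_name example.com;
        ssl_certificate     /etc/nginx/ssl/example.com.crt;   # assumed path
        ssl_certificate_key /etc/nginx/ssl/example.com.key;   # assumed path
        return 301 https://www.example.com$request_uri;
    }

    # The site itself is served only from the www host.
    server {
        listen 443 ssl;
        server_name www.example.com;
        ssl_certificate     /etc/nginx/ssl/example.com.crt;   # assumed path
        ssl_certificate_key /etc/nginx/ssl/example.com.key;   # assumed path
        # ... proxy_pass / root directives for the application ...
    }

After editing, nginx -t validates the configuration and nginx -s reload applies it without dropping connections.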

Perl: how to avoid diagnostic messages from modules that are not directly included?

五迷三道 submitted on 2020-01-25 08:28:05
Question: I'm getting this warning (after "use diagnostics;"): Parsing of undecoded UTF-8 will give garbage when decoding entities at /usr/lib/perl5/HTML/PullParser.pm line 81. My program is like this: ... use diagnostics; use WWW::Mechanize; use WWW::Mechanize::GZip; ... $m = WWW::Mechanize::GZip->new( agent => $self->{_agent}, timeout => $self->{_timeout}, ); if (!$m->get($url)) { die("Impossibile scaricare l'url [$url]"); } if (!$m->form_number(1)) { die("Impossibile trovare il form 1"); } …
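One workaround, sketched below, is to filter that specific message with a $SIG{__WARN__} handler so other diagnostics still get through. This silences the symptom rather than fixing the underlying encoding issue, and the URL is a placeholder:

    use strict;
    use warnings;
    use WWW::Mechanize;

    # Suppress only the known-noisy message from HTML::PullParser;
    # every other warning is re-emitted unchanged.
    local $SIG{__WARN__} = sub {
        my ($msg) = @_;
        warn $msg unless $msg =~ /Parsing of undecoded UTF-8/;
    };

    my $m = WWW::Mechanize->new();
    $m->get('http://example.com/');   # placeholder URL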

Why can't WWW::Mechanize find the right form?

可紊 submitted on 2020-01-25 05:39:26
Question: I'm using WWW::Mechanize to retrieve a form from a webpage: #!/usr/bin/perl use WWW::Mechanize; my $mechanize = WWW::Mechanize->new(); $mechanize->proxy(['http', 'ftp'], 'http://proxy/'); $mechanize->get("http://www.temp.com/"); $mechanize->form_id('signin'); The website HTML contains <form action="https://www.temp.com/session" id="signin" method="post">, but I get the error: There is no form with ID "signin" at SiteScraper.pl. What do I do? Answer 1: Without knowing exactly could be wrong…
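A quick way to diagnose this kind of mismatch is to dump every form WWW::Mechanize actually parsed; if the 'signin' form is injected by JavaScript, it will not appear here, because Mechanize does not execute scripts. A sketch:

    use strict;
    use warnings;
    use WWW::Mechanize;

    my $mech = WWW::Mechanize->new();
    $mech->get('http://www.temp.com/');

    # List the id and action of every form Mechanize can see.
    for my $form ($mech->forms) {
        printf "id=%s action=%s\n",
            $form->attr('id') // '(none)',
            $form->action;
    }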

Visit Half a Million Pages with Perl

ぃ、小莉子 submitted on 2020-01-13 05:20:07
Question: Currently I'm using Mechanize and the get() method to get each site, and checking each main page with the content() method for something. I have a very fast computer and a 10Mbit connection, and still it took 9 hours to check 11K sites, which is not acceptable. The problem is the speed of the get() function, which, obviously, needs to fetch the page. Is there any way to make it faster, maybe by disabling something, as I only need the main page HTML to be checked. Thanks. Answer 1: Make queries in parallel instead…
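A sketch of the parallel approach using Parallel::ForkManager (an assumed dependency; the worker count and match pattern are placeholders):

    use strict;
    use warnings;
    use Parallel::ForkManager;
    use WWW::Mechanize;

    my @sites = map { chomp; $_ } <STDIN>;        # one URL per line on stdin
    my $pm    = Parallel::ForkManager->new(20);   # 20 concurrent workers

    for my $url (@sites) {
        $pm->start and next;                      # parent moves on; child fetches
        my $mech = WWW::Mechanize->new(timeout => 10, autocheck => 0);
        my $res  = $mech->get($url);
        print "$url matched\n"
            if $res->is_success && $mech->content =~ /pattern/;  # placeholder pattern
        $pm->finish;                              # child exits here
    }
    $pm->wait_all_children;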

Why does WWW::Mechanize GET certain pages but not others?

断了今生、忘了曾经 submitted on 2020-01-04 04:21:11
Question: I'm new to Perl/HTML things. I'm trying to use $mech->get($url) to get something from a periodic table on http://en.wikipedia.org/wiki/Periodic_table, but it keeps returning an error message like this: Error GETing http://en.wikipedia.org/wiki/Periodic_table: Forbidden at PeriodicTable.pl line 13. But $mech->get($url) works fine if $url is http://search.cpan.org/. Any help will be much appreciated! Here is my code: #!/usr/bin/perl -w use strict; use warnings; use WWW::Mechanize; use HTML:…
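A likely cause: Wikipedia rejects the default libwww-perl User-Agent with 403 Forbidden. A sketch that identifies the script with its own agent string (the string itself is a placeholder; use real contact details for polite scraping):

    use strict;
    use warnings;
    use WWW::Mechanize;

    my $mech = WWW::Mechanize->new(
        agent => 'PeriodicTableScraper/0.1 (contact: you@example.com)',  # placeholder UA
    );
    $mech->get('http://en.wikipedia.org/wiki/Periodic_table');
    print $mech->title, "\n";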

How can I download a file using WWW::Mechanize or any Perl module?

泄露秘密 submitted on 2020-01-02 12:38:52
Question: Is there a way in WWW::Mechanize or any Perl module to read a file after accessing a website? For example, I click a 'Receive' button and a file (.txt) appears containing a message. How will I be able to read its content? Answers are very much appreciated; I've been working on this for days and have tried all the possibilities. Can anyone help? If you can give me an idea please? :) Here is a part of my code: ... my $username = "admin"; my $password = "12345"; my $url =…
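A sketch of one way to do it, assuming 'Receive' is an ordinary link or submit control on the page (the URL, field names, and link text here are guesses, not taken from the question):

    use strict;
    use warnings;
    use WWW::Mechanize;

    my $mech = WWW::Mechanize->new();
    $mech->get('http://example.com/login');      # placeholder URL
    $mech->submit_form(
        with_fields => { username => 'admin', password => '12345' },
    );

    # Following the link makes the .txt the current page;
    # save_content dumps the raw body to disk.
    $mech->follow_link(text => 'Receive');
    $mech->save_content('received.txt');
    print $mech->content;                        # or read it directly in memory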

How can I add a progress bar to WWW::Mechanize?

我的未来我决定 submitted on 2020-01-02 02:41:05
Question: I have the following code: $mech->get($someurl, ":content_file" => "$i.flv"); So I'm getting the contents of a URL and saving it as an .flv file. I'd like to print out, every second or so, how much of the download remains. Is there any way to accomplish this in WWW::Mechanize? Answer 1: WWW::Mechanize says that the get method is a "well-behaved" overload of LWP::UserAgent's get. Looking at the docs for LWP::UserAgent, you can provide a :content_cb key which is called with each chunk of the…
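A sketch of what that looks like in practice (the URL and filename are placeholders). Note that with :content_cb the body is handed to the callback instead of being stored, so the callback must write the file itself:

    use strict;
    use warnings;
    use WWW::Mechanize;

    my $mech     = WWW::Mechanize->new();
    my $received = 0;
    open my $fh, '>:raw', 'video.flv' or die "open: $!";

    $mech->get(
        'http://example.com/video.flv',
        ':content_cb' => sub {
            my ($data, $response) = @_;
            print {$fh} $data;                   # write this chunk to disk
            $received += length $data;
            my $total = $response->header('Content-Length') // '?';
            printf STDERR "\r%d / %s bytes", $received, $total;
        },
    );
    close $fh;
    print STDERR "\n";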

Is there a PHP equivalent of Perl's WWW::Mechanize?

巧了我就是萌 submitted on 2019-12-27 10:46:31
Question: I'm looking for a library that has functionality similar to Perl's WWW::Mechanize, but for PHP. Basically, it should allow me to submit HTTP GET and POST requests with a simple syntax, and then parse the resulting page and return, in a simple format, all forms and their fields, along with all the links on the page. I know about cURL, but it's a little too barebones and the syntax is pretty ugly (tons of curl_foo($curl_handle, ...) statements). Clarification: I want something more high-level than the…
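For comparison, the Perl behaviour being asked for looks roughly like this, so a PHP candidate needs equivalents of get, form handling, and link extraction at the same level of abstraction (URL and field name below are placeholders):

    use strict;
    use warnings;
    use WWW::Mechanize;

    my $mech = WWW::Mechanize->new();
    $mech->get('http://example.com/');           # placeholder URL

    # One call each for forms, links, and submission -- the
    # high-level interface the question wants PHP to match.
    my @forms = $mech->forms;
    my @links = $mech->links;
    $mech->submit_form(
        form_number => 1,
        fields      => { q => 'search term' },   # placeholder field
    );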