How to use Goutte

Submitted by 萝らか妹 on 2019-12-03 06:11:59

The documentation you want to look at is the Symfony2 DomCrawler.

Goutte is a client built on top of Guzzle that returns a Crawler every time you request or submit something:

use Goutte\Client;
$client = new Client();
$crawler = $client->request('GET', 'http://www.symfony-project.org/');

With this crawler you can do things like get the text of every p tag inside the body:

use Symfony\Component\DomCrawler\Crawler;

$nodeValues = $crawler->filter('body > p')->each(function (Crawler $node, $i) {
    return $node->text();
});
print_r($nodeValues);

Fill and submit forms:

$form = $crawler->selectButton('sign in')->form(); 
$crawler = $client->submit($form, array(
        'username' => 'username', 
        'password' => 'xxxxxx'
));

A selectButton() method is available on the Crawler; it returns another Crawler that matches a button (input[type=submit], input[type=image], or a button element) with the given text.

You can also click links, set options, select checkboxes, and more; see the Form and Link support documentation.
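As a sketch of the link-clicking workflow (the link text 'Documentation' is a hypothetical example; selectLink(), link(), and click() are the documented Goutte/DomCrawler methods):

```php
use Goutte\Client;

$client = new Client();
$crawler = $client->request('GET', 'http://www.symfony-project.org/');

// Find an anchor by its text and follow it; the new page comes back as a Crawler.
$link = $crawler->selectLink('Documentation')->link();
$crawler = $client->click($link);
```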

To get data from the crawler, use the html() or text() methods:

echo $crawler->html();
echo $crawler->text();

After much trial and error I discovered a scraper that is much easier, well documented, more effective than Goutte, and has better assistance available if needed. If you are having issues with Goutte, try the following:

  1. Simple HTML Dom: http://simplehtmldom.sourceforge.net/

If, as in my case, the page you are trying to scrape requires a referrer from its own website, you can use a combination of cURL and Simple HTML DOM, because Simple HTML DOM does not appear to be able to send a referrer. If you do not need a referrer, Simple HTML DOM alone can scrape the page.

$url = "http://www.example.com/sub-page-needs-referer/";
$referer = "http://www.example.com/";
$html = new simple_html_dom(); // Create a new Simple HTML DOM object

/** cURL initialization **/
$ch = curl_init($url);

/** Set the cURL options **/
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // Return the response instead of printing it
curl_setopt($ch, CURLOPT_HEADER, 0);            // Exclude response headers from the output
curl_setopt($ch, CURLOPT_REFERER, $referer);    // Send the referrer
$output = curl_exec($ch);

if ($output === FALSE) {
    echo "cURL Error: " . curl_error($ch); // do something here if we couldn't scrape the page
} else {
    $info = curl_getinfo($ch);
    echo "Took " . $info['total_time'] . " seconds for url: " . $info['url'];
    $html->load($output); // Hand the cURL output to Simple HTML DOM
}

/** Free up cURL **/
curl_close($ch);

// Do something with Simple HTML DOM. It is well documented and very easy to use. They have a lot of examples.
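As a sketch of what that last step might look like, this lists every link on the scraped page (find(), href, plaintext, and clear() are all part of Simple HTML DOM's documented API; the assumption here is simply that the page contains anchor tags):

```php
// List every anchor found on the page loaded into $html above.
foreach ($html->find('a') as $anchor) {
    echo $anchor->href . " => " . $anchor->plaintext . "\n";
}

$html->clear(); // Free the DOM tree when you are done with it
```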