How to crawl with php Goutte and Guzzle if data is loaded by Javascript?


You want to have a look at PhantomJS, a headless browser that can execute JavaScript. There is a PHP implementation:

http://jonnnnyw.github.io/php-phantomjs/

which you can use if you need to drive it from PHP.

You could fetch the page with the headless browser and then feed the contents to the crawler, in order to use the nice functions it gives you (like searching for content, etc.). That would depend on your needs; maybe you can simply use the DOM, like this:

How to get element by class name?
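If you go the plain-DOM route, here is a minimal self-contained sketch of selecting elements by class name with PHP's built-in DOMDocument and DOMXPath. The $html string below is a stand-in for the rendered content you would get back from the headless browser:

```php
<?php
// Select elements by class name from an HTML string using PHP's
// built-in DOM extension (no external libraries needed).
// $html stands in for content returned by a headless browser.
$html = '<div class="item">first</div><div class="item">second</div>';

$doc = new DOMDocument();
$doc->loadHTML($html);

$xpath = new DOMXPath($doc);
// The contains(concat(...)) idiom matches the whole class token,
// so class="items" is not matched when you ask for "item".
$nodes = $xpath->query(
    "//*[contains(concat(' ', normalize-space(@class), ' '), ' item ')]"
);

foreach ($nodes as $node) {
    echo $node->textContent, "\n";
}
// Prints:
// first
// second
```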

Here is some working code.

    $content = $this->getHeadlessResponse($url);
    $this->crawler->addContent($content);

    /**
     * Get response using a headless browser (phantom in this case).
     *
     * @param $url
     *   URL to fetch headless.
     *
     * @return string
     *   Response.
     */
    public function getHeadlessResponse($url) {
        // Fetch with phantomjs...
        $phantomClient = PhantomClient::getInstance();
        $request = $phantomClient->getMessageFactory()->createRequest($url, 'GET');
        /**
         * @see JonnyW\PhantomJs\Http\Response
         */
        $response = $phantomClient->getMessageFactory()->createResponse();
        // Send the request.
        $phantomClient->send($request, $response);
        if ($response->getStatus() === 200) {
            // ...and return the page content to feed into the crawler.
            return $response->getContent();
        }
    }

The only disadvantage of using PhantomJS is that it will be slower than Guzzle, but of course, you have to wait for all that pesky JavaScript to load.


Guzzle (which Goutte uses internally) is an HTTP client. As a result, JavaScript content will not be parsed or executed, and JavaScript files referenced by the requested page will not be downloaded.
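To make that concrete, here is a small self-contained sketch; the $body string stands in for a raw response body an HTTP client would hand you. The script arrives as plain source text, and the element it would have filled in stays empty:

```php
<?php
// What an HTTP client "sees": the raw response body.
// Nothing in it is ever executed.
$body = '<div id="data"></div>'
      . '<script>document.getElementById("data").textContent = "loaded";</script>';

$doc = new DOMDocument();
$doc->loadHTML($body);

// The target element is still empty: the script was never run.
$data = $doc->getElementById('data');
echo "data: '" . $data->textContent . "'\n";

// The script exists only as source text in the document.
$script = $doc->getElementsByTagName('script')->item(0);
echo "script source: " . $script->textContent . "\n";
```

This is exactly why a headless browser (or any client with a JavaScript engine) is needed when the data you want is injected client-side.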

Depending upon your environment, I suppose it would be possible to utilize V8Js (a PHP extension that embeds the Google V8 JavaScript engine) and a custom handler / middleware to perform what you want.

Then again, depending on your environment, it might be easier to simply perform the scraping with a JavaScript client (for example, a headless browser).


I would recommend trying to get the response content, parsing it (if you have to) into new HTML, and using it as $html when initializing a new Crawler object. After that, you can use all the data in the response like any other Crawler object.

    $crawler = $client->submit($form);
    $html = $client->getResponse()->getContent();
    $newCrawler = new Crawler($html);