Web scraping in PHP


I recommend you consider simple_html_dom for this. It will make it very easy.

Here is a working example of how to pull the title and the first image.

<?php
require 'simple_html_dom.php';

$html = file_get_html('http://www.google.com/');
$title = $html->find('title', 0);
$image = $html->find('img', 0);

echo $title->plaintext."<br>\n";
echo $image->src;
?>

Here is a second example that will do the same without an external library. I should note that using regex on HTML is NOT a good idea.

<?php
$data = file_get_contents('http://www.google.com/');

preg_match('/<title>([^<]+)<\/title>/i', $data, $matches);
$title = $matches[1];

preg_match('/<img[^>]*src=[\'"]([^\'"]+)[\'"][^>]*>/i', $data, $matches);
$img = $matches[1];

echo $title."<br>\n";
echo $img;
?>
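
If you want to stay dependency-free but avoid regex, the same extraction can be done with PHP's bundled DOM extension. A minimal sketch (same placeholder URL as above; note that item(0) may be null on pages without a title or image):

<?php
// Same extraction as the regex example, using PHP's built-in DOMDocument.
$data = file_get_contents('http://www.google.com/');

$doc = new DOMDocument();
@$doc->loadHTML($data); // suppress warnings from real-world, non-valid HTML

$title = $doc->getElementsByTagName('title')->item(0);
$img = $doc->getElementsByTagName('img')->item(0);

echo ($title ? $title->textContent : '')."<br>\n";
echo $img ? $img->getAttribute('src') : '';
?>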


You may use any of these libraries. Each one has its pros and cons, so you can consult the notes on each or take the time to try them yourself:

  • Guzzle: An independent HTTP client, so there is no need to depend on cURL, SOAP, or REST.
  • Goutte: Built on Guzzle and several Symfony components by a Symfony developer.
  • hQuery: A fast scraper with caching capabilities; high performance when scraping large documents.
  • Requests: Famous for its user-friendly API.
  • Buzz: A lightweight client, ideal for beginners.
  • ReactPHP: An asynchronous scraper, with comprehensive tutorials and examples.

It's worth checking them all and using each one in the situation where it fits best.
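
As a quick taste of the first option, here is a minimal Guzzle sketch; it assumes you've installed the package via Composer (composer require guzzlehttp/guzzle), and the URL and timeout are just placeholders:

<?php
require 'vendor/autoload.php';

use GuzzleHttp\Client;

// Fetch a page over HTTPS; Guzzle chooses its own transport under the hood.
$client = new Client(['timeout' => 10]);
$response = $client->request('GET', 'https://www.example.com/');

echo $response->getStatusCode(), "\n"; // e.g. 200
echo substr((string) $response->getBody(), 0, 200); // first 200 bytes of the HTML
?>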


This question is fairly old but still ranks very highly in Google search results for web scraping tools in PHP. Web scraping in PHP has advanced considerably since the question was asked. I actively maintain the Ultimate Web Scraper Toolkit, which hasn't been mentioned yet but predates many of the other tools listed here, with the exception of Simple HTML DOM.

The toolkit includes TagFilter, which I actually prefer over other parsing options because it uses a state engine to process HTML with a continuous streaming tokenizer for precise data extraction.

To answer the original question, "Is there any simple way to do this without any external libraries/classes?": the answer is no. HTML is rather complex, and there's nothing built into PHP that's particularly suitable for the task. You really need a reusable library to parse generic HTML correctly and consistently. Plus, you'll find plenty of uses for such a library.

Also, a really good web scraper toolkit will have three major, highly polished components/capabilities:

  1. Data retrieval. This is making an HTTP(S) request to a server and pulling down data. A good web scraping library will also allow large binary data blobs to be written directly to disk as they come down off the network instead of loading the whole thing into RAM (see the cURL sketch after this list). The ability to do dynamic form extraction and submission is also very handy. A really good library will let you fine-tune every aspect of each request to each server, as well as look at the raw data it sent and received on the wire. Some web servers are extremely picky about input, so being able to accurately replicate a browser is handy.

  2. Data extraction. This is finding pieces of content inside retrieved HTML and pulling them out, usually to store them in a database for future lookups. A good web scraping library will also be able to correctly parse any semi-valid HTML thrown at it, including Microsoft Word HTML and ASP.NET output, where odd things show up, like a single HTML tag that spans several lines. The ability to easily extract all the data from poorly designed, complex, classless tags like ASP.NET HTML table elements that some overpaid government employees made is also very nice to have (i.e. the extraction tool has more than just a DOM or CSS3-style selection engine available). Also, in your case, the ability to early-terminate both the data retrieval and the data extraction after reading in 50KB, or as soon as you find what you are looking for, is a plus; the sketch after this list shows one way to do that. This could be useful if someone submits a URL to a 500MB file.

  3. Data manipulation. This is the inverse of #2. A really good library will be able to modify the input HTML document several times without negatively impacting performance. When would you want to do this? Sanitizing user-submitted HTML, transforming content for a newsletter or other email, downloading content for offline viewing, or preparing content for transport to another service that's finicky about input (e.g. sending to Apple News or Amazon Alexa). A bare-bones sanitization sketch also follows this list. The ability to create a custom HTML-style template language is also a nice bonus.
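
To make #1 and #2 concrete, here is a rough sketch with plain cURL (not the toolkit's own API) that streams a response to disk and aborts the transfer once 50KB have arrived; the file path, URL, and size cap are arbitrary:

<?php
$limit = 50 * 1024; // stop after 50KB
$received = 0;
$fp = fopen('/tmp/page.html', 'wb');

$ch = curl_init('http://www.example.com/');
curl_setopt($ch, CURLOPT_WRITEFUNCTION, function ($handle, $chunk) use (&$received, $limit, $fp) {
    $received += strlen($chunk);
    fwrite($fp, $chunk); // write straight to disk, never buffering the whole body in RAM
    // Returning fewer bytes than were delivered makes cURL abort the transfer.
    return ($received > $limit) ? 0 : strlen($chunk);
});
curl_exec($ch); // returns false (CURLE_WRITE_ERROR) if we aborted early
curl_close($ch);
fclose($fp);
?>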
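
And for #3, a bare-bones sanitization sketch with the built-in DOMDocument; real sanitization involves much more than stripping script elements (event-handler attributes, javascript: URLs, etc.), so treat this only as the shape of the idea:

<?php
$dirty = '<p>Hello</p><script>alert(1)</script>'; // hypothetical user input

$doc = new DOMDocument();
@$doc->loadHTML($dirty); // suppress warnings on messy input

// getElementsByTagName() returns a live list, so snapshot it before removing nodes.
foreach (iterator_to_array($doc->getElementsByTagName('script')) as $node) {
    $node->parentNode->removeChild($node);
}
echo $doc->saveHTML();
?>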

Obviously, Ultimate Web Scraper Toolkit does all of the above... and more.

I also like my toolkit because it comes with a WebSocket client class, which makes scraping WebSocket content easier. I've had to do that a couple of times.

It was also relatively simple to turn the clients on their heads and make WebServer and WebSocketServer classes. You know you've got a good library when you can turn the client into a server... but then I went and made PHP App Server with those classes. I think it's becoming a monster!