Scraping Web Sites which Dynamically Load Data

Preface

More and more sites update their contents dynamically: new items are added as the user scrolls down. Twitter is one of them; it initially displays only a certain number of items and loads additional ones on demand. How can sites with this behavior be scraped?

In the previous article we played with Google Chrome extensions to scrape a forum that depends on Javascript and XMLHttpRequest. Here we use the same technique to retrieve a given number of items matching a search. A list of additional alternatives is available in the Web Scraping Ajax and Javascript Sites article below.

Code

Instructions

  1. Download the code from GitHub
  2. Load the extension in Google Chrome: settings => extensions => check “developer mode” => load unpacked extension
  3. An “eye” icon now appears on the Google Chrome bar
  4. Go to Twitter’s search page https://twitter.com/search-home and enter your search keywords
  5. Now press the “eye” icon and then the start button
  6. The scraping output is displayed on the console as JSON

Customization

  1. To modify the number of news items to be scraped, open the file inject.js and change the scrollBottom(100); line, replacing 100 with the number of items you would like (e.g.: scrollBottom(200);). The sketch below shows the same idea implemented outside the extension.
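For readers who prefer a browserless route, the same scroll-until-loaded idea can be approximated with htmlunit and Jython, the stack used later in this collection. This is a minimal sketch, not the extension’s code: the XPath, the timeout, and the assumption that calling window.scrollTo is enough to trigger the site’s lazy loading are all illustrative and will need tuning for a real page.

import com.gargoylesoftware.htmlunit.WebClient as WebClient
import com.gargoylesoftware.htmlunit.BrowserVersion as BrowserVersion

def scrape_items(url, item_xpath, wanted):
   # simulate a Javascript-capable browser
   webclient = WebClient(BrowserVersion.FIREFOX_3_6)
   page = webclient.getPage(url)
   items = list(page.getByXPath(item_xpath))
   while len(items) < wanted:
      # scroll to the bottom so the page's scroll handler fires
      # and requests the next batch of items via XMLHttpRequest
      page.executeJavaScript("window.scrollTo(0, document.body.scrollHeight);")
      # give the background requests up to 5 seconds to finish
      webclient.waitForBackgroundJavaScript(5000)
      new_items = list(page.getByXPath(item_xpath))
      if len(new_items) == len(items):
         break # nothing new arrived; the site has stopped loading
      items = new_items
   return [item.asText() for item in items[:wanted]]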

Acknowledgments

This source code was written by Matias Palomera from Nektra Advanced Computing.

If you like this article, you might also be interested in the further reading below.

Further Reading

Google Search NoAPI

History

Way back in 2001 I wanted to be able to query Google automatically. Since Google did not provide an official API, I developed a small, simple Google Search “NoAPI” scraper and published it as Googolplex. Google later launched a SOAP-based API, but on December 20, 2006 they stopped accepting signups for it [1] and suspended it on August 31, 2009 [2]. This shows that building a service or product on top of web APIs is a very risky business without an SLA contract. Google soon launched another API called the Google Ajax Web Search API [3] under a different license. This second API was suspended on November 1, 2010 [4]. You may wonder if Google is a bipolar creature. You can see the latest post at Fall Housekeeping.

Google has undergone a lot of changes since 2001, and Googolplex and other libraries like xgoogle are now part of Internet history. A similar, newer library is available from Mario Vilas’s blog post “Quickpost: Using Google Search from your Python code”.

It’s not clear why Google vacillates over what could be an additional source of revenue, but it is clear that we should expect Google to provide an official, easy-to-use API. There are ways Google could restrict abuse of its APIs by third parties. It’s very common to offer a free tier for low-volume searches and charge for more intensive use, as Yahoo BOSS does.

In this article we’ll examine one way of crawling information from AJAX/Javascript-based sites.

Crawling Google As A Browser

If you go to Google and look at the HTML source code, you’ll be astonished to see pure obfuscated Javascript code. Even after searching, the source is no clearer.

So, here is our code to get Google’s results using htmlunit and Jython (we don’t have any affiliation with them, we just like them!). See the Web Scraping Ajax and Javascript Sites article below for more information.

google.py

import com.gargoylesoftware.htmlunit.WebClient as WebClient
import com.gargoylesoftware.htmlunit.BrowserVersion as BrowserVersion

def query(q):
   webclient = WebClient(BrowserVersion.FIREFOX_3_6) # simulate a Firefox 3.6 browser
   url = "http://www.google.com"
   page = webclient.getPage(url) # load the Google home page

   query_input = page.getByXPath("//input[@name='q']")[0] # the search box
   query_input.text = q # type the query into the box
   search_button = page.getByXPath("//input[@name='btnG']")[0] # the "Google Search" button
   page = search_button.click() # submit the query and get the results page
   results = page.getByXPath("//ol[@id='rso']/li//span/h3[@class='r']") # the result headings

   c = 0
   for result in results:
      title = result.asText()
      href = result.getByXPath("./a")[0].getAttributes().getNamedItem("href").nodeValue
      print title, href
      c += 1

   print c, "Results"

if __name__ == '__main__':
   query("google web search api")

run.sh

/opt/jython/jython -J-classpath "htmlunit-2.8/lib/*" google.py
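The -J prefix tells the Jython launcher to pass the option through to the underlying JVM, so -J-classpath simply puts the htmlunit jars on the Java classpath where the import statements in google.py can find them.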

Alternatives

Some search engines provide official APIs for search; Yahoo BOSS, mentioned above, is one example.

Homework

  1. Write a clean function/class to do Google queries and handle exceptions (a minimal sketch follows this list).
  2. Modify the function to handle nested and paged results.
  3. Modify the function again, this time to include descriptions.
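As a starting point for the first exercise, here is one possible shape for such a wrapper. It is a sketch under stated assumptions, not a reference solution: the GoogleSearchError name and the retry count are our own inventions, and the XPath expressions are the same fragile ones used in google.py above.

import com.gargoylesoftware.htmlunit.WebClient as WebClient
import com.gargoylesoftware.htmlunit.BrowserVersion as BrowserVersion
import java.lang.Exception as JavaException

class GoogleSearchError(Exception):
   pass

class GoogleSearch:
   def __init__(self, retries=3):
      self.retries = retries
      self.webclient = WebClient(BrowserVersion.FIREFOX_3_6)

   def query(self, q):
      for attempt in range(self.retries):
         try:
            page = self.webclient.getPage("http://www.google.com")
            page.getByXPath("//input[@name='q']")[0].text = q
            page = page.getByXPath("//input[@name='btnG']")[0].click()
            results = page.getByXPath("//ol[@id='rso']/li//span/h3[@class='r']")
            return [(r.asText(),
                     r.getByXPath("./a")[0].getAttributes().getNamedItem("href").nodeValue)
                    for r in results]
         except IndexError:
            # an XPath matched nothing: the page layout probably changed
            raise GoogleSearchError("unexpected page layout for query: " + q)
         except JavaException, e:
            # htmlunit signals network problems with Java exceptions; retry a few times
            if attempt == self.retries - 1:
               raise GoogleSearchError(str(e))

A caller would then write, for example, GoogleSearch().query("google web search api").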

Final Notes

The approach taken by Mario Vilas is more API-like; our approach here is a defensive measure against NoAPIs. This is another good example of HtmlUnit doing its job.

BTW, the noapi.com domain is available [5].

See Also

  1. Extraction of Main Text Content Using the Google Reader NoAPI
  2. The Data Portability Fact Sheet

References

  1. Beyond the SOAP Search API
  2. A well earned retirement for the SOAP Search API
  3. Google AJAX Search API beta Version 1.0 Available
  4. Fall Housekeeping
  5. The noapi.com domain is available at the time of writing of this article. Register it now! (Disclaimer: affiliate link).

Additional Resources

  1. Google Search API?
  2. Google Deprecates Their SOAP Search API
  3. Google Search API Dropped
  4. Is this API going to be closed down?
  5. Yahoo BOSS Switching To Paid Model In Early 2011
  6. Thoughts on Yahoo! BOSS Monetization Announcement
  7. Google to Start Charging for Prediction API
  8. Update on Whitelisting (Twitter API policies discussion)
  9. From “Businesses” To “Tools”: The Twitter API ToS Changes

Web Scraping Ajax and Javascript Sites

Introduction

Most crawling frameworks used for scraping cannot execute Javascript or handle Ajax; their scope is limited to sites that show their main content without scripting. One might be tempted to connect a crawler directly to a Javascript engine, but that is not easy to do: browser behavior is too complex for such a simple combination to reproduce, so you need a fully functional browser with good DOM support. There is a list of resources at the end of this article for exploring the alternatives in more depth.

There are several ways to scrape a site that contains Javascript:

  1. Embed a web browser within an application and simulate a normal user.
  2. Remotely connect to a web browser and automate it from a scripting language (see the sketch below).
  3. Use special purpose add-ons to automate the browser.
  4. Use a framework/library to simulate a complete browser.

Each of these alternatives has its pros and cons. For example, using a complete browser consumes a lot of resources, especially if we need to scrape websites with many pages.
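To make the second alternative concrete, here is a hedged sketch using Selenium (listed in the Resources section) to remote-control a real Firefox instance from Python. The URL and XPath are illustrative placeholders; the point is only the shape of the approach, which trades htmlunit’s light weight for the fidelity of a real browser.

from selenium import webdriver

def scrape_with_browser(url, link_xpath):
   driver = webdriver.Firefox() # start and remote-control a real Firefox
   try:
      driver.get(url) # the browser runs all the page's Javascript for us
      links = driver.find_elements_by_xpath(link_xpath)
      return [link.text for link in links]
   finally:
      driver.quit() # always shut the browser down

if __name__ == '__main__':
   # illustrative placeholders, not a real target
   print scrape_with_browser("http://www.example.com", "//a")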

In this post we’ll give a simple example of how to scrape a web site that uses Javascript. We will use the htmlunit library to simulate a browser. Since htmlunit runs on the JVM, we will use Jython, an excellent Python implementation for the JVM. The resulting code is very clear and focuses on solving the problem instead of on programming-language plumbing.

Setting up the environment

Prerequisites

  1. JRE or JDK.
  2. Download the latest version of Jython from http://www.jython.org/downloads.html.
  3. Run the .jar file and install it in your preferred directory (e.g. /opt/jython).
  4. Download the htmlunit compiled binaries from: http://sourceforge.net/projects/htmlunit/files/.
  5. Unzip the htmlunit archive to your preferred directory.

Crawling example

We will scrape the Gartner Magic Quadrant pages at http://www.gartner.com/it/products/mq/mq_ms.jsp. If you look at the list of documents, the links are Javascript code instead of hyperlinks with http URLs. This may be to deter crawling, or just to open a popup window. Either way, it is a very convenient page for illustrating the solution.

gartner.py

import com.gargoylesoftware.htmlunit.WebClient as WebClient
import com.gargoylesoftware.htmlunit.BrowserVersion as BrowserVersion

def main():
   webclient = WebClient(BrowserVersion.FIREFOX_3_6) # creating a new webclient object.
   url = "http://www.gartner.com/it/products/mq/mq_ms.jsp"
   page = webclient.getPage(url) # getting the url
   articles = page.getByXPath("//table[@id='mqtable']//tr/td/a") # getting all the hyperlinks

   for article in articles:
      print "Clicking on:", article
      subpage = article.click() # click on the article link
      title = subpage.getByXPath("//div[@class='title']") # get title
      summary = subpage.getByXPath("//div[@class='summary']") # get summary
      if len(title) > 0 and len(summary) > 0:
         print "Title:", title[0].asText()
         print "Summary:", summary[0].asText()
      # break # uncomment to stop after the first article

if __name__ == '__main__':
   main()

run.sh

/opt/jython/jython -J-classpath "htmlunit-2.8/lib/*" gartner.py

Final notes

This article is just a starting point for moving beyond simple crawlers and pointing the way for further research. As this is a simple page, it is a good choice for a clear example of how Javascript scraping works. You must do your homework to learn to crawl more complex web pages or add multithreading for better performance. In a demanding crawling scenario a lot of things must be taken into account, but this is a subject for future articles.

If you want to be polite, don’t forget to read the robots.txt file before crawling…
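Checking robots.txt is easy from Jython, because the Python 2 standard library ships a parser. A minimal sketch (the user agent string is an illustrative placeholder):

import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("http://www.gartner.com/robots.txt")
rp.read() # fetch and parse the robots.txt file

# check whether our (illustrative) user agent may fetch the target page
if rp.can_fetch("ExampleBot", "http://www.gartner.com/it/products/mq/mq_ms.jsp"):
   print "allowed to crawl"
else:
   print "disallowed by robots.txt"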

If you like this article, you might also be interested in:

  1. Distributed Scraping With Multiple Tor Circuits
  2. Precise Scraping with Google Chrome
  3. Running Your Own Anonymous Rotating Proxies
  4. Automated Browserless OAuth Authentication for Twitter

Resources

  1. HtmlUnit
  2. ghost.py is a webkit web client written in python
  3. Crowbar web scraping environment
  4. Google Chrome remote debugging shell from Python
  5. Selenium, Watir, Sahi, and Windmill Testing Framework (web application testing systems)
  6. Internet Explorer automation
  7. jSSh Javascript Shell Server for Mozilla
  8. http://trac.webkit.org/wiki/QtWebKit
  9. Embedding Gecko
  10. Opera Dragonfly
  11. PyAuto: Python interface to Chromium’s automation framework
  12. Related questions on Stack Overflow
  13. Scrapy
  14. EnvJS: Simulated browser environment written in Javascript
  15. Setting up Headless XServer and CutyCapt on Ubuntu
  16. CutyCapt: Capture WebKit’s rendering of a web page.
  17. Google Webmaster blog: A spider’s view of Web 2.0
  18. OpenQA
  19. Python Webkit DOM Bindings
  20. Berkelium Browser
  21. uBrowser
  22. Using HtmlUnit on .NET for Headless Browser Automation (using IKVM)
  23. Zombie.js
  24. PhantomJS
  25. PyPhantomJS
  26. CasperJS
  27. Web Inspector Remote
  28. Offscreen/Headless Mozilla Firefox (via @brutuscat)
  29. Web Scraping with Google Spreadsheets and XPath
  30. Web Scraping with YQL and Yahoo Pipes
