Scraping Web Sites which Dynamically Load Data

Preface

More and more sites dynamically update their content, adding new items as the user scrolls down. Twitter is one of them: it initially displays only a certain number of items and loads additional ones on demand. How can sites with this behavior be scraped?

In the previous article we played with Google Chrome extensions to scrape a forum that depends on JavaScript and XMLHttpRequest. Here we use the same technique to retrieve a specific number of items for a given search. A list of additional alternatives is available in the Web Scraping Ajax and Javascript Sites article.

Code

Instructions

  1. Download the code from GitHub
  2. Load the extension in Google Chrome: settings => extensions => check “Developer mode” => load unpacked extension
  3. An “eye” icon now appears on the Google Chrome toolbar
  4. Go to Twitter’s search page https://twitter.com/search-home and enter your search keywords
  5. Now press the “eye” icon and then the Start button
  6. The scraping output is displayed in the console as JSON (a sketch of what this step does appears below)
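
For orientation, the sketch below shows the general shape of that last step. It is not the extension’s actual source: collectTweets is a made-up name and the tweet-text XPath is the selector quoted in the comments below; only the “JSON on the console” behavior comes from the instructions above.

    // Illustrative sketch only: collect the tweet text nodes already loaded
    // in the page and print them to the console as JSON.
    function collectTweets() {
      // Tweet-text XPath quoted in the comments; Twitter's markup may change.
      var xpath = "//p[@class='js-tweet-text tweet-text']";
      var nodes = document.evaluate(xpath, document, null,
                                    XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null);
      var tweets = [];
      for (var i = 0; i < nodes.snapshotLength; i++) {
        tweets.push(nodes.snapshotItem(i).textContent.trim());
      }
      console.log(JSON.stringify(tweets, null, 2)); // step 6: JSON on the console
      return tweets;
    }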

Customization

  1. To change the number of items to be scraped, open the file inject.js and change the argument in the scrollBottom(100); line to the number of items you would like (e.g. scrollBottom(200);); see the sketch below.
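
As a rough idea of what that routine does (an illustrative guess, not the actual inject.js implementation: only the name scrollBottom and its numeric argument come from this article), a scroll helper along these lines would keep loading items until the requested count is reached:

    // Hypothetical sketch of a scrollBottom-style helper: keep scrolling to the
    // bottom so Twitter appends more results, and stop once the requested number
    // of tweets is present or no new ones arrive between scrolls.
    function scrollBottom(count) {
      var xpath = "//p[@class='js-tweet-text tweet-text']";
      var previous = -1;
      var timer = setInterval(function () {
        window.scrollTo(0, document.body.scrollHeight);
        var loaded = document.evaluate(xpath, document, null,
                                       XPathResult.ORDERED_NODE_SNAPSHOT_TYPE,
                                       null).snapshotLength;
        if (loaded >= count || loaded === previous) {
          clearInterval(timer); // enough tweets, or the timeline stopped growing
          console.log(loaded + " tweets loaded");
        }
        previous = loaded;
      }, 1000); // give each new batch a moment to load
    }

Read this way, scrollBottom(200); simply means “keep scrolling until roughly 200 tweets are in the DOM.”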

Acknowledgments

This source code was written by Matias Palomera from Nektra Advanced Computing.

Comments

  • whatever

    Perhaps as an example of the havoc dynamic content wreaks on browsers: however Disqus loads comments into Chrome, it somehow makes it impossible (or at least difficult) to search for text contained in a comment using Chrome’s Ctrl-F search function.

    The search function will show the search string exists, but is unable to position the browser to show the string.

    At least that was the case prior to last week’s disqus mods. Darn, it seems to be working now at least here.

  • Boomy

    Awesome post, thank you!

    What would be the best way to extract the XPaths of elements of interest on other websites (such as "//p[@class='js-tweet-text tweet-text']")?

  • H.a.w.k P.h.i.l

    I would just use PhantomJS and CasperJS to keep things simple.

  • George

    I just read your post and installed the extension, but when I entered a word in Twitter search and clicked the “Start” button, it didn’t scrape the Twitter search results. How can I fix this?

    • Matias Palomera

      Did you click the Start button before or after searching for that word?

      • George

        Well, I tried both but to no avail

        • Matias Palomera

          Press F12 and go to the “Console” tab. What error does it show?

  • Hey there. Great posts! I’d love to get your feedback (good or bad) on a tool we are currently building – a visual data extractor specifically made for dynamic sites. http://www.parsehub.com. It’s a work in progress and any feedback will be much appreciated. :)

  • Dre Peters

    CasperJS is the way to go. Forget about Chrome.

  • Abhay

    I am very new to this web scraping world. I am a Python programmer and can scrape static web pages using BeautifulSoup, but I want to know how to parse dynamically loaded web pages in Python (BeautifulSoup only sees the view-source data).
    Any help is appreciated.

    • Jatin Grover

      use selenium with bs4

  • Hamman Samuel

    I tried this but there’s no JSON output being sent to the console. Any fix?

  • Alyk

    Nice way of scraping dynamically loaded data. Thanks for your tips!
