Integrating Google Analytics into your Company Loop with a Microsoft Excel Add-on

 

Introduction

Google Analytics and AdWords are essential marketing and sales tools. They can be integrated with the ubiquitous Microsoft Excel through the Google Data API. Data Big Bang’s Nicolas Papagna has developed an Excel add-on which can be downloaded here. This plugin enables Excel users to quickly retrieve Google Analytics data using the available Google Analytics metrics and dimensions, sorted by the user’s criteria. One advantage of our solution is that Excel accesses the Google Analytics API directly instead of going through a Data Big Bang server. Other solutions require access to your information, which exposes your private data to third parties.
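For the curious, here is a minimal sketch (not the add-in’s actual code) of the kind of request that goes against the Google Analytics Data Export API, the GData feed that the add-in consumes directly. The profile id and authentication token are placeholders you would obtain with your own Google account, and all error handling is omitted.

analytics-query-sketch.py

#!/usr/bin/python
# -*- coding: utf-8 -*-

# Sketch of a Google Analytics Data Export API (GData) query. The token and
# profile id are placeholders; error handling is intentionally omitted.

import urllib
import urllib2

TOKEN = ''       # authentication token obtained with your Google credentials
PROFILE_ID = ''  # your Analytics profile, e.g. 'ga:12345678'

def query_analytics(metrics, dimensions, sort, start_date, end_date):
    params = urllib.urlencode({'ids': PROFILE_ID,
                               'metrics': metrics,        # e.g. 'ga:visits,ga:pageviews'
                               'dimensions': dimensions,  # e.g. 'ga:country'
                               'sort': sort,              # e.g. '-ga:visits'
                               'start-date': start_date,
                               'end-date': end_date})
    request = urllib2.Request('https://www.google.com/analytics/feeds/data?' + params,
                              headers={'Authorization': 'GoogleLogin auth=' + TOKEN,
                                       'GData-Version': '2'})
    return urllib2.urlopen(request).read()  # an Atom feed with one entry per result row

if __name__ == '__main__':
    print query_analytics('ga:visits', 'ga:country', '-ga:visits', '2011-01-01', '2011-01-31')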

Installation and Usage

  1. Download GoogleAnalyticsToExcel.AddInSetup_1.0.20.0.exe.
  2. Install it.
  3. Run Microsoft Excel.
  4. Configure your Google credentials by clicking on “Settings” under the “Google Analytics to Excel Addin” ribbon tab.
  5. Customize your query and retrieve your Google Analytics data by clicking the “Query Google Analytics” button.

Development Notes

Data Big Bang’s research team has also developed an OData web service that can be consumed by applications such as PowerPivot, Tableau and LINQPad. This web service doesn’t require any add-ons. Unfortunately, since neither PowerPivot nor Tableau offers a query builder for OData providers, users must craft the OData URL query themselves; a sketch of such a query follows below. The most interesting part of this project was developing a Google Data Protocol to Open Data Protocol .NET class that offers an IQueryable interface to convert LINQ queries to GData. LINQ queries add a lot of expressive power beyond what GData offers.
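To give an idea of what crafting such a query looks like, here is a small sketch that builds an OData URL by hand using the standard $filter, $orderby and $top system query options. The service root and entity set name are placeholders, not the actual addresses of our web service.

odata-query-sketch.py

#!/usr/bin/python
# -*- coding: utf-8 -*-

# Sketch of a hand-crafted OData query. The service root and entity set are
# placeholders; $filter, $orderby and $top are standard OData query options.

import urllib

SERVICE_ROOT = 'http://example.com/AnalyticsOData.svc'  # placeholder service root

def build_odata_query(entity_set, filter_expr, orderby, top):
    options = urllib.urlencode({'$filter': filter_expr,
                                '$orderby': orderby,
                                '$top': str(top)})
    return '%s/%s?%s' % (SERVICE_ROOT, entity_set, options)

if __name__ == '__main__':
    # e.g. the top 10 days by visits in January 2011
    print build_odata_query('Visits',
                            "Date ge datetime'2011-01-01' and Date le datetime'2011-01-31'",
                            'Visits desc',
                            10)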

See Also

  1. Automated Discovery of Blog Feeds and Twitter, Facebook, LinkedIn Accounts Connected to Business Website
  2. Integrating Dropbox with Microsoft Outlook
  3. Exporting StackOverflow users blogs to Excel Hyperlinks

Ideas and Execution Magic Chart

Ideas vs Execution

There is an endless discussion in the startup community about the value of ideas versus the importance of execution. Here is a timeline showing Hacker News community submissions with the idea(s) keyword in the title:

I am no prophet, but I believe the future will most likely lean towards ideas, because the cost of creating and operating a web company has been dramatically reduced. Soon marketing and sales services will become more affordable, making it easier to solve the business puzzle. On the other hand, following Joseph Schumpeter’s thinking, big companies have an advantage because they have more resources; yet they often prefer the acquisition route, after the market’s natural selection, instead of building risky projects from scratch. Entrepreneurs therefore benefit from reduced competition in the initial phase of product development.

Magic Chart

The idea of the magic chart is to fill in a scatter plot. The x axis shows the time you expect the idea to take to execute (you can limit it to development time at first), and the y axis shows the potential of the idea. You can easily add other dimensions, like cost, by using the size or color of the plotted points. Add a vertical line to the chart at the outside time limit that is feasible for you. This is an exercise: you must be objective when filling in your chart, and dabble in the black art of time estimation.
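Here is a minimal sketch of such a chart using matplotlib. The ideas, time estimates, potential scores and costs below are made up for illustration; the dashed vertical line marks the outside time limit.

magic-chart.py

#!/usr/bin/python
# -*- coding: utf-8 -*-

# Minimal "magic chart" sketch: x = estimated execution time, y = perceived
# potential, point size = estimated cost. All numbers are made up.

import matplotlib.pyplot as plt

ideas = [
    # (name, months to execute, potential 1-10, relative cost)
    ('Excel add-in',       2, 4, 10),
    ('OData web service',  4, 5, 20),
    ('Scraping platform', 10, 8, 60),
    ('Social network',    18, 9, 90),
]

names, months, potential, cost = zip(*ideas)

plt.scatter(months, potential, s=[c * 5 for c in cost], alpha=0.5)
for name, x, y in zip(names, months, potential):
    plt.annotate(name, (x, y))

plt.axvline(x=12, color='r', linestyle='--')  # the outside time limit feasible for you
plt.xlabel('Estimated execution time (months)')
plt.ylabel('Potential')
plt.title('Ideas and Execution Magic Chart')
plt.show()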

Here is my magic chart:

 

As you can see, it is difficult to come up with ideas that can be executed in a short time, and many ideas fall into a zone of uncertainty beyond some point in time. If you think that having a minimum viable product is key, then you must think very hard about how to reduce your product’s execution time, and this is more an art than a science. The need to generate profit is a serious constraint: your idea may be excellent and your software may be used by millions of people, yet you may still lack a business model.

What does your ideas and execution magic chart landscape look like?

HNSearch Script

Here is the Python script for retrieving Hacker News posts with the words “idea” or “ideas” in the title. It includes a legal hack (what else?) to bypass the 1000-item limit imposed by ThriftDB’s HNSearch API.

#!/usr/bin/python
# -*- coding: utf-8 -*-

# Done under Visual Studio 2010 using the excellent Python Tools for Visual Studio http://pytools.codeplex.com/

import urllib2
import json
from datetime import datetime
from time import mktime
import csv
import codecs
import cStringIO

class CSVUnicodeWriter: # http://docs.python.org/library/csv.html
    """
    A CSV writer which will write rows to CSV file "f",
    which is encoded in the given encoding.
    """

    def __init__(self, f, dialect=csv.excel, encoding="utf-8", **kwds):
        # Redirect output to a queue
        self.queue = cStringIO.StringIO()
        self.writer = csv.writer(self.queue, dialect=dialect, **kwds)
        self.stream = f
        self.encoder = codecs.getincrementalencoder(encoding)()

    def writerow(self, row):
        self.writer.writerow([s.encode("utf-8") for s in row])
        # Fetch UTF-8 output from the queue ...
        data = self.queue.getvalue()
        data = data.decode("utf-8")
        # ... and reencode it into the target encoding
        data = self.encoder.encode(data)
        # write to the target stream
        self.stream.write(data)
        # empty queue
        self.queue.truncate(0)

    def writerows(self, rows):
        for row in rows:
            self.writerow(row)

def get_hackernews_articles_with_idea_in_the_title():
    endpoint = 'http://api.thriftdb.com/api.hnsearch.com/items/_search?filter[fields][title]=idea&start={0}&limit={1}&sortby=map(ms(create_ts),{2},{3},4294967295000)%20asc'

    incomplete_iso_8601_format = '%Y-%m-%dT%H:%M:%SZ'

    items = {}
    start = 0
    limit = 100
    begin_range = 0
    end_range = 0

    url = endpoint.format(start, limit, begin_range, str(int(end_range)))
    response = urllib2.urlopen(url).read()
    data = json.loads(response)

    prev_timestamp = datetime.fromtimestamp(0)

    results = data['results']

    while results:
        for e in data['results']:
            _id = e['item']['id']
            title = e['item']['title']
            points = e['item']['points']
            num_comments = e['item']['num_comments']
            timestamp = datetime.strptime(e['item']['create_ts'], incomplete_iso_8601_format)

            #if timestamp < prev_timestamp: # The results are not correctly sorted. We can't rely on this one.
            if _id in items: # If the circle is complete.
                return items
            prev_timestamp = timestamp

            items[_id] = {'id':_id, 'title':title, 'points':points, 'num_comments':num_comments, 'timestamp':timestamp}
            title_utf8 = title.encode('utf-8')
            print title_utf8, timestamp, _id, points, num_comments

        start += len(results)
        if start + limit > 1000:
            start = 0
            end_range = mktime(timestamp.timetuple())*1000

        url = endpoint.format(start, limit, begin_range, str(int(end_range))) # if not str(int(x)) then a float gives in the sci math form: '1.24267528e+12'
        response = urllib2.urlopen(url).read()
        data = json.loads(response)
        results = data['results']

    return items

if __name__ == '__main__':
    items = get_hackernews_articles_with_idea_in_the_title()

    with open('hn-articles.csv', 'wb') as f:
        hn_articles = CSVUnicodeWriter(f)

        hn_articles.writerow(['ID', 'Timestamp', 'Title', 'Points', '# Comments'])

        for k,e in items.items():
            hn_articles.writerow([str(e['id']), str(e['timestamp']), e['title'], str(e['points']), str(e['num_comments'])])

# It returns 3706 articles while the query says there are 3711... find the bug...

 

Resources

  1. Are Ideas Getting Harder to Find? (2016)
  2. Science as Art
  3. Thinking Skills Instruction: Concepts and Techniques (Anthology)
  4. De Bono’s Lateral Thinking
  5. TRIZ
  6. Schumpeter’s Creative Destruction: A Review of the Evidence
  7. Google Query: “ideas vs execution” OR “execution vs ideas”
  8. Google Query: site:news.ycombinator.com AND (intitle:idea OR intitle:ideas)
  9. Startup Ideas We’d Like to Fund
  10. My list of ideas, if you’re looking for inspiration by Jacques Mattheij
  11. Startup Ideas We’d Like to Fund by Paul Graham.
  12. Ideas don’t make you rich. The correct execution of ideas does excerpt from Felix Dennis book.
  13. Ideas suck by Chris Prescott.
  14. Execution Matters, Ideas Don’t by Fred Wilson.
  15. What Is Twitter’s Problem? No, It’s Not the Product
  16. 1000 results limit? (HNSearch NoAPI limits, bonus hack included in this article).
  17. Year 2038 problem
  18. How to use time > year 2038 on official Windows Python 2.5
  19. Solr FunctionQuery
  20. HackerNews Ideas Articles
  21. Execution Is An Order Of Magnitude Easier Than Opportunity

Extraction of Main Text Content Using the Google Reader NoAPI

Theo van Doesburg Dadamatinée

Introduction

In this article we will see how to extract the main text content from a blog using the Google Reader NoAPI.

Extracting the main text content from a web page is an important step in the text processing pipeline. The HTML source code of a page is usually cluttered with advertising and other text which is not related to the main content. In general, no algorithm can reliably distinguish the main content from the other content on a page for all possible cases; sometimes it is even difficult for humans. Recognition of primary content belongs to the machine learning and artificial intelligence fields of study.

In practice there are many ways to recognize main content. If, for example, a blog platform includes attributes which indicate where the main content is, the process will be straightforward. Similarly, if the pages on a particular site have a well-defined structure, we can also infer where the main content is by sampling a few pages. In this approach, we train the recognizer to apply patterns to additional pages. Of course purely manual work is another option. The quickest way to build an army of human recognizers is to put the job on sites like Amazon’s Mechanical Turk or similar services such as Microworkers.

For a good compilation of resources related to this subject, see the Additional Resources section at the end of this article.

Extracting the Main Content from a Blog

If the blog platform marks the main text content with specific tags or attributes, writing an XPath expression for each platform will do the trick. Now imagine that you want to do it automatically, without depending on each blog platform or blog theme. In this case you can read the RSS feed, which generally only includes the main text, and extract the text from there. However, not all blogs publish the complete text in the feed. The TechCrunch feed, for example, shows the first part of the text, but you have to click through to continue reading. In that case you can use the partial text from the feed to recognize the complete text in the HTML. A potential problem with reading RSS feeds is that they only contain the most recent articles. To get around this limitation, we can get a longer feed history from Google Reader. Google Reader has some gaps and misses some articles, but this issue is beyond the scope of this article.
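As a minimal illustration of the per-platform XPath approach, here is a sketch using lxml. The XPath expressions below are examples of what a platform-to-expression mapping might look like; real themes vary, so treat them as placeholders.

main-content-xpath-sketch.py

#!/usr/bin/python
# -*- coding: utf-8 -*-

# Sketch: apply a per-platform XPath expression to pull the main content.
# The expressions are placeholders for whatever the target platform/theme uses.

import urllib2
import lxml.html

PLATFORM_XPATHS = {
    'wordpress': "//div[contains(@class, 'entry-content')]",
    'blogger':   "//div[contains(@class, 'post-body')]",
}

def extract_main_content(url, platform):
    html = urllib2.urlopen(url).read()
    doc = lxml.html.fromstring(html)
    nodes = doc.xpath(PLATFORM_XPATHS[platform])
    return '\n'.join(node.text_content() for node in nodes)

if __name__ == '__main__':
    print extract_main_content('http://example.com/a-post', 'wordpress').encode('utf-8')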

Getting Blog Text from Google Reader

Since Google Reader does not have a real API we will rely on the Google Reader API lib by Mauro Asprea from Wish and BAM!. He is an active reader of this blog and a friend.

We will retrieve posts by Fred Wilson, one of the most prolific VC bloggers: he has blogged on an almost daily basis since 9/23/2003, and he includes the whole post within the feed.

Python code

#!/usr/bin/python
# -*- coding: utf-8 -*-

import sys
import time
from GoogleReader import  CONST
from GoogleReader.reader import GoogleReader
import lxml.html

USERNAME = '' # Replace with your Google Reader username
PASSWORD = '' # Replace with your Google Reader password. Not included in this post :-)

gr = GoogleReader()
login_info = (USERNAME, PASSWORD)
gr.identify(*login_info)
gr.login()

gr.add_subscription(url="http://feeds.feedburner.com/avc")
xmlfeed = gr.get_feed(url="http://feeds.feedburner.com/avc")

COUNT = 1000
i=0

print >>sys.stderr, "page:", i
for entry in xmlfeed.get_entries():
   print entry['title'].encode('utf-8'), time.ctime(entry['published'])
   doc = lxml.html.fromstring(entry['content']) # Thanks lxml.html for handling incomplete HTML documents!
   print doc.text_content().encode('utf-8')
   print "******************************************************************************************************"

continuation = xmlfeed.get_continuation()

i+=1
while continuation != None and i < COUNT:
   print >>sys.stderr, "page:", i
   xmlfeed = gr.get_feed(url="http://feeds.feedburner.com/avc", continuation = continuation)

   for entry in xmlfeed.get_entries():
      print entry['title'].encode('utf-8'), time.ctime(entry['published'])
      try:
         doc = lxml.html.fromstring(entry['content']) # Thanks lxml.html for handling incomplete HTML documents!
         print doc.text_content().encode('utf-8')
      except:
         print "------------------ ERROR -------------------"
         print entry['content']

      print "******************************************************************************************************"

   continuation = xmlfeed.get_continuation()
   i+=1

Notes

If you try this script you will realize that the oldest post retrieved is from 9/29/2005. The real first post, however, was on 9/23/2003. Why don’t we see it? I believe it is because Google Reader uses feed information from FeedBurner, which was launched in 2004 and acquired by Google in 2007, so they probably started recording feed entries then. Incidentally, Union Square Ventures was one of the original FeedBurner investors.

There is an easier way to retrieve the text in the specific case of Fred Wilson’s blog and other modern HTML5 sites. HTML5 provides an <article> tag, so you can simply crawl the whole site and retrieve the content within the <article> tag (see the sketch below). You’ll need an extra step to deduplicate the content, since many of the crawled pages will appear more than once. For example, if you follow categories like MBA Mondays you will find articles that also appear when you follow another path.
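A minimal sketch of that approach: pull every <article> element with lxml and deduplicate the posts by hashing their text. The URLs below are placeholders standing in for the output of a real crawl.

article-dedup-sketch.py

#!/usr/bin/python
# -*- coding: utf-8 -*-

# Sketch: extract the <article> element from already-crawled HTML5 pages and
# deduplicate posts that appear under several paths (categories, archives, ...)
# by hashing their text. The page list stands in for a real crawler.

import hashlib
import urllib2
import lxml.html

def unique_articles(urls):
    seen = set()
    for url in urls:
        doc = lxml.html.fromstring(urllib2.urlopen(url).read())
        for article in doc.xpath('//article'):
            text = article.text_content().strip()
            digest = hashlib.sha1(text.encode('utf-8')).hexdigest()
            if digest not in seen:  # skip duplicates reached via another path
                seen.add(digest)
                yield url, text

if __name__ == '__main__':
    for url, text in unique_articles(['http://example.com/some-post',
                                      'http://example.com/category/some-post']):
        print url, len(text)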

Lessons Learned

  • We can use Google Reader to easily extract text content from blogs.
  • Google Reader has its limitations: it doesn’t cover posts before a certain date and sometimes skips posts.
  • HTML5 adds a valuable new tag for differentiating article text from the rest of the content.

See Also

  1. Voice Recognition + Content Extraction + TTS = Innovative Web Browsing
  2. Google Search NoAPI

Additional Resources

  1. Newspaper: News, full-text, and article metadata extraction in Python 3
  2. boilerpipe: Boilerplate Removal and Fulltext Extraction from HTML pages
  3. Readability API
  4. HTML Content Extraction Questions on StackOverflow
  5. Google Reader Development Questions on StackOverflow

The Data Portability Fact Sheet

Introduction

Parallego has been announced on TechCrunch after a stealth period as the latest social network that will challenge Facebook and Google Plus. Their investors include big names like Sequoia Capital, Andreessen Horowitz and Union Square Ventures, and they have top angels like Ron Conway. They really love developers, so they offer an API to show their commitment to openness.

Parallego doesn’t really exist, but announcements like this one appear regularly in startup news about the web and entrepreneurship. These companies emphasize their love for developers and claim to be open because they provide APIs. The truth is that when you test those APIs you usually find a number of problems:

  1. You can read the information but cannot write or modify it.
  2. You have access to certain information but other information is unavailable.
  3. The API rate limit is low, so you can only make a few calls before having to wait a certain period of time to continue.
  4. You cannot make parallel requests in a multiprocess or multithreaded application.
  5. There is no way to quickly pay for the service and access a better service. Google API Console is a step in that direction but a lot of important Google NoAPIs are unavailable.
  6. Some OAuth2 implementations do not work with existing development libraries.
  7. The service says it welcomes new applications, but this is not the case for new UIs and mobile clients. See Twitter to Devs: Don’t Make Twitter Clients… Or Else [mashable.com]
  8. You cannot even export your own information. The time you have spent adding content to this service is lost once you leave it.
  9. There is no love for developers: the forums are filled with questions and there are no official answers. See Rate limit with billing enabled [google.com] and Graph API rate limit? [facebook.com]
  10. The company often changes its policies. The web mashup that you built seven months ago and that attracted thousands of users is now useless, because the new API revision does not give you the data that you need for some specific features. See Should facebook pay compensation for deprecated API calls and changes [facebook.com]
  11. Old content is removed without warning.

After a while, you begin to doubt, close your eyes, and think again about the word “Open”. It seems somewhat meaningless. If you are older you may remember that Microsoft was accused of being closed, but you may also remember that, in the worst case, you could reverse engineer the software and access all the internals yourself. You needed advanced knowledge of tools like IDA Pro, OllyDbg, and WinDbg of course, but it was possible. You can’t reverse engineer the cloud. You can scrape the information, but that is time consuming both in development and in running time.

And while “Open” is repeated in every announcement from high-profile web companies, your brain no longer registers the word, just as you no longer see the ads on Google because your brain has built its own AdBlock extension.

Data Portability Classification

For all of the above reasons we think the best initiative towards transparency is adding a fact sheet to every service so we can compare them and know how “open” they really are. WikiMatrix is a good example of how comparisons could be made.

Marco Paol from DBB has been informally collecting information about some web services and has put it in a public spreadsheet: Data Portability Comparison.

Please feel free to send us clarifications, suggestions, and fixes.

Resources

  1. Open Data and Linked Data [wikipedia.org]
  2. DataPortability project [wikipedia.org]
  3. Small data [smalldata.org]
  4. The open data manual [opendatamanual.org]
  5. Is It Open Data?
  6. Open Data mailing lists [okfn.org]
  7. Synaptic/Web
  8. Open Knowledge Foundation Blog
  9. The Friend of a Friend (FOAF) project
  10. theinfo.org: Community for Getting, Processing, and Visualizing Large Data Sets
  11. Plagiarism Today
  12. PeopleBrowsr’s case against Twitter heads back to state court after federal court ruling
  13. Archive Team archivists

Language Identification for Text Mining and NLP

The Tower of Babel and ships in a large marine landscape.

Introduction

Language identification is a key task in the text mining process. Successful analysis of extracted text with natural language processing or machine learning requires a good language identification algorithm. If it fails to recognize the language, this error will undermine all subsequent processes. NLP algorithms must be adjusted for different corpora and for the grammar of different languages, and certain NLP software is best suited to certain languages. For example, NLTK is the most popular natural language processing package for English under Python, while FreeLing is best for Spanish. The efficiency of language processing depends on many factors.

A very high level model for text analysis includes the following tasks:

Text Extraction
Text can be extracted by: scraping a web site, importing it in a specific format, getting it from a database, or accessing it via an API.

Text Identification
Text identification is the process of separating the interesting text from other content or formatting that adds noise to the analysis. For example, a blog can include advertising, menus, and other information besides the main content.

NLP
NLP is a set of algorithms to aid in the processing of different languages. See links to NLP software packages and articles here.

Machine Learning
Machine learning is a necessary step for tasks such as collaborative filtering, sentiment analysis and clustering.
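Putting the four tasks together, the model can be pictured as a simple pipeline. The following sketch uses hypothetical stubs for each stage; it only shows the shape of the flow, not real implementations.

text-analysis-pipeline-sketch.py

#!/usr/bin/python
# -*- coding: utf-8 -*-

# Minimal sketch of the high-level text analysis pipeline described above.
# Every function is a hypothetical stub standing in for a real implementation.

def extract_text(source):
    """Scrape a site, import a file, read a database or call an API."""
    return "<html>...</html>"

def identify_text(raw):
    """Separate the interesting text from ads, menus and other noise."""
    return "main content"

def identify_language(text):
    """Language identification; see the Google Language API example below."""
    return 'en'

def run_nlp(text, language):
    """Tokenization, tagging, parsing... adjusted to the detected language."""
    return ['main', 'content']

def run_machine_learning(features):
    """Collaborative filtering, sentiment analysis, clustering, ..."""
    return {'sentiment': 'neutral'}

if __name__ == '__main__':
    raw = extract_text('http://example.com')
    text = identify_text(raw)
    print run_machine_learning(run_nlp(text, identify_language(text)))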

Software Alternatives

There is a lot of language identification software available on the web. NLTK uses Crúbadán, while Gate includes TextCat. At Data Big Bang, we like to use the Google Language API because it is very accurate even for a single word, and it includes a confidence measure in the response.

Sadly, Google has deprecated the Google Language API family, and we have added it to our “Google NoAPI” list. It can be used until it is shut down.

Example Including an API Key

Google highly recommends including an API key with the API request. You can get one at http://code.google.com/apis/loader/signup.html or with the new Google API Console https://code.google.com/apis/console/. Use it as follows:

language-identification.py

#!/usr/bin/python

# Language Detection using Google Language API: http://code.google.com/apis/language/translate/v2/getting_started.html
# It can handle Unicode texts. You need to add your own exception/error handling.
import sys
import urllib
import urlparse
import simplejson

ENDPOINT = "https://www.googleapis.com/language/translate/v2/detect"
KEY = "" # Insert your key here. Get it from: https://code.google.com/apis/console/

def detect_language(text):
   utf8_encoded_text = text.encode('utf-8')
   query_field = urllib.urlencode({'key':KEY, 'q':utf8_encoded_text})
   parsed_url = urlparse.urlparse(ENDPOINT)
   url = urlparse.urlunparse((parsed_url[0], parsed_url[1], parsed_url[2], parsed_url[3], query_field, parsed_url[5]))

   data = simplejson.loads(urllib.urlopen(url).read())
   response = data['data']['detections'][0][0]

   return response # it answers: {'isReliable': , 'confidence': , 'language': }

if __name__ == '__main__':
   terminal_encoding = sys.stdin.encoding
   text = raw_input("Text? ")
   unicode_text = text.decode(terminal_encoding)
   response = detect_language(unicode_text)

   print response

The Google Language API for language identification is very easy to use and used to be very permissive in terms of usage limits; now the rate limit status can be found in the API console.

Benchmarking 

Different language identification algorithms can easily be benchmarked against Google’s. Testing with single words and short sentences is a good indicator, especially if the algorithms will be used on services like Twitter, where sentences are very short. A minimal benchmarking sketch follows below.
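Here is that sketch. It assumes the script above was saved as language_identification.py so that detect_language can be imported, and it uses a tiny hand-made test set; local_detect is a placeholder for whatever algorithm you want to evaluate against Google’s results.

benchmark-sketch.py

#!/usr/bin/python
# -*- coding: utf-8 -*-

# Benchmark sketch: measure accuracy of a detector over a tiny hand-made test set.
# Assumes the script above was saved as language_identification.py so that
# detect_language can be imported; local_detect is a stand-in for the algorithm
# you want to compare against Google's.

from __future__ import division

from language_identification import detect_language

TEST_SET = [
    (u'hello world', 'en'),
    (u'hola mundo', 'es'),
    (u'bonjour tout le monde', 'fr'),
]

def local_detect(text):
    return 'en'  # placeholder: plug in NLTK, TextCat, LingPipe, etc.

def accuracy(detector):
    hits = sum(1 for text, expected in TEST_SET if detector(text) == expected)
    return hits / len(TEST_SET)

if __name__ == '__main__':
    print 'Google accuracy:', accuracy(lambda t: detect_language(t)['language'])
    print 'Local accuracy: ', accuracy(local_detect)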

Resources

  1. Google Scholar search on language identification
  2. Google language detection
  3. Lingua Identify for Perl
  4. A language detection library for Java
  5. Language identification addition for NLTK
  6. Sentiment analysis and language processing tools
  7. Balie language identification
  8. Gate
  9. NLTK
  10. FreeLing
  11. TextCat and TextCat under Gate
  12. LingPipe

Scraping vs Antiscraping

Introduction

It’s not possible to jump into the subject of scrapers without confronting antiscraping techniques. The reverse is also true: if you want to develop good antiscraping techniques, you must think like a scraper developer. Similarly, real hackers need knowledge of security technologies, while a good security system benefits from simulated attacks. This kind of “game dynamic” also applies to security algorithms. For example, one of the best known public-key encryption algorithms, RSA, was invented by Ron Rivest, Adi Shamir and Leonard Adleman: Rivest and Shamir invented new algorithms and Adleman was in charge of breaking them, until they eventually came up with RSA [1].

Antiscraping Measures and How to Pass Them

A preliminary chart:

  • Antiscraping: The site only allows crawling by known search engine bots.
    Scraping: The scraper can access the search engine’s cache of the site.
  • Antiscraping: The site doesn’t allow the same IP to access many pages in a short period of time.
    Scraping: Use Tor, a set of rotating proxies, or a crawling service like 80legs (see the proxy rotation sketch after this list).
  • Antiscraping: The site shows a captcha if it is crawled in a massive way.
    Scraping: Use anti-captcha techniques, or services like Mechanical Turk where real people provide the answers. Another alternative is to take the audio version of the captcha and run it through speech recognition.
  • Antiscraping: The site relies on JavaScript.
    Scraping: Use a JavaScript-enabled crawler.
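As a companion to the second item in the list, here is a sketch of rotating requests over a pool of HTTP proxies with urllib2. The proxy addresses are placeholders (for Tor you would typically point at a local HTTP front-end such as Privoxy).

rotating-proxies-sketch.py

#!/usr/bin/python
# -*- coding: utf-8 -*-

# Sketch of one countermeasure from the chart above: spreading requests over a
# pool of HTTP proxies with urllib2. The proxy addresses are placeholders.

import itertools
import urllib2

PROXIES = ['http://127.0.0.1:8118',   # e.g. a local Privoxy in front of Tor
           'http://10.0.0.2:3128',
           'http://10.0.0.3:3128']

proxy_pool = itertools.cycle(PROXIES)

def fetch(url):
    proxy = next(proxy_pool)  # round-robin over the pool
    opener = urllib2.build_opener(urllib2.ProxyHandler({'http': proxy}))
    return opener.open(url).read()

if __name__ == '__main__':
    for i in range(5):
        print len(fetch('http://example.com/'))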

Many antiscraping measures are annoying for visitors. For example, if you’re a “search engine junkie”, you’ll find pretty quickly that Google shows you a captcha because it thinks you are a bot.

Digression

I believe the web should follow an MVC (Model View Controller) type of pattern, where you can access the data (the model) independently of how you interact with it. This would enable stronger connections between different sites. Linked Data is one such initiative, but there are others. Data Portability and APIs are a step towards this pattern, but when you use the APIs of large sites you realize that they impose a lot of limits. Starting a whole business based on third-party APIs is very risky: you only have to look at the past to see many changes in API features and policies, with Facebook, Google and Twitter as good examples. API providers are afraid of losing control of their sites and the profits they generate. We need new business models which can get around this problem and benefit both API providers and consumers, and not only models based on advertising. One common approach is to charge for use of the API. There are other models, like that of the Guardian, which distributes its ads via its API. APIs carrying advertising are a promising concept. We hope that more creative people will come up with new models for a better MVC web.

See Also

  1. Running Your Own Anonymous Rotating Proxies
  2. Distributed Scraping With Multiple Tor Circuits

References

  1. Leonard Adleman Interview

Further reading

  1. Captcha Recognition
  2. OCR Research Team
  3. Data Scraping with YQL and jQuery
  4. API Conference
  5. Google Calls Out Facebook’s Data Hypocrisy, Blocks Gmail Import
  6. Google Search NoAPI
  7. Kayak Search API is no longer supported
  8. The Guardian Open Platform
  9. Twitter Slashes API Rate Limits In Half Across The Board To Deal With Capacity Issues
  10. Facebook, you are doing it wrong
  11. Cubeduel Goes Viral Too Quickly, Stumbles Over LinkedIn API Limits
  12. Keyword Exchange Market
  13. A Union for Mechanical Turk Workers?
  14. The Long Tail Of Business Models
  15. Scraping, cleaning, and selling big data
  16. Detecting ‘stealth’ web-crawlers

Photo: Glykais Gyula fencing against Oreste Puliti. [Source]

Google Search NoAPI

History

Way back in 2001 I wanted to be able to query Google automatically. Since Google did not provide an official API, I developed a small, simple Google Search “NoAPI” scraper and published it as Googolplex. Google later launched a SOAP-based API, but on December 20, 2006 they stopped accepting signups for it [1] and suspended it on August 31, 2009 [2]. This shows that creating a service or product based on web APIs without an SLA contract is a very risky business. Google soon launched another API called the Google Ajax Web Search API [3] under a different license. This second API was suspended on November 1, 2010 [4]. You may wonder if Google is a bipolar creature. You can see the latest post at Fall Housekeeping.

Google has undergone a lot of changes since 2001, and Googolplex and other libraries like xgoogle are now part of Internet history. A similar new library is available in Mario Vilas’ blog post Quickpost: Using Google Search from your Python code.

It’s not clear why Google vacillates over what could be an additional source of revenue, but it is clear that we should expect Google to provide an official and easy-to-use API. There are ways Google could restrict abuse of its API by third parties. It’s very common to offer a free tier for low-volume searches and charge for more intensive use, as Yahoo BOSS does.

In this article we’ll examine one way of crawling information in AJAX/Javascript based sites.

Crawling Google As A Browser

If you go to Google and look at the HTML source code you’ll be astonished to see pure obfuscated JavaScript code. Even after performing a search, the source is no clearer.

So, here is our code to get Google’s results using HtmlUnit and Jython (we don’t have any affiliation with them, we just like them!). Look at our Web Scraping Ajax and Javascript Sites article for more information.

google.py

import com.gargoylesoftware.htmlunit.WebClient as WebClient
import com.gargoylesoftware.htmlunit.BrowserVersion as BrowserVersion

def query(q):
   webclient = WebClient(BrowserVersion.FIREFOX_3_6)
   url = "http://www.google.com"
   page = webclient.getPage(url)

   query_input = page.getByXPath("//input[@name='q']")[0]
   query_input.text = q
   search_button = page.getByXPath("//input[@name='btnG']")[0]
   page = search_button.click()
   results = page.getByXPath("//ol[@id='rso']/li//span/h3[@class='r']")

   c = 0
   for result in results:
      title = result.asText()
      href = result.getByXPath("./a")[0].getAttributes().getNamedItem("href").nodeValue
      print title, href
      c += 1

   print c,"Results"

if __name__ == '__main__':
   query("google web search api")

run.sh

/opt/jython/jython -J-classpath "htmlunit-2.8/lib/*" google.py

Alternatives

The following search engines provide official APIs for search:

Homework

  1. Write a clean function/class to do Google queries and handle exceptions.
  2. Modify the function to handle nested and paged results.
  3. Modify the function again, this time to include descriptions.

Final Notes

The approach taken by Mario Vilas is more API-like; our approach here is a defensive measure against NoAPIs. This is another good example of HtmlUnit doing its job.

By the way, the noapi.com domain is available [5].

See Also

  1. Extraction of Main Text Content Using the Google Reader NoAPI
  2. The Data Portability Fact Sheet

References

  1. Beyond the SOAP Search API
  2. A well earned retirement for the SOAP Search API
  3. Google AJAX Search API beta Version 1.0 Available
  4. Fall Housekeeping
  5. The noapi.com domain is available at the time of writing of this article. Register it now! (Disclaimer: affiliate link).

Additional Resources

  1. Google Search API?
  2. Google Deprecates Their SOAP Search API
  3. Google Search API Dropped
  4. Is this API going to be closed down?
  5. Yahoo BOSS Switching To Paid Model In Early 2011
  6. Thoughts on Yahoo! BOSS Monetization Announcement
  7. Google to Start Charging for Prediction API
  8. Update on Whitelisting (Twitter API policies discussion)
  9. From “Businesses” To “Tools”: The Twitter API ToS Changes