This repository was archived by the owner on Nov 27, 2019. It is now read-only.

First scraper tutorial (Ruby)

Liz Conlan edited this page Jun 11, 2013 · 1 revision

Write a real scraper by copying and pasting code, for programmers or non-programmers (30 minutes).

1. Make a new scraper

We’re going to scrape the average number of years children spend in school in different countries from this page, which was once on a UN site but has since been replaced with an Excel spreadsheet.

Go to the new ScraperWiki site and register. Create a dataset, choose the "Code in your browser" tool, then choose Ruby as the language. You'll get a web-based code editor.

Put in a few lines of code to show it runs, and click the “Run” button or type Ctrl+Return.

p "Hello, coding in the cloud!"

(As we go through this tutorial, you can copy and paste each block of code onto the end of your growing scraper, and run it each time.)

The code runs on ScraperWiki's servers. You can see any output you printed in the console at the bottom of the editor.

2. Download HTML from the web

You can use any normal Ruby library to crawl the web, such as Mechanize. There is also a simple built-in ScraperWiki library, which may be easier to use.

require 'scraperwiki'
html = ScraperWiki.scrape("http://web.archive.org/web/20110514112442/http://unstats.un.org/unsd/demographic/products/socind/education.htm")
p html[0..499]

3. Parsing the HTML to get your content

Nokogiri is the best Ruby library for extracting content from HTML.

require 'scraperwiki'
require 'nokogiri'

html = ScraperWiki.scrape("http://web.archive.org/web/20110514112442/http://unstats.un.org/unsd/demographic/products/socind/education.htm")
doc = Nokogiri::HTML(html)

doc.search("div[@align='left'] tr[@class='tcont']").each do |row|
  cells = row.search('td/text()')
  data = { 'country' => cells[0].text, 'years_in_school' => cells[4].text.to_i }
  p data
end

The bits of code like div, tr and td are selectors, much like those used to style HTML. Here we use them to pick out only the table rows whose class is tcont — that is, the rows in the main table body, rather than the header rows. Then, for each of those rows, we select the individual cells and extract the country name and the schooling statistic.

4. Saving to the ScraperWiki datastore

The datastore is a magic SQL store, one where you don't need to make a schema up front.

Replace p data in the doc.search loop with this save command (unlike in Python, indentation is not required for Ruby code to run, but consistent indentation is strongly recommended if you want to edit the code again later):

    ScraperWiki.save_sqlite(['country'], data)

The unique keys (just country in this case) identify each piece of data. When the scraper runs again, existing data with the same values for the unique keys is replaced.
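This replace-on-key behaviour can be sketched with a plain Ruby Hash standing in for the datastore. (save_sqlite does this against a real SQLite table; the store variable and upsert helper below are purely illustrative.)

```ruby
# Toy in-memory 'datastore', keyed on the unique key values,
# mimicking how save_sqlite replaces rows that share those values.
store = {}

def upsert(store, unique_keys, data)
  key = unique_keys.map { |k| data[k] }  # composite key from the unique columns
  store[key] = data                      # same key => existing row is replaced
end

upsert(store, ['country'], 'country' => 'Norway', 'years_in_school' => 17)
upsert(store, ['country'], 'country' => 'Norway', 'years_in_school' => 18) # replaces
upsert(store, ['country'], 'country' => 'Mali',   'years_in_school' => 8)

p store.size                               # => 2, not 3
p store[['Norway']]['years_in_school']     # => 18, the later value won
```

Running the scraper twice therefore doesn't duplicate the table — each country's row is simply overwritten with the latest values.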

Go to the "View in a table" tool to see the data loading in (you'll need to keep reloading the page). Notice that the code keeps running in the background, even when you're not in the "Code in your browser" tool. Wait until it has finished.

5. Getting the data out again

If you haven't done so yet, edit the title of your scraper.

Now, you can use other tools. For example choose "Download as spreadsheet".

For more complex queries, choose "Query with SQL" and try this query in the SQL query box.

select * from swdata order by years_in_school desc limit 10

It gives you the records for the ten countries where children spend the most years at school.
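If you would rather do the ordering in Ruby once the rows are out of the store, the same query can be reproduced with sort_by. (The sample rows here are invented stand-ins for the swdata table.)

```ruby
# Invented sample rows standing in for the swdata table.
rows = [
  { 'country' => 'Mali',      'years_in_school' => 8 },
  { 'country' => 'Norway',    'years_in_school' => 18 },
  { 'country' => 'Australia', 'years_in_school' => 20 },
]

# Equivalent of: select * from swdata order by years_in_school desc limit 10
top = rows.sort_by { |r| -r['years_in_school'] }.first(10)
top.each { |r| p r }
```

Negating the sort key gives descending order, and first(10) plays the role of the SQL limit clause.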

What next?

If you have a scraper you want to write, and feel ready, then get going. Otherwise try the other tutorials.
