WordPress Titles: scraping with search url
https://blog.scraperwiki.com/2013/03/wordpress-titles-scraping-with-search-url/
Mon, 11 Mar 2013 17:21:29 +0000

I've blogged for a few years now, and I've used several tools along the way. zachbeauvais.com began as a Drupal site, until I worked out that it was a bit of overkill, and switched to WordPress. Recently, I've been toying with the idea of using a static site generator (à la Jekyll or Hyde), or even pulling together a kind of ebook of ramblings. I also want to be able to arrange the posts based on the keywords they contain, regardless of how they're categorised or tagged.

Whatever I wanted to do, I ended up with a single point of messiness: individual blog posts, and how they're formatted. When I started, I seem to remember using Drupal's truly awful WYSIWYG editor, and tweaking the HTML soup it produced. Then, when I moved over to WordPress, it pulled all the posts and metadata through via RSS, and I tweaked them with the visual and text tools baked into the engine.

A couple of years ago, I started to write in Markdown, completely apart from the blog (thanks to full-screen writing and loud music). This gives me a local .md file, which I copy and paste into WordPress, using a plugin to get rid of the visual editor entirely.

So, I wrote a scraper to return a list of blog posts containing a specific term. What I hope is that this very simple scraper is useful to others (WordPress is pretty common, after all), and that I can get some ideas for improving it and for handling post content. If you haven't used ScraperWiki before, you might not know that you can see the raw scraper by clicking "view source" from the scraper's overview page (or going here if you're lazy).

This scraper is based on WordPress’ built-in search, which can be used by passing the search terms to a url, then scraping the resulting page:

http://zachbeauvais.com/?s=search_term&submit=Search

The scraper uses three Python libraries:

  • requests, to fetch the results page;
  • lxml, to parse the returned HTML;
  • scraperwiki, to save the results.

There are two variables which can be changed to search for other terms, or to use a different WordPress site:

[sourcecode language="python"]
import requests
import lxml.html
import scraperwiki

term = "coffee"
site = "http://www.zachbeauvais.com"
[/sourcecode]

The rest of the script is really simple: it creates a dictionary called "payload" which maps "s" to the keyword and "submit" to "Search". The "s" is in there to make up the search url: /?s=coffee …

Requests then GETs the site, passing payload as URL parameters, and I use the response's .text attribute to get the page's HTML, which I then parse with lxml into the new variable "root".

[sourcecode language="python"]
payload = {'s': str(term), 'submit': 'Search'}
r = requests.get(site, params=payload)  # This'll be the results page
html = r.text
root = lxml.html.fromstring(html)  # parse the HTML into the variable root
[/sourcecode]

Now, my WordPress theme renders the titles of the retrieved posts in <h1> tags with the CSS class "entry-title", so I loop through the parsed HTML, pulling out the link and text from each resulting h1.entry-title item. This part of the script would need tweaking, depending on the CSS class and heading tag your theme uses.

[sourcecode language="python"]
results = root.cssselect("h1.entry-title a")
if not results:
    print "No results."

for i in results:
    link = i.attrib['href']
    text = i.text_content()
    data = {
        'uri': link,
        'post-title': str(text),
        'search-term': str(term)
    }
    print link
    print text
    print data
    scraperwiki.sqlite.save(unique_keys=['uri'], data=data)
[/sourcecode]

The results go into an SQLite database via the ScraperWiki library, and I end up with a database containing the title and link of every blog post that mentions the keyword.

So, this could, in theory, run on any WordPress instance which uses the same search pattern URL—just change the site variable to match.

Also, you can run this again and again, changing the term to any new keyword. These will be stored in the DB with the keyword in its own column to identify what you were looking for.

See? Pretty simple scraping.

So, what I’d like next is to have a local copy of every post in a single format.

Has anyone got any ideas how I could improve this? And, has anyone used WordPress’ JSON API? It might be a logical next step to call the API to get the posts directly from the MySQL DB… but that would be a new blog post!
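
For what it's worth, here's a rough sketch of what that next step might look like. It assumes a WordPress site with the REST API available (modern WordPress ships it at /wp-json/wp/v2/; back then this would have needed the JSON API plugin, whose routes differ), so treat the endpoint and field names as assumptions rather than a tested recipe:

[sourcecode language="python"]
import requests

# Hypothetical: query the WordPress REST API directly instead of scraping
# the search results page. Endpoint and fields assume a modern WordPress.
api = site + "/wp-json/wp/v2/posts"
resp = requests.get(api, params={"search": term, "per_page": 20})

for post in resp.json():
    print post["link"]
    print post["title"]["rendered"]  # the title still comes back as HTML
[/sourcecode]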

Scraping the Royal Society membership list
https://blog.scraperwiki.com/2012/12/scraping-the-royal-society-membership-list/
Fri, 28 Dec 2012 13:44:38 +0000

To a data scientist any data is fair game, and through my interest in the history of science I came across the membership records of the Royal Society from 1660 to 2007, which are available as a single PDF file. I've scraped the membership list before: the first time around I wrote a C# application which parsed a plain text file that I had made from the original PDF using an online conversion service. Looking back at the code, it is fiendishly complicated and cluttered by the boilerplate required to build a GUI. ScraperWiki includes a pdftoxml function, so I thought I'd see if this would make the process of parsing easier, and compare the ScraperWiki experience more widely with my earlier scraper.

The membership list is laid out quite simply, as shown in the image below: each member (or Fellow) record spans two lines, with the member name in the leftmost column on the first line, and their dates of birth and death, the class of their Fellowship and their election date on the second line.

[Image: excerpt from the membership list PDF, showing the two-line record layout]

Later in the document we find that information on the Presidents of the Royal Society is found on the same line as the Fellow name and that Royal Patrons are formatted a little differently. There are also alias records where the second line points to the primary record for the name on the first line.

pdftoxml converts a PDF into an xml file, wherein each piece of text is located on the page using spatial coordinates; an individual line looks like this:

<text top="243" left="135" width="221" height="14" font="2">Abbot, Charles, 1st Baron Colchester </text>

This makes parsing columnar data straightforward: you simply select elements with particular values of the "left" attribute. It turns out that the columns are not in exactly the same positions throughout the whole document, which appears to have been constructed by tacking the A–J membership list onto the K–Z one, but this is easily resolved by accepting a small range of positions for each column.
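
As an illustration of that approach (a sketch rather than the scraper's own code: the second column position and the tolerance are made-up numbers, PDF_URL stands in for the real document, and it assumes the pdftoxml helper takes the raw PDF bytes):

import urllib2
import lxml.etree
import scraperwiki

pdfdata = urllib2.urlopen(PDF_URL).read()   # PDF_URL: placeholder for the membership list PDF
xml = scraperwiki.pdftoxml(pdfdata)         # the pdftoxml function mentioned above

NAME_LEFT, DATES_LEFT = 135, 155   # illustrative column positions; only 135 comes from the line above
TOLERANCE = 10                     # accept a small range, since the columns drift between sections

def which_column(left):
    # Decide which column a text element belongs to, or None if it matches neither.
    if abs(left - NAME_LEFT) <= TOLERANCE:
        return "name"
    if abs(left - DATES_LEFT) <= TOLERANCE:
        return "dates"
    return None

root = lxml.etree.fromstring(xml)
for element in root.iter("text"):
    column = which_column(int(element.get("left")))
    if column is not None:
        print column, element.xpath("string()").strip()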

Attempting to automatically parse all 395 pages of the document reveals some transcription errors: one Fellow was apparently elected on 16th March 197 – a bit of Googling reveals that the real date is 16th March 1978. Another Fellow is classed as a "Felllow"; most of the dates of birth and death are separated by a dash, but some are separated by an en dash, which as far as the code is concerned is something completely different; and so on. In my earlier iteration I missed some of these quirks, or fixed them by editing the converted text file. These variations suggest that the source document was typed up manually rather than being output from a pre-existing database. Since I couldn't edit the source document, I was obliged to code around these quirks.

ScraperWiki helpfully makes putting data into a SQLite database the simplest option for a scraper. My handling of dates in this version of the scraper is a little unsatisfactory: presidential terms are described in terms of a start and end year, but are rendered as 1st January of those years in the database. Furthermore, in historical documents dates may not be known accurately, so someone may have a birth date described as "circa 1782" or "c 1782"; even more vaguely, they may be described as having "flourished 1663-1778" or "fl. 1663-1778". Python's default datetime module does not capture this subtlety, and if it did, the database used to store the dates would need to support it too to be useful. I've addressed this by storing the original life span data as text, so that it can be analysed should the need arise; storing the dates themselves as proper dates in the database, rather than as text strings, means we can still query the database by date.

ScraperWiki provides an API to my dataset so that I can query it using SQL, and since it is public anyone else can do this too. So, for example, it's easy to write queries that tell you that the database contains 8019 Fellows, 56 Presidents, 387 Fellows born before 1700, 3657 with no birth date, 2360 with no death date, 204 who "flourished", and 450 with birth dates "circa" some year.

I can count the number of Fellows in each class:

select distinct class,count(*) from `RoyalSocietyFellows` group by class

Or make a table of all of the Presidents of the Royal Society:

select * from `RoyalSocietyFellows` where StartPresident not null order by StartPresident desc

…and so on. These illustrations just use the ScraperWiki htmltable export option to display the data as a table but equally I could use similar queries to pull data into a visualisation.

Comparing this to my earlier experience, the benefits of using ScraperWiki are:

  • Nice traceable code to provide a provenance for the dataset;
  • Access to the pdftoxml library;
  • Strong encouragement to “do the right thing” and put the data into a database;
  • Publication of the data;
  • A simple API giving access to the data for reuse by all.

My next target for ScraperWiki may well be the membership lists for the French Academie des Sciences, a task which proved too complex for a simple plain text scraper…

5 yr old goes 'potty' at Devon and Somerset Fire Service (Emergencies and Data Driven Stories)
https://blog.scraperwiki.com/2012/05/5-yr-old-goes-potty/
Fri, 25 May 2012 07:13:33 +0000

It’s 9:54am in Torquay on a Wednesday morning:

One appliance from Torquays fire station was mobilised to reports of a child with a potty seat stuck on its head.

On arrival an undistressed two year old female was discovered with a toilet seat stuck on her head.

Crews used vaseline and the finger kit to remove the seat from the childs head to leave her uninjured.

A couple of different interests directed me to scrape the latest incidents of the Devon and Somerset Fire and Rescue Service. The scraper that has collected the data is here.

Why does this matter?

Everybody loves their public safety workers — Police, Fire, and Ambulance. They save lives, give comfort, and are there when things get out of hand.

Where is the standardized performance data for these incident response workers? Real-time, rich data would revolutionize their governance and administration: it would give real evidence of whether there are too many or too few police, fire or ambulance personnel/vehicles/stations in any locale, and would enable imaginative and realistic policies built on major efficiency and resilience improvements all through the system.

For those of you who want to skip all the background discussion, just head directly over to the visualization.

A rose diagram showing incidents handled by the Devon and Somerset Fire Service

The easiest method to monitor the needs of the organizations is to see how much work each employee is doing, and add more or take away staff depending on their workloads. The problem is, for an emergency service that exists on standby for unforeseen events, there needs to be a level of idle capacity in the system. Also, there will be a degree of unproductive make-work in any organization; indeed, a lot of form filling currently happens around the place, despite there being no accessible data at the end of it.

The second easiest method of oversight is to compare one area with another. I have an example from California City Finance where the Excel spreadsheet of Fire Spending By city even has a breakdown of the spending per capita and as a percentage of the total city budget. The city to look at is Vallejo which entered bankruptcy in 2008. Many of its citizens blamed this on the exorbitant salaries and benefits of its firefighters and police officers. I can’t quite see it in this data, and the story journalism on it doesn’t provide an unequivocal picture.

The best method for determining the efficient and robust provision of such services is to have an accurate and comprehensive computer model on which to run simulations of the business and experiment with different strategies. This is what Tesco or Walmart or any large corporation would do in order to drive up its efficiency and monitor and deal with threats to its business. There is bound to be a dashboard in Tesco HQ monitoring the distribution of full fat milk across the country, and they would know to three decimal places what percentage of the product was being poured down the drain because it got past its sell-by date, and, conversely, whenever too little of the substance had been delivered such that stocks ran out. They would use the data to work out what circumstances caused changes in demand. For example, school holidays.

I have surveyed many of the documents within the Devon & Somerset Fire & Rescue Authority website, and have come up with no evidence of such data or its analysis anywhere within the organization. This is quite a surprise, and perhaps I haven’t looked hard enough, because the documents are extremely boring and strikingly irrelevant.

Under the hood – how it all works

The scraper itself has gone through several iterations. It currently operates through three functions: MainIndex(), MainDetails(), MainParse(). Data for each incident is put into several tables joined by the IncidentID value derived from the incident’s static url, eg:

http://www.dsfire.gov.uk/News/Newsdesk/IncidentDetail.cfm?IncidentID=7901&siteCategoryId=3&T1ID=26&T2ID=41
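
As a sketch of how that ID might be derived (not necessarily the scraper's exact code), the IncidentID parameter can be pulled straight out of the query string:

import urlparse

def incident_id(url):
    # Pull the IncidentID parameter out of an incident page URL (Python 2's urlparse module).
    query = urlparse.urlparse(url).query
    return urlparse.parse_qs(query)["IncidentID"][0]

print incident_id("http://www.dsfire.gov.uk/News/Newsdesk/IncidentDetail.cfm?IncidentID=7901&siteCategoryId=3&T1ID=26&T2ID=41")
# prints 7901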

MainIndex() works through their incident search form, grabbing 10 days at a time and saving the URL for each individual incident page into the table swdata.

MainDetails() downloads each of those incident pages, parsing the obvious metadata and saving the remaining HTML content of the description into the database. (This function used to attempt to parse the text as well, but I moved that into the third function so I could develop it more easily.) A good way to find the urls that have not yet been downloaded and saved into swdetails is the following SQL statement:

select swdata.IncidentID, swdata.urlpage 
from swdata 
left join swdetails on swdetails.IncidentID=swdata.IncidentID 
where swdetails.IncidentID is null 
limit 5

We then download the HTML from each of the five urlpages, save it into the table under the column divdetails and repeat until no more unmatched records are retrieved.
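
In outline, that step looks something like the sketch below. It is simplified rather than the scraper verbatim, and it assumes the requests library plus the classic ScraperWiki helpers scraperwiki.sqlite.select() and scraperwiki.sqlite.save() with a table_name argument:

import requests
import scraperwiki

# Fetch up to five incident pages we haven't stored yet and save their HTML.
rows = scraperwiki.sqlite.select("""
    swdata.IncidentID, swdata.urlpage from swdata
    left join swdetails on swdetails.IncidentID = swdata.IncidentID
    where swdetails.IncidentID is null limit 5""")

for row in rows:
    html = requests.get(row["urlpage"]).text
    # ...parse the obvious metadata from html here...
    scraperwiki.sqlite.save(unique_keys=["IncidentID"],
                            data={"IncidentID": row["IncidentID"], "divdetails": html},
                            table_name="swdetails")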

MainParse() performs the same progressive operation on the HTML contents of divdetails, saving the results into the table swparse. Because I was developing this function experimentally, to see how much information I could extract from the free-form text, I frequently had to drop and recreate enough of the table for the join command to work:

scraperwiki.sqlite.execute("drop table if exists swparse")
scraperwiki.sqlite.execute("create table if not exists swparse (IncidentID text)")

After marking the text down (by replacing the <p> tags with linefeeds), we have text that reads like this (emphasis added):

One appliance from Holsworthy was mobilised to reports of a motorbike on fire. Crew Commander Squirrell was in charge.

On arrival one motorbike was discovered well alight. One hose reel was used to extinguish the fire. The police were also in attendance at this incident.
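
The "marking down" step that produced the text above is roughly this (a sketch: the divdetails variable, named after the column above, and the final tag-stripping pass are my assumptions, not lifted from the scraper):

import re

# Replace <p> tags with linefeeds, then strip any remaining markup,
# leaving plain text for the regular expressions below to chew on.
details = re.sub("(?i)</?p[^>]*>", "\n", divdetails)
details = re.sub("<[^>]+>", " ", details).strip()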

We can get who is in charge and what their rank is using this regular expression:

re.findall("(crew|watch|station|group|incident|area)s+(commander|manager)s*([w-]+)(?i)", details)

You can see the whole table here, including silly names, misspellings, and clear flaws in my regular expression, such as not being able to handle the case of a first name and a last name being included. (The personnel misspellings suggest that either these incident reports are not integrated with their actual incident logs, where you would expect persons to be identified by their code numbers, or their record keeping is terrible.)

For detecting how many vehicles were in attendance, I used this algorithm:

appliances = re.findall("(?i)(\S+) (?:(fire|rescue) )?(appliances?|engines?|tenders?|vehicles?)(?: from ([A-Za-z]+))?", details)
nvehicles = 0
for scount, fire, engine, town in appliances:
    if town and "town" not in data:
        data["town"] = town.lower()
    if re.match("one|1|an?|another", scount, re.I):    count = 1
    elif re.match("two|2", scount, re.I):              count = 2
    elif re.match("three", scount, re.I):              count = 3
    elif re.match("four", scount, re.I):               count = 4
    else:                                              count = 0
    nvehicles += count

And now onto the visualization

It’s not good enough to have the data. You need to do something with it. See it and explore it.

For some reason I decided that I wanted to graph the hour of the day each incident took place, and produced this time rose, which is a polar bar graph with one sector showing the number of incidents occurring each hour.

You can filter by the day of the week, the number of vehicles involved, the category, year, and fire station town. Then click on one of the sectors to see all the incidents for that hour, and click on an incident to read its description.

Now, if we matched our stations against the list of all stations, and geolocated the incident locations using the Google Maps API (subject to not going OVER_QUERY_LIMIT), then we would be able to plot a map of how far the appliances were driving to respond to each incident. Even better, I could post the start and end locations into the Google Directions API, and get journey times and an idea of which roads and junctions are the most critical.

There’s more. What if we could identify when the response did not come from the closest station, because it was over capacity? What if we could test whether closing down or expanding one of the other stations would improve the performance in response to the database of times, places and severities of each incident? What if each journey time was logged to find where the road traffic bottlenecks are? How about cross-referencing the fire service logs for each incident with the equivalent logs held by the police and ambulance services, to identify the Total Response Cover for the whole incident – information that’s otherwise balkanized and duplicated among the three different historically independent services.

Sometimes it’s also enlightening to see what doesn’t appear in your datasets. In this case, one incident I was specifically looking for strangely doesn’t appear in these Devon and Somerset Fire logs: On 17 March 2011 the Police, Fire and Ambulance were all mobilized in massive numbers towards Goatchurch Cavern – but the Mendip Cave Rescue service only heard about it via the Avon and Somerset Cliff Rescue. Surprise surprise, the event’s missing from my Fire logs database. No one knows anything of what is going on. And while we’re at it, why are they separate organizations anyway?

Next up, someone else can do the Cornwall Fire and Rescue Service and see if they can get their incident search form to work.

Protect your scrapers!
https://blog.scraperwiki.com/2011/06/protect-your-scrapers/
Fri, 17 Jun 2011 13:55:06 +0000

You know how it is.

You wrote your scraper on a whim. Because it’s a wiki, some other people found it, and helped fix bugs in it and extend it.

Time passes.

And now your whole business depends on it.

For when that happens, we’ve just pushed an update that lets you protect scrapers.

This stops new people editing them. It’s for when your scraper is being used in a vital application, or if you’re storing data that you can never get back again just by rescraping it.

To protect one of your scrapers, or indeed views, scroll down on the overview page to the Contributors section.

It says “This scraper is public” – the default, ultra liberal policy. So radical that Zarino, ScraperWiki’s designer, suggested that the icon next to that should be a picture of Richard Stallman. We went for a dinky group of people instead.

If you’re the owner, pick edit to change the security of the scraper. Then choose “Protected”.

You can then use the “Add a new editor” button to let someone new work on the scraper, or “demote” to stop someone working on it.

Protecting scrapers is completely free. Later in the summer we’ll be introducing premium accounts so you can have private scrapers.

Cardiff Hacks and Hackers Hacks Day
https://blog.scraperwiki.com/2011/03/cardiff-hacks-and-hackers-hacks-day/
Tue, 15 Mar 2011 16:12:32 +0000

What's occurin'? Loads, in fact, at our first Welsh Hacks and Hackers Hack Day! From schools from space to catering colleges with a Food Safety Standard of 2, we had an amazing day. Check out the video by Gavin Owen:

We got five teams:

Co-Ordnance – This project aimed to be a local business tracker. They wanted to make the London Stock Exchange code into meaningful data, but alas, the stock exchange prevents scraping. So they decided to use company data from registers like the LSE and Companies House to extract business information and structure it for small businesses who need to know the best place to set up, and for local business activists.

The team consisted of 3 hacks (Steve Fossey, Eva Tallaksen from Intrafish and Gareth Morlais from BBC Cymru) and 3 hackers (Carey Hiles, Craig Marvelley and Warren Seymour, all from Box UK).

It’s a good thing they had some serious hackers as they had a serious hack on their hands. Here’s a scraper they did for the London Stock Exchange ticker. And here’s what they were able to get done in just one day!

This was just a locally hosted site but the map did allow users to search for types of businesses by region, see whether they’d been dissolved and by what date.

Open Senedd – This project aimed to be a Welsh version of TheyWorkforYou: a way for people in Wales to find out how assembly members voted in plenary meetings. It tackles the worthy task of making assembly members' voting records accessible and transparent.

The team consisted of 2 hacks (Daniel Grosvenor from CLIConline and Hannah Waldram from Guardian Cardiff) and 2 hackers (Nathan Collins and Matt Dove).

They spent the day hacking away and drew up an outline for www.opensenedd.org.uk. We look forward to the birth of their project! Which may or may not look something like this (left). Minus Coke can and laptop hopefully!

They took on a lot for a one day project but devolution will not stop the ScraperWiki digger!

There's no such thing as a free school meal – This project aimed to extract information on Welsh schools from inspection reports. This involved getting unstructured Estyn reports on all 2698 Welsh schools into ScraperWiki.

The team consisted of 1 hack (Izzy Kaminski) and 2 astronomer hackers (Edward Gomez and Stuart Lowe from LCOGT).

This small team managed to scrape Welsh schools data (which the next team stole!) and had time to make a heat map of schools in Wales. This was done using some sort of astronomical tool. Their longer term aim is to overlay the map with information on child poverty and school meals. A worthy venture and we wish them well.

Ysgoloscope – This project aimed to be a Welsh version of Schooloscope: making information about schools accessible and interactive for parents to explore. It used Edward's scraper of horrible PDF Estyn inspection reports. These use a different rating methodology to Ofsted's (devolution is not good for data journalism!).

The team consisted of 6 hacks (Joni Ayn Alexander, Chris Bolton, Bethan James from the Stroke Association, Paul Byers, Geraldine Nichols and Rachel Howells), 1 hacker (Ben Campbell from Media Standards Trust) and 1 troublemaker (Esko Reinikainen).

Maybe it was a case of too many hacks or just trying to narrow down what area of local government to tackle, but the result was a plan. Here is their presentation and I’m sure parents all over Wales are hoping to see Ysgoloscope up and running.

Blasus – This project aimed to map food hygiene ratings across Wales and to correlate them with deprivation indices. They noticed that the Food Standards Agency site doesn't work for this purpose, which would be the most useful one.

The team consisted of 4 hacks (Joe Goodden from the BBC, Alyson Fielding, Charlie Duff from HRZone and Sophie Paterson from the ATRiuM) and 1 hacker (Dafydd Vaughan from CF Labs).

As you can see below they created something which they presented on the day. They used this scraper and made an interactive map with food hygiene ratings, symbols and local information. Amazing for just a day’s work!

And the winners are… (drum roll please)

  • 1st Prize: Blasus
  • 2nd Prize: Open Senedd
  • 3rd Prize: Co-Ordnance
  • Best Scoop: Blasus for finding a catering college in Merthyr with a Food Hygiene Standard rating of just 2
  • Best Scraper: Co-Ordnance

A big shout out

To our judges Glyn Mottershead from Cardiff School of Journalism, Media and Cultural Studies, Gwawr Hughes from Skillset and Sean Clarke from The Guardian.

And our sponsors Skillset, Guardian Platform, Guardian Local and Cardiff School of Journalism, Media and Cultural Studies.

Schools, businesses and eating places of Wales – you've been ScraperWikied!

Blasus winning first prize and Best Scoop award (prizes will be delivered, sealed with a handshake from our sponsor).


Be alert! Your scrapers need alerts
https://blog.scraperwiki.com/2011/01/be-alert-your-scrapers-need-lerts/
Mon, 31 Jan 2011 11:24:03 +0000

It's important to know when your scrapers have stopped working, so you can fix them.

And if someone else makes a change to one of your scrapers, you need to know, so you can check it’s OK and thank them.

Over the next day or two, if you have made or contributed to a scraper on ScraperWiki, you’ll start to see emails like this.

They happen once a day. If that’s too much, there’s a link at the bottom so you can unsubscribe on your profile page.



We’ve been testing this in the team for a couple of weeks, but I’m sure you’ll have suggestions and ideas for improving it. Let us know!
