We are therefore very pleased to announce that ScraperWiki has won the Knight News Challenge!
The Knight Foundation are spending $280,000 over 2 years for us to improve ScraperWiki as a platform for journalists, and to run events to bring together journalists and programmers across the United States.
America has trailblazing organisations that do data and journalism well already – for example, both ProPublica and the Chicago Tribune have excellent data centers to support their news content. Our aim is to lower the barrier to entry into data-driven journalism and to create (an order of magnitude) more of this type of success. So come join our campaign for America: Yes We Can (Scrape). PS: We are politically neutral but think open source when it comes to campaign strategy!
What are we going to do to the platform?
As well as polishing ScraperWiki to make it easier to use, and creating journalism-focussed tutorials and screencasts, we’re adding four specific services for journalists:
- Data embargo, so journalists can keep their stories secret until going to print, but publish the data in a structured, reusable, public form with the story.
- Data on demand service. Journalists often need the right data ordered quickly; we’re going to create a smooth process so they can get it.
- News application hosting. We’ll make it scalable and easier to deploy.
- Data alerts. Automatically get leads from changing data. For example, watch bridge repair schedules, and email when one isn’t being maintained.
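To make the data alerts idea concrete, here is a minimal sketch of the kind of check a scraper could run against bridge repair schedules. The field names and the two-year inspection interval are illustrative assumptions, not part of any real dataset:

```python
# Sketch of a "data alert": flag bridges whose last inspection is
# overdue, so a scraper run can trigger an email or tweet.
# Field names and the inspection interval are hypothetical.
from datetime import date

INSPECTION_INTERVAL_DAYS = 730  # assumed: inspections due every two years

def overdue_bridges(records, today):
    """Return names of bridges whose last inspection is overdue."""
    return [
        r["name"]
        for r in records
        if (today - r["last_inspected"]).days > INSPECTION_INTERVAL_DAYS
    ]

# In practice these records would come from a scraped maintenance schedule.
records = [
    {"name": "North Bridge", "last_inspected": date(2009, 3, 1)},
    {"name": "Dean Bridge",  "last_inspected": date(2011, 2, 15)},
]

print(overdue_bridges(records, date(2011, 6, 1)))  # -> ['North Bridge']
```

A real alert would wrap this in a scheduled scraper run and send the result by email or post it from a Twitter bot, as in the Edinburgh planning example below.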
Here are two concrete examples of ScraperWiki being used already in similar ways:
- We manually embargoed the MP data that James Ball requested from us for his Guardian front page.
- The Edinburgh planning news application is embedded in the Greener Leith site, and now has alerts in the form of a Twitter bot.
Where in the US are we going to go?
What really matters about ScraperWiki is the people using it. Data is dead if it doesn’t have someone, a journalist or a citizen, analysing it, finding stories in it and making decisions from it.
We’re running Data Journalism Camps in each of a dozen states. These will be similar in format to our hacks and hackers hack days, which we’ve run across the UK and Ireland over the last year.
The camps will have two parts.
- Making something. In teams of journalists and coders, using data to dig into a story, or make or prototype a news app, all in one day.
- Scraping tutorials. For journalists who want to learn how to code, and programmers who want to know more about scraping and ScraperWiki.
This video of our event in Liverpool gives a flavour of what to expect.
Get in touch if you’d like us to stop near you, or are interested in helping or sponsoring the camps.
The project is designed to be financially stable in the long term. While the public version of ScraperWiki will remain free, we will charge for extra services such as keeping data private, and data on demand. We’ll be working with B2B media, as well as consumer media.
As with all Knight-financed projects, the code behind ScraperWiki is open source, so newsrooms won’t be building a dependency on something they can’t control.
For more details you can read our original application (note that financial amounts have changed since then).
Finally, and most importantly, I’d like to congratulate and thank everyone who has worked on, used or supported ScraperWiki. The Knight News Challenge had 1,600 excellent applications, so this is a real validation of what we’re doing, both with data and with journalism.