We’ll analyse and improve your business data flows.
We make processes more efficient: we’ll introduce automation, give you cleaner data flows, and help you draw new insights from your data.
We build data-rich web applications.
We’re experienced in building websites that front large collections of data. These give users the power to explore and understand the data, to reuse and repurpose it, and to visualise it in imaginative ways.
We have a team of highly qualified scientists.
We’re able to understand and implement complex analysis in natural language processing, machine learning, quantitative analysis, and artificial intelligence.
Industry case studies:
Increasing Government impact and usability with data science and data engineering
Turning a sea of public and private data into organisational action and global change
How we engage
We can help you build a vision for improving your business operations. We bring deep knowledge of a wide range of technologies and techniques; you bring an intimate knowledge of your domain.
To help build a vision, ScraperWiki carry out a piece of work called “Discovery”.
This phase takes between 5 and 20 days’ work, depending on the scale of the project.
Based on an evaluation of User Needs, ScraperWiki will report on what the minimum viable product (MVP) will look like, and estimate its cost.
We follow standard Agile processes to deliver the Alpha, Beta and Live phases of the project; a key feature of this process is the continual delivery of usable functionality.
The delivery is structured into a series of “Sprints”, typically lasting 2 weeks and culminating in a show-and-tell demo of the new features developed. Communication is at the heart of Agile, and brief daily “Stand-up” reports ensure all stakeholders are aware of progress.
We’ve implemented this model successfully for a variety of organisations.