Book review – ScraperWiki (https://blog.scraperwiki.com)

Book review: Docker Up & Running by Karl Matthias and Sean P. Kane (17 July 2015)

This last week I have been reading Docker Up & Running by Karl Matthias and Sean P. Kane, a newly published book on Docker – a container technology which is designed to simplify the process of application testing and deployment.

Docker is a very new product, first announced in March 2013, although it is based on older technologies. It has seen rapid uptake by a number of major web-based companies who have open-sourced their tooling for using Docker. We have been using Docker at ScraperWiki for some time, and our most recent projects use it in production. It addresses a common problem for which we have tried a number of technologies in search of a solution.

For a long time I have thought of Docker as providing some sort of cut-down virtual machine; from this book I realise this is the wrong mindset – it is better to think of it as a “process wrapper”. The “Advanced Topics” chapter of this book explains how this is achieved technically. This makes Docker a much lighter weight, faster proposition than a virtual machine.

Docker is delivered as a single binary containing both client and server components. The client gives you the power to build Docker images and query the server which hosts the running Docker images. The client part of this system will run on Windows, Mac and Linux systems. The server will only run on Linux due to the specific Linux features that Docker utilises in doing its stuff. Mac and Windows users can use boot2docker to run a Docker server; boot2docker runs the server in a minimal Linux virtual machine, which removes some of the performance advantages of Docker but allows you to develop anywhere.

The problem Docker and containerisation are attempting to address is that of capturing the dependencies of an application and delivering them in a convenient package. It allows developers to produce an artefact, the Docker Image, which can be handed over to an operations team for deployment without to and froing to get all the dependencies and system requirements fixed.

Docker can also address the problem of a development team onboarding a new member who needs to get the application up and running on their own system in order to develop it. Previously such problems were addressed with a flotilla of technologies with varying strengths and weaknesses, things like Chef, Puppet, Salt, Juju, virtual machines. Working at ScraperWiki I saw each of these technologies causing some sort of pain. Docker may or may not take all this pain away but it certainly looks promising.

The Docker image is compiled from instructions in a Dockerfile which has directives to pull down a base operating system image from a registry, add files, run commands and set configuration. The “image” language is probably where my false impression of Docker as virtualisation comes from. Once we have made the Docker image there are commands to deploy and run it on a server, inspect any logging and do debugging of a running container.
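
To make that concrete, here is a minimal sketch of the build/run/inspect cycle using the Docker SDK for Python (the docker package); the image tag and Dockerfile location are made up, and the build() return value shown follows recent SDK versions.

import docker

# Connect to the local Docker daemon (the "server" part of Docker).
client = docker.from_env()

# Build an image from a Dockerfile in the current directory.
# In recent SDK versions build() returns (image, build_log_generator).
image, build_log = client.images.build(path=".", tag="myapp:latest")

# Run the image as a detached container, then inspect its logs.
container = client.containers.run("myapp:latest", detach=True)
print(container.logs().decode())

# Containers are expected to be disposable: stop and remove when done.
container.stop()
container.remove()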

Docker is not a “total” solution: it has nothing to say about triggering builds, or bringing up hardware or managing clusters of servers. At ScraperWiki we’ve been developing our own systems to do this, which is clearly the approach that many others are taking.

Docker Up & Running is pretty good at laying out what it is you should do with Docker, rather than what you can do with Docker. For example the book makes clear that Docker is best suited to hosting applications which have no state. You can copy files into a Docker container to store data but then you’d need to work out how to preserve those files between instances. Docker containers are expected to be volatile – here today gone tomorrow or even here now, gone in a minute. The expectation is that you should preserve state outside of a container using environment variables, Amazon’s S3 service or an externally hosted database etc – depending on the size of the data. The material in the “Advanced Topics” chapter highlights the possible Docker runtime options (and then advises you not to use them unless you have very specific use cases). There are a couple of whole chapters on Docker in production systems.

If my intention was to use Docker “live and in anger” then I probably wouldn’t learn how to do so from this book since the landscape is changing so fast. I might use it to identify what it is that I should do with Docker, rather than what I can do with Docker. For the application side of ScraperWiki’s business the use of Docker is obvious; for the data science side it is not so clear. For our data science work we make heavy use of Python’s virtualenv system which captures most of our dependencies without being opinionated about data (state).

The book has information in it up until at least the beginning of 2015. It is well worth reading as an introduction and overview of Docker.

Dr Ian Hopkinson is Senior Data Scientist at ScraperWiki, where we often use Docker to help customers manage their data. You can read more about our professional services here.

Book Review: Learning Spark by Holden Karau, Andy Konwinski, Patrick Wendell and Matei Zaharia (6 July 2015)

Apache Spark is a system for doing data analysis which can be run on a single machine or across a cluster. It is pretty new technology – initial work was in 2009 and Apache adopted it in 2013. There’s a lot of buzz around it, and I have a problem for which it might be appropriate. The goal of Spark is to be faster and more amenable to iterative and interactive development than Hadoop MapReduce, a sort of IPython of Big Data. I used my traditional approach to learning more: buying a dead-tree publication, Learning Spark by Holden Karau, Andy Konwinski, Patrick Wendell and Matei Zaharia, and then reading it on my commute.

The core of Spark is the resilient distributed dataset (RDD), a data structure which can be distributed over multiple computational nodes. Creating an RDD is as simple as passing a file URL to a constructor (the file may be located on some Hadoop-style system) or parallelizing an in-memory data structure. To this data structure are added transformations and actions. Transformations produce another RDD from an input RDD, for example filter() returns an RDD which is the result of applying a filter to each row in the input RDD. Actions produce a non-RDD output, for example count() returns the number of elements in an RDD.
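
As a flavour of how that reads in PySpark, here is a minimal sketch; the log file name and the filter condition are invented for illustration.

from pyspark import SparkContext

sc = SparkContext("local", "rdd-example")

# Create an RDD from a text file (could equally be an HDFS URL)...
lines = sc.textFile("server.log")
# ...or by parallelizing an in-memory data structure.
numbers = sc.parallelize([1, 2, 3, 4, 5])

# Transformations return new RDDs and are evaluated lazily.
errors = lines.filter(lambda line: "ERROR" in line)

# Actions trigger computation and return a non-RDD result.
print(errors.count())
print(numbers.reduce(lambda a, b: a + b))

sc.stop()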

Spark provides functionality to control how parts of an RDD are distributed over the available nodes i.e. by key. In addition there is functionality to share data across multiple nodes using “Broadcast Variables”, and to aggregate results in “Accumulators”. The behaviour of Accumulators in distributed systems can be complicated since Spark might preemptively execute the same piece of processing twice because of problems on a node.
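
A small sketch of those two mechanisms in PySpark, reusing the SparkContext sc from the sketch above; the lookup table and input values are made up.

# A broadcast variable ships a read-only value to every node once.
lookup = sc.broadcast({"GB": "United Kingdom", "NL": "Netherlands"})

# An accumulator aggregates values sent back from the workers.
blank_lines = sc.accumulator(0)

def expand(code):
    if code == "":
        blank_lines.add(1)
    return lookup.value.get(code, "unknown")

countries = sc.parallelize(["GB", "NL", "", "GB"]).map(expand)
print(countries.collect())
print(blank_lines.value)  # only reliable after an action has run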

In addition to Spark Core there are Spark Streaming, Spark SQL, MLlib machine learning, GraphX and SparkR modules. Learning Spark covers the first three of these. The Streaming module handles data such as log files which are continually growing over time using a DStream structure which is comprised of a sequence of RDDs with some additional time-related functions. Spark SQL introduces the DataFrame data structure (previously called SchemaRDD) which enables SQL-like queries using HiveQL. The MLlib library introduces a whole bunch of machine learning algorithms such as decision trees, random forests, support vector machines, naive Bayesian and logistic regression. It also has support routines to normalise and analyse data, as well as clustering and dimension reduction algorithms.
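
Here is a rough sketch of the DataFrame route in PySpark, assuming the Spark 1.4-era API that was current when this review was written; the JSON file and its columns are hypothetical.

from pyspark.sql import SQLContext

sqlContext = SQLContext(sc)  # sc is an existing SparkContext

# Load JSON records into a DataFrame (the structure formerly known as SchemaRDD).
people = sqlContext.read.json("people.json")
people.printSchema()

# Register the DataFrame as a temporary table and query it with SQL.
people.registerTempTable("people")
adults = sqlContext.sql("SELECT name, age FROM people WHERE age >= 18")
adults.show()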

All of this functionality looks pretty straightforward to access, example code is provided for Scala, Java and Python. Scala is a functional language which runs on the Java virtual machine so appears to get equivalent functionality to Java. Python, on the other hand, appears to be a second class citizen: some functionality, particularly around I/O, lacks Python support. This does beg the question as to whether one should start analysis in Python and make the switch as and when required, or whether to start in Scala or Java, to which you may well be forced anyway. Perhaps the intended usage is Python for prototyping and Java/Scala for production.

The book is pitched at two audiences, data scientists and software engineers, as is Spark itself. This would explain support for Python and (more recently) R, to keep the data scientists happy, and Java/Scala for the software engineers. I must admit looking at examples in Python and Java together, I remember why I love Python! Java requires quite a lot of class declaration boilerplate to get it into the air, and brackets.

Spark will run on a standalone machine, I got it running on Windows 8.1 in short order. Analysis programs appear to be deployable to a cluster unaltered with the changes handled in configuration files and command line options. The feeling I get from Spark is that it would be entirely appropriate to undertake analysis with Spark which you might do using pandas or scikit-learn locally, and if necessary you could scale up onto a cluster with relatively little additional effort rather than having to learn some fraction of the Hadoop ecosystem.

The book suffers a little from covering a subject area which is rapidly developing: Spark is currently at version 1.4 as of early June 2015, the book covers version 1.1 and things are happening fast. For example, GraphX and SparkR, more recent additions to Spark, are not covered. That said, this is a great little introduction to Spark, I’m now minded to go off and apply my new-found knowledge to the Kaggle – Avito Context Ad Clicks challenge!

Book review: Mastering Gephi Network Visualisation by Ken Cherven (15 June 2015)

A little while ago I reviewed Ken Cherven’s book Network Graph Analysis and Visualisation with Gephi, and it’s fair to say I was not very complimentary about it. It was rather short, and had quite a lot of screenshots. Its strength was in introducing every single element of the Gephi interface. This book, Mastering Gephi Network Visualisation by Ken Cherven, is a different, and better, book.

Networks in this context are collections of nodes connected by edges; such networks are ubiquitous. The nodes may be people in a social network, and the edges their friendships. Or the nodes might be proteins and metabolic products and the edges the reaction pathways between them. Or any other of a multitude of systems. I’ve reviewed a couple of other books in this area including Barabási’s popular account of the pervasiveness of networks, Linked, and van Steen’s undergraduate textbook, Graph Theory and Complex Networks, which cover the maths of network (or graph) theory in some detail.

Mastering Gephi is a practical guide to using the Gephi network visualisation software; it covers the more theoretical material regarding networks in a peripheral fashion. Gephi is the most popular open source network visualisation system of which I’m aware, it is well-featured and under active development. Many of the network visualisations you see of, for example, Twitter social networks, will have been generated using Gephi. It is a pretty complex piece of software, and if you don’t want to rely on information on the web, or taught courses, then Cherven’s books are pretty much your only alternative.

The core chapters are on layouts, filters, statistics, segmenting and partitioning, and dynamic networks. Outside this there are some more general chapters, including one on exporting visualisations and an odd one on “network patterns” which introduced diffusion and contagion in networks but then didn’t go much further.

I found the layouts chapter particularly useful; it’s a review of the various layout algorithms available. In most cases there is no “correct” way of drawing a network on a 2D canvas, layout algorithms are designed to distribute nodes and edges on a canvas to enable the viewer to gain understanding of the network they represent. From this chapter I discovered the directed acyclic graph (DAG) layout which can be downloaded as a Gephi plugin. Tip: I had to go search this plugin out manually in the Gephi Marketplace, it didn’t get installed when I indiscriminately tried to install all plugins. The DAG layout is good for showing tree structures such as organisational diagrams.

I learnt of the “Chinese Whispers” and “Markov clustering” algorithms for identifying clusters within a network in the chapter on segmenting and partitioning. These algorithms are not covered in detail but sufficient information is provided that you can try them out on a network of your choice, and go look up more information on their implementation if desired. The filtering chapter is very much about the mechanics of how to do a thing in Gephi (filter a network to show a subset of nodes), whilst the statistics chapter is more about the range of network statistical measures known in the literature.

I was aware of the ability of Gephi to show dynamic networks, ones that evolved over time, but had never experimented with this functionality. Cherven’s book provides an overview of this functionality using data from baseball as an example. The example datasets are quite appealing, they include social networks in schools, baseball, and jazz musicians. I suspect they are standard examples in the network literature, but this is no bad thing.

The book follows the advice that my old PhD supervisor gave me on giving presentations: tell the audience what you are going to tell them, tell them and then tell them what you told them. This works well for the limited time available in a spoken presentation, repetition helps the audience remember, but it feels a bit like overkill in a book. In a book we can flick back to remind us what was written earlier.

It’s a bit frustrating that the book is printed in black and white, particularly at the point where we are asked to admire the blue and yellow parts of a network visualisation! The referencing is a little erratic with a list of books appearing in the bibliography but references to some of the detail of algorithms only found in the text.

I’m happy to recommend this book as a solid overview of Gephi for those that prefer to learn from dead tree, such as myself. It has good coverage of Gephi features, and some interesting examples. In places it is a little shallow and repetitive.

The publisher sent me this book, free of charge, for review.

Book review: Cryptocurrency by Paul Vigna and Michael J. Casey (22 May 2015)

Amongst hipster start ups in the tech industry Bitcoin has been a thing for a while. As one of the more elderly members of this community I wanted to understand a bit more about it. Cryptocurrency: How Bitcoin and Digital Money are Challenging the Global Economic Order by Paul Vigna and Michael Casey fits this bill.

Bitcoin is a digital currency: the Bitcoin has a value which can be exchanged against other currencies but it has no physical manifestation. The really interesting thing is how Bitcoins move around without any central authority; there is no Bitcoin equivalent of the Visa or BACS payment systems with their attendant organisations, or central bank as in the case of a normal currency. This division between Bitcoin as currency and Bitcoin as decentralised exchange mechanism is really important.

Conventional payment systems like Visa have a central organisation which charges retailers a percentage on every payment made using their system. This is exceedingly lucrative. Bitcoin replaces this with the blockchain – a distributed, cryptographically secured ledger of transactions. The validation is carried out by so-called ‘miners’ who are paid in Bitcoin for carrying out a computationally intensive cryptographic task which ensures the scarcity of Bitcoin and helps maintain its value. In principle anybody can be a Bitcoin miner, all they need is the required free software and the ability to run the software. The generation of new Bitcoin is strictly controlled by the fundamental underpinnings of the blockchain software. Bitcoin miners are engaged in a hardware arms race with each other as they compete to complete units on the blockchain, more processing power equals more chances to complete blocks ahead of the competition and hence win more Bitcoin. In practice mining meaningful quantities these days requires significant, highly specialised hardware.
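
As a toy illustration of what that computationally intensive task looks like (a simplified proof-of-work, not the real Bitcoin protocol), here is a sketch in Python: vary a nonce until the hash of the block data meets a difficulty target.

import hashlib

def mine(block_data, difficulty=4):
    """Find a nonce so that the SHA-256 hash starts with `difficulty` zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("some transactions")
print(nonce, digest)
# Increasing `difficulty` by one multiplies the expected work by 16,
# which is why miners compete on raw processing power.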

Vigna and Casey provide a history of Bitcoin starting with a bit of background as to how economists see currency, which amounts to the familiar division between the Austrian school and the Keynesians. The Austrians are interested in currency as gold, whilst the Keynesians are interested in currency as a medium for exchange. As a currency Bitcoin doesn’t appeal to Keynesians since there is no “quantitative easing” in Bitcoin: the government can’t print money.

Bitcoin did not appear from nowhere: during the late 90s and the early years of this century there were corporate attempts at building digital currencies. These died away; they had the air of lone wolf operations hidden within corporate structures which met their end perhaps when they filtered up to a certain level and their threat to the current business model was revealed. Or perhaps in the chaos of the financial collapse.

More pertinently there were the cypherpunks, a group interested in cryptography operating on the non-governmental, non-corporate side of the community. This group was also experimenting with ideas around digital currencies. This culminated in 2008 with the launch of Bitcoin, by the elusive Satoshi Nakamoto, to a cryptography mailing list. Nakamoto has since disappeared, no one has ever met him, no one knows whether he is the pseudonym of one of the cypherpunks, and if so, which one.

Following its release Bitcoin experienced a period of organic growth with cryptography enthusiasts and the technically curious. As the Bitcoin currency grew, an ecosystem started to grow around it, beginning with more user-friendly routes to accessing the blockchain – wallets to hold your Bitcoins, digital currency exchanges and tools to inspect the transactions on the blockchain.

Bitcoin has suffered reverses, most notoriously the collapse of the Mt Gox currency exchange and its use in the Silk Road market, which specialised in illegal merchandise. The Mt Gox collapse demonstrated both flaws in the underlying protocol and its vulnerability to poorly managed components in the ecosystem. Alongside this has been the wildly fluctuating value of the Bitcoin against other conventional currencies.

One of the early case studies in Cryptocurrency is of women in Afghanistan, forbidden by social pressure if not actual law from owning private bank accounts. Bitcoin provides them with a means for gaining independence and control over at least some financial resources. There is the prospect of it becoming the basis of a currency exchange system for the developing world where transferring money within a country or sending money home from the developed world are as yet unsolved problems, beset both with uncertainty and high costs.

To my mind Bitcoin is an interesting idea, as a traditional currency it feels like a non-starter but as a decentralized transaction mechanism it looks very promising. The problem with decentralisation is: who do you hold accountable? In two senses, firstly the technical sense – what if the software is flawed? Secondly, conventional currencies are backed by countries not software, a country has a stake in the success of a currency and the means to execute strategies to protect it. Bitcoin has the original vision of a vanished creator, and a very small team of core developers. As an aside Vigna and Casey point out there is a limit within Bitcoin of 7 transactions per second which compares with 10,000 transactions per second handled by the Visa network.

It’s difficult to see what the future holds for Bitcoin, Vigna and Casey run through some plausible scenarios. Cryptocurrency is well-written, comprehensive and pitched at the right technical level.

Book review: How Linux Works by Brian Ward (14 April 2015)

A break since my last book review, since I’ve been coding, rather than reading, on the commute into the ScraperWiki offices in Liverpool. Next up is How Linux Works by Brian Ward. In some senses this book follows on from Data Science at the Command Line by Jeroen Janssens. Data Science was about doing analysis with command line incantations; How Linux Works tells us about the system in which that command line exists and makes the incantations less mysterious.

I’ve had long experience with doing analysis on Windows machines, typically using Matlab, but over many years I have also dabbled with Unix systems including Silicon Graphics workstations, DEC Alphas and, more recently, Linux. These days I use Ubuntu to ensure compatibility with my colleagues and the systems we deploy to the internet. Increasingly I need to know more about the underlying operating system.

I’m looking to monitor system resources, manage devices and configure my environment. I’m not looking for a list of recipes, I’m looking for a mindset. How Linux Works is pretty good in this respect. I had a fair understanding of pipes in *nix operating systems before reading the book; another fundamental I learnt from How Linux Works was that files are used to represent processes and memory. The book is also good on where these files live – although this varies a bit with distribution and time. Files are used liberally to provide configuration.
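
For example, processes and memory show up as plain files under /proc which can be read like any other file; a quick sketch (the paths are standard on most Linux systems):

import os

# Memory statistics are exposed as a plain text file.
with open("/proc/meminfo") as f:
    print(f.readline().strip())   # e.g. "MemTotal: 16303428 kB"

# Each running process gets a directory named after its PID.
pid = os.getpid()
with open(f"/proc/{pid}/status") as f:
    for line in f:
        if line.startswith(("Name:", "VmRSS:")):
            print(line.strip())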

The book has 17 chapters covering the basics of Linux and the directory hierarchy, devices and disks, booting the kernel and user space, logging and user management, monitoring resource usage, networking and aspects of shell scripting and developing on Linux systems. They vary considerably in length with those on developing relatively short. There is an odd chapter on rsync.

I got a bit bogged down in the chapters on disks, how the kernel boots, how user space boots and networking. These chapters covered their topics in excruciating detail, much more than required for day to day operations. The user startup chapter tells us about systemd, Upstart and System V init – three alternative mechanisms for booting user space. Systemd is the way of the future, in case you were worried. Similarly, the chapters on booting the kernel and managing disks at a very low level provide more detail than you are ever likely to need. The author does suggest the more casual reader skip through the more advanced areas but frankly this is not a directive I can follow. I start at the beginning of a book and read through to the end, none of this “skipping bits” for me!

The user environments chapter has a nice section explaining clearly the sequence of files accessed for profile information when a terminal window is opened, or other login-like activity. Similarly the chapters on monitoring resources seem to be pitched at just the right level.

Ward’s task is made difficult by the complexity of the underlying system. Linux has an air of “If it’s broke, fix it and if it ain’t broke, fix it anyway”. Ward mentions at one point that a service in Linux had not changed for a while, therefore it was ripe for replacement! Each new distribution appears to have heard about standardisation (i.e. where to put config files) but has chosen to ignore it. And if there is consistency in the options to Linux commands it is purely co-incidental. I think this is my biggest bugbear in Linux: I know which command to use but the right option flags are just blindly remembered.

The more Linux-oriented faction of ScraperWiki seemed impressed by the coverage of the book. The chapter on shell scripting is enlightening, providing the mindset rather than the detail, so that you can solve your own problems. It’s also pragmatic in highlighting where to stop with shell scripting and move to another language. I was disturbed to discover that the open-square bracket character in shell script is actually a command. This “explain the big picture rather than trying to answer a load of little questions” approach is a mark of a good technical book. The detail you can find on Stack Overflow or by Googling.
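
That open square bracket is easy to check for yourself; a small sketch (the exact path varies between distributions):

import shutil

# "[" is a real executable (part of GNU coreutils), a synonym for "test".
print(shutil.which("["))     # typically /usr/bin/[
print(shutil.which("test"))  # typically /usr/bin/test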

How Linux Works has a good bibliography; it could do with a glossary of commands and an appendix of the more in-depth material. That said it’s exactly the book I was looking for, and the writing style is just right. For my next task I will be filleting it for useful commands, and if someone could see their way to giving me a Dell XPS Developer Edition for “review”, I’ll be made up.

Book review: Data Science at the Command Line by Jeroen Janssens (10 February 2015)

In the mixed environment of ScraperWiki we make use of a broad variety of tools for data analysis. Data Science at the Command Line by Jeroen Janssens covers tools available at the Linux command line for doing data analysis tasks. The book is divided thematically into chapters on Obtaining, Scrubbing, Modeling, Interpreting Data with “intermezzo” chapters on parameterising shell scripts, using the Drake workflow tool and parallelisation using GNU Parallel.

The original motivation for the book was a desire to move away from purely GUI-based approaches to data analysis (I think he means Excel and the Windows ecosystem). This is a common desire for data analysts: GUIs are very good for a quick look-see but once you start wanting to repeat analysis or even repeat visualisation they become more troublesome. And launching Excel just to remove a column of data seems a bit laborious. Windows does have its own command line, PowerShell, but it’s little used by data scientists. This book is about the Linux command line, and the examples are all available on a virtual machine populated with all of the tools discussed in the book.

The command line is at its strongest with the early steps of the data analysis process, getting data from places, carrying out relatively minor acts of tidying and answering the question “does my data look remotely how I expect it to look?”. Janssens introduces the battle-tested tools sed, awk, and cut which we use around the office at ScraperWiki. He also introduces jq (the JSON parser); this is a more recent introduction but it’s great for poking around in JSON files as commonly delivered by web APIs. An addition I hadn’t seen before was csvkit which provides a suite of tools for processing CSV at the command line, I particularly like the look of csvstat. csvkit is a Python tool and I can imagine using it directly in Python as a library.
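
As a flavour of driving those csvkit tools from Python rather than typing them at a prompt, here is a minimal sketch; the file and column names are made up, while csvcut and csvstat are real csvkit commands.

import subprocess

# Select two columns from a CSV, then summarise them with csvstat.
selected = subprocess.run(
    ["csvcut", "-c", "country,population", "data.csv"],
    capture_output=True, text=True, check=True,
)
summary = subprocess.run(
    ["csvstat"],
    input=selected.stdout, capture_output=True, text=True, check=True,
)
print(summary.stdout)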

The style of the book is to provide a stream of practical examples for different command line tools, and illustrate their application when strung together. I must admit to finding shell commands deeply cryptic in their presentation, with chunks of options effectively looking like someone typing a strong password. Data Science is not an attempt to clear the mystery of these options, more an indication that you can work great wonders on finding the right incantation.

Next up is the Rio tool for using R at the command line, principally to generate plots. I suspect this is about where I part company with Janssens on his quest to use the command line for all the things. Systems like R, ipython and the ipython notebook all offer a decent REPL (read-evaluation-print-loop) which will convert seamlessly into an actual program. I find I use these REPLs for experimentation whilst I build a library of analysis functions for the job at hand. You can write an entire analysis program using the shell but it doesn’t mean you should!

Weka provides a nice example of smoothing the command line interface to an established package. Weka is a machine learning library written in Java; it is the code behind Data Mining: Practical Machine Learning Tools and Techniques. The edges to be smoothed are that the bare command line for Weka is somewhat involved since it requires a whole pile of boilerplate. Janssens demonstrates nicely how to do this by developing automatic autocompletion hints for the parts of Weka which are accessible from the command line.

The book starts by pitching the command line as a substitute for GUI driven applications which is something I can agree with to at least some degree. It finishes by proposing the command line as a replacement for a conventional programming language with which I can’t agree. My tendency would be to move from the command line to Python fairly rapidly perhaps using ipython or ipython notebook as a stepping stone.

Data Science at the Command Line is definitely worth reading if not following religiously. It’s a showcase for what is possible rather than a reference book as to how exactly to do it.

Book review: Remote Pairing by Joe Kutner (9 February 2015)

Pair programming is an important part of the Agile process but sometimes the programmers are not physically co-located. At ScraperWiki we have staff who do both scheduled and ad hoc remote working, so methods for working together remotely are important to us. As a result of a casual comment on Twitter, I picked up Remote Pairing by Joe Kutner which covers just this subject.

Remote Pairing is a short volume, less than 100 pages. It starts with the motivation for pair programming and some presentation of the evidence for its effectiveness. It then goes on to cover some of the more social aspects of pairing – how do you tell your partner you need a “comfort break”? This theme makes a slight reprise in the final chapter with some case studies of remote pairing. And then into technical aspects.

The first systems mentioned are straightforward audio/visual packages including Skype and Google Hangouts. I’d not seen ScreenHero previously but it looks like it wouldn’t be an option for ScraperWiki since our developers work primarily in Ubuntu; ScreenHero only supports Windows and OS X currently. We use Skype regularly for customer calls, and Google Hangouts for our daily standup. For pairing we typically use appear.in which provides audio/visual connections and screensharing without the complexities of wrangling Google’s social ecosystem which come into play when we try to use Google Hangouts.

But these packages are not about shared interaction, for this Kutner starts with the vim/tmux combination. This is venerable technology built into Linux systems, or at least easily installable. Vim is the well-known editor, tmux allows a user to access multiple terminal sessions inside one terminal window. The combination allows programmers to work fully collaboratively on code, both partners can type into the same workspace. You might even want to use vim and tmux when you are standing next to one another. The next chapter covers proxy servers and tmate (a fork of tmux) which make the process of sharing a session easier by providing tunnels through the Cloud.

Remote Pairing then goes on to cover interactive screensharing using vnc and NoMachine, these look like pretty portable systems. Along with the chapter on collaborating using plugins for IDEs this is something we have not used at ScraperWiki. Around the office none of us currently make use of full blown IDEs despite having used them in the past. Several of us use Sublime Text for which there is a commercial sharing product (floobits) but we don’t feel sufficiently motivated to try this out.

The chapter on “building a pairing server” seems a bit out of place to me, the content is quite generic. Perhaps because at ScraperWiki we have always written code in the Cloud we take it for granted. The scheme Kutner follows uses vagrant and Puppet to configure servers in the Cloud. This is a fairly effective scheme. We have been using Docker extensively which is a slightly different thing, since a Docker container is not a virtual machine.

Are we doing anything different in the office as a result of this book? Yes – we’ve got a good quality external microphone (a Blue Snowball), and it’s so good I’ve got one for myself. Managing audio is still something that seems a challenge for modern operating systems. To a human it seems obvious that if we’ve plugged in a headset and opened up Google Hangouts then we might want to talk to someone and that we might want to hear their voice too. To a computer this seems unimaginable. I’m looking to try out NoMachine when a suitable occasion arises.

Remote Pairing is a handy guide for those getting started with remote working, and it’s a useful summary for those wanting to see if they are missing any tricks.

Book review: Graph Databases by Ian Robinson, Jim Webber and Emil Eifrem (3 January 2015)

Regular readers will know I am on a bit of a graph binge at the moment. In computer science and mathematics graphs are collections of nodes joined by edges; they have all sorts of applications including the study of social networks and route finding. Having covered graph theory and visualisation, I now move on to graph databases. I started on this path with Seven Databases in Seven Weeks which introduces the Neo4j graph database.

And so to Graph Databases by Ian Robinson, Jim Webber and Emil Eifrem which, despite its general title, is really a book about Neo4j. This is no big deal since Neo4j is the leading open source graph database.

This is not just random reading, we’re working on an EU project, NewsReader, which makes significant use of RDF – a type of graph-shaped data. We’re also working on a project for a customer which involves traversing a hierarchy of several thousand nodes. This leads to some rather convoluted joining operations when done on a SQL database, a graph database might be better suited to the problem.

The book starts with some definitions, identifying the types of graph database (property graph, hypergraph, RDF). Neo4j uses property graphs where nodes and edges are distinct items and each can hold properties. In contrast RDF graphs are expressed as triples which encompass both edges and nodes. In hypergraphs multiple edges can be expressed as a single item. A second set of definitions are regarding the types of graph processing system: graph databases and graph analytical engines. Neo4j is designed to provide good performance for database-like queries, acting as a backing store for a web application rather than an analytical engine to carry out offline calculations. There’s also an Appendix comparing NoSQL databases which feels like it should be part of the introduction.

A key feature of native graph databases, such as Neo4j, is “index-free adjacency”. The authors don’t seem to define this well early in the book but later on, whilst discussing the internals of Neo4j, it is all made clear: nodes and edges are stored as fixed-length records with references to a list of nodes to which they are connected. This means it’s very fast to visit a node, and then iterate over all of its attached neighbours. The alternative index-based lookups may involve scanning a whole table to find all links to a particular node. It is in the area of traversing networks that Neo4j shines in performance terms compared to SQL.
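
A rough way to picture the difference in Python: with index-free adjacency each node record carries direct references to its neighbours, whereas the index-based alternative scans (or indexes) a global edge table.

# Index-free adjacency: each node holds references to its neighbours,
# so expanding a node is a simple list walk.
graph = {
    "alice": ["bob", "carol"],
    "bob": ["alice"],
    "carol": ["alice", "bob"],
}
neighbours = graph["alice"]              # cost proportional to the neighbour count

# Index/join style: relationships live in one big edge table and every
# lookup means scanning (or consulting an index on) the whole table.
edges = [("alice", "bob"), ("alice", "carol"), ("carol", "bob")]
neighbours_via_scan = [b for a, b in edges if a == "alice"]

print(neighbours, neighbours_via_scan)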

As Robinson et al emphasise in motivating the use of graph databases: other types of NoSQL database, and SQL databases, are not built fundamentally around the idea of relationships between data except in quite a constrained sense. For SQL databases there is an overhead to carrying out join queries, which are SQL’s way of introducing relationships. As I hinted earlier, storing hierarchies in SQL databases leads to some nasty looking, slow queries. In practice SQL databases are denormalised for performance reasons to address these cases. Graph databases, on the other hand, are all about relationships.

Schemas are an important concept in SQL databases; they are used to enforce constraints on a database, i.e. “this thing must be a string” or “this thing must be in this set”. Neo4j describes itself as “schema optional”: the schema functionality seems relatively recently introduced and is not discussed in this book although it is alluded to. As someone with a small background in SQL, the absence of schema in NoSQL databases is always the cause of some anxiety and distress.

A chapter on data modelling and the Cypher query language feels like the heart of the book. People say that Neo4j is “whiteboard friendly” in that if you can draw a relationship structure on a whiteboard then you can implement it in Neo4j without going through the rigmarole of making some normalised schema that doesn’t look like what you’ve drawn. This seems fair up to a point: your whiteboard scribbles do tend to be guided to a degree by what your target system is, and you can still go wrong in moving from whiteboard to data model, even in Neo4j.

I imagine it is no accident that more recent query languages like Cypher and SPARQL look a bit like SQL. Although that said, Cypher relies on ASCII art to MATCH nodes wrapped in round brackets and edges (relationships) wrapped in square brackets with arrows -> indicating the direction of relationships:

MATCH (node1)-[rel:TYPE]->(node2)
RETURN rel.property

which is pretty un-SQL-like!

Graph Databases goes on to describe implementing an application using Neo4j. The example code in the book is in Java but there appears to be a relatively mature Python client in py2neo. The situation here seems to be in flux, since searching the web brings up references to an older python-embedded library which is now deprecated. The book pre-dates Neo4j 2.0, which introduced some significant changes.
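
For what it's worth, running the Cypher query shown earlier from Python with py2neo looks roughly like the sketch below; the connection details are placeholders and the exact method names have shifted between py2neo releases (graph.run here follows the more recent API), so treat this as indicative rather than definitive.

from py2neo import Graph

# Connection URL and credentials are placeholders for a local Neo4j server.
graph = Graph("bolt://localhost:7687", auth=("neo4j", "password"))

query = """
MATCH (node1)-[rel:TYPE]->(node2)
RETURN rel.property
"""

for record in graph.run(query):
    print(record["rel.property"])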

The book finishes with some examples from the real world and some demonstrations of popular graph theory analysis. I liked the real world examples of a social recommendation system, access control and parcel routing. The coverage of graph theory analysis was rather brief, and didn’t explicitly use Cypher, which would have made the presentation different from what you find in the usual graph theory textbooks.

Overall I have mixed feelings about this book: the introduction and overview sections are good, as is the part on Neo4j internals. It’s a rather slim volume, feels a bit disjointed and is not up to date with Neo4j 2.0 which has significant new functionality. Perhaps this is not the arena for a dead-tree publication – the Neo4j website has a comprehensive set of reference and tutorial material, and if you are happy with a purely electronic version then you can get Graph Databases for free (here).

Book review: Linked by Albert-László Barabási (27 November 2014)

I am on a bit of a graph theory binge; it started with an attempt to learn about Gephi, the graph visualisation software, which developed into reading a proper grown up book on graph theory. I then learnt a little more about practicalities on reading Seven Databases in Seven Weeks, which included a section on Neo4J – a graph database. Now I move on to Linked by Albert-László Barabási; this is a popular account of the rise of the analysis of complex networks in the late nineties. A short subtitle used on earlier editions was “The New Science of Networks”. The rather lengthy subtitle on this edition is “How Everything Is Connected to Everything Else and What It Means for Business, Science, and Everyday Life”.

In mathematical terms a graph is an abstract collection of nodes linked by edges. My social network is a graph comprised of people, the nodes, and their interactions such as friendships, which are the edges. The internet is a graph, comprising routers as the nodes, with the links between them as the edges. “Network” is a less formal term often used synonymously with graph; “complex” is more a matter of taste but it implies large, and with a structure which cannot be trivially described, i.e. “each node has four edges” is not a complex network.

The models used for the complex networks discussed in this book are the descendants of the random networks first constructed by Erdős and Rényi. They imagined a simple scheme whereby nodes in a network were randomly connected with some fixed probability. This generates a particular type of random network which does not replicate real-world networks such as social networks or the internet. The innovations introduced by Barabási and others are in the measurement of real world networks and new methods of construction which produce small-world and scale-free network models. Small-world networks are characterised by clusters of tightly interconnected nodes with a few links between those clusters; they describe social networks. Scale-free networks contain nodes with any number of connections but where nodes with larger numbers of connections are less common than those with a small number. For example on the web there are many web pages (nodes) with a few links (edges) but there exist some web pages with thousands and thousands of links, and all values in between.
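
A small sketch of the two families of model using the Python networkx library; the network sizes and parameters are arbitrary.

import networkx as nx
from collections import Counter

# Erdős–Rényi: every pair of nodes is connected with fixed probability p.
random_net = nx.erdos_renyi_graph(n=1000, p=0.01)

# Barabási–Albert: new nodes prefer to attach to well-connected nodes,
# giving a scale-free degree distribution with a few highly linked hubs.
scale_free_net = nx.barabasi_albert_graph(n=1000, m=5)

for name, g in [("random", random_net), ("scale-free", scale_free_net)]:
    degrees = [d for _, d in g.degree()]
    print(name, "max degree:", max(degrees),
          "most common degrees:", Counter(degrees).most_common(3))
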
I’ve long been aware of Barabási’s work, dating back to my time as an academic where I worked in the area of soft condensed matter. The study of complex networks was becoming a thing at the time, and of all the areas of physics soft condensed matter was closest to it. Barabási’s work was one of the sparks that set the area going. The connection with physics is around so-called power laws which are found in a wide range of physical systems. The networks that Barabási is so interested in show power law behaviour in the number of connections a node has. This has implications for a wide range of properties of the system such as robustness to the removal of nodes, transport properties and so forth. The book starts with some historical vignettes on the origins of graph theory, with Euler and the bridges of Königsberg problem. It then goes on to discuss various complex networks with some coverage of the origins of their study and the work that Barabási has done in the area. As such it is a pretty personal review. Barabási also recounts some of the history of “six degrees of separation”, the idea that everyone is linked to everyone else by only six links. This idea had its traceable origins back in the early years of the 20th century in Budapest.

Graph theory has been around for a long while, and the study of random networks for 50 years or so. Why the sudden surge in interest? It boils down to a couple of factors: the first is the internet, which provides a complex network of physical connections on which a further complex network of connections sits in the form of the web. The graph structure of this infrastructure is relatively easy to explore using automatic tools, you can build a map of millions of nodes with relative ease compared to networks in the “real” world. Furthermore, this complex network infrastructure and the rise of automated experiments have improved our ability to explore and disseminate information on physical networks. For example, the network of chemical interactions in a cell, the network of actors in movies, our social interactions, the spread of disease and so forth. In the past getting such detailed information on large networks was tiresome and the distribution mechanisms for such data slow and inconvenient.

For a book written a few short years ago, Linked can feel strangely dated. It discusses Apple’s failure in the handheld computing market with the Newton palm top device, and the success of Palm with their subsequent range. Names of long forgotten internet companies float by, although even at the time of writing Google was beginning its dominance.

If you are new to graph theory and want an unchallenging introduction then Linked is a good place to start. It’s readable and has a whole load of interesting examples of scale-free networks in the wild. Whilst not the whole of graph theory, this is where interesting new things are happening.
Book review: Seven Databases in Seven Weeks by Eric Redmond and Jim R. Wilson (12 November 2014)

I came to databases a little late in life; as a physical scientist I didn’t have much call for them. Then a few years ago I discovered the wonders of relational databases and the power of SQL. The ScraperWiki platform strongly encourages you to save data to SQLite databases to integrate with its tools.

There is life beyond SQL databases, much of it evolved in the last few years. I wanted to learn more, and a plea on Twitter quickly brought me a recommendation for Seven Databases in Seven Weeks by Eric Redmond and Jim R. Wilson.

The book covers the key classes of database starting with relational databases in the form of PostgreSQL. It then goes on to look at six further databases in the so-called NoSQL family – all relatively new compared to venerable relational databases. The six other databases fall into several classes: Riak and Redis are key-value stores, CouchDB and MongoDB are document databases, HBase is a columnar database and Neo4J is a graph database.

Relational databases are characterised by storage schemas involving multiple interlinked tables containing rows and columns; this layout is designed to minimise the repetition of data and to provide maximum query-ability. Key-value stores only store a key and a value in the manner of a dictionary but the “value” may be of a complex type. A value can be returned very fast given a key – this is the core strength of the key-value stores. The document stores MongoDB and CouchDB store JSON “documents” rather than rows. These documents can store information in nested hierarchies which don’t necessarily need to all have the same structure; this allows maximum flexibility in the type of data to be stored but at the cost of ease of query.
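
A quick sketch of the key-value idea using the redis-py client; it assumes a Redis server running locally, and the keys and values are made up.

import json
import redis

r = redis.Redis(host="localhost", port=6379)

# The "value" can be a simple string...
r.set("greeting", "hello")
print(r.get("greeting"))          # b'hello'

# ...or something more structured, serialised by the application.
r.set("user:42", json.dumps({"name": "Ada", "skills": ["python", "sql"]}))
user = json.loads(r.get("user:42"))
print(user["name"])

# Retrieval is always by key; there is no querying over values.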

HBase fits into the Hadoop ecosystem, the language used to describe it looks superficially like that used to describe tables in a relational database but this is a bit misleading. HBase is designed to work with massive quantities of data but not necessarily give the full querying flexibility of SQL. Neo4J is designed to store graph data – collections of nodes and edges – and comes with a query language particularly suited to querying (or walking) data so arranged. This seems very similar to triplestores and the SPARQL query language used in semantic web technologies.

Relational databases are designed to give you ACID (Atomicity, Consistency, Isolation, Durability): essentially you shouldn’t be able to introduce inconsistent changes to the database and it should always give you the same answer to the same query. The NoSQL databases described here have a subtly different core goal. Most of them are designed to work on the web and address CAP (Consistency, Availability, Partition tolerance); indeed several of them offer native REST interfaces over HTTP which means they are very straightforward to integrate into web applications. CAP refers to the ability to return a consistent answer, from any instance of the database, in the face of network (or partition) problems. This assumes that these databases may be stored in multiple locations on the web. A famous theorem contends that you can have any two of Consistency, Availability and Partition tolerance at any one time but not all three together.

NoSQL databases are variously designed to scale horizontally and vertically. Horizontal scaling means replicating the same database in multiple places to provide greater capacity to serve requests even with network connectivity problems. Vertically scaling by “sharding” provides the ability to store more data by fragmenting the data such that some items are stored on one server and some on another.

I’m not a SQL expert by any means but it’s telling that I learnt a huge amount about PostgreSQL in the forty or so pages on the database. I think this is because the focus was not on the SQL query language but rather on the infrastructure that PostgreSQL provides. For example, it discusses triggers, rules, plugins and specialised indexing for text search. I assume this style of coverage applies to the other databases. This book is not about the nitty-gritty of querying particular database types but rather about the different database systems.

The NoSQL databases generally support MapReduce-style queries; this is a scheme most closely associated with Big Data and the Hadoop ecosystem but in this instance it is more a framework for doing queries which may be executed across a cluster of computers.
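
As a toy illustration of the map/reduce pattern itself, independent of any particular database, here is a word count in Python in which the map phase could run on separate machines and the reduce phase merges the partial results.

from collections import Counter
from functools import reduce

documents = [
    "the quick brown fox",
    "the lazy dog",
    "the quick dog",
]

# Map phase: each document is turned into partial counts independently,
# so the work can be spread across many nodes.
partial_counts = [Counter(doc.split()) for doc in documents]

# Reduce phase: partial results are merged into a single answer.
totals = reduce(lambda a, b: a + b, partial_counts, Counter())
print(totals.most_common(3))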

I’m on a bit of a graph theory binge at the moment so Neo4J was the most interesting to me.

As an older data scientist I have a certain fondness for things that have been around for a while, like FORTRAN and SQL databases, I’ve looked with some disdain at these newfangled NoSQL things. To a degree this book has converted me, at least to the point where I look at ScraperWiki projects and think – “It might be better to use a * database for this piece of work”.

This is an excellent book which was pitched at just the right level for my purposes, I’ll be looking for more Pragmatic Programmers books in future.
