Tag: data science

Book review: Graph Databases by Ian Robinson, Jim Webber and Emil Eifrem


This review was first posted at ScraperWiki.

Regular readers will know I am on a bit of a graph binge at the moment. In computer science and mathematics, graphs are collections of nodes joined by edges; they have all sorts of applications, including the study of social networks and route finding. Having covered graph theory and visualisation, I now move on to graph databases. I started on this path with Seven Databases in Seven Weeks, which introduces the Neo4j graph database.

And so to Graph Databases by Ian Robinson, Jim Webber and Emil Eifrem, which, despite its general title, is really a book about Neo4j. This is no big deal since Neo4j is the leading open source graph database.

This is not just random reading: we’re working on an EU project, NewsReader, which makes significant use of RDF – a type of graph-shaped data. We’re also working on a project for a customer which involves traversing a hierarchy of several thousand nodes. This leads to some rather convoluted joining operations when done on a SQL database; a graph database might be better suited to the problem.
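
To make that concrete, here is a minimal sketch of my own (not from the book) of walking such a hierarchy in SQL via a recursive common table expression; the equivalent Cypher is a one-line variable-length path match such as MATCH (root)-[:CHILD*]->(descendant).

import sqlite3

# A toy hierarchy: each node records its parent.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE nodes (id INTEGER PRIMARY KEY, parent_id INTEGER);
    INSERT INTO nodes VALUES (1, NULL), (2, 1), (3, 1), (4, 2), (5, 4);
""")

# Finding all descendants of node 1 takes a recursive common table expression,
# which gets more convoluted as the questions asked of the hierarchy get richer.
rows = conn.execute("""
    WITH RECURSIVE descendants(id) AS (
        SELECT id FROM nodes WHERE id = 1
        UNION ALL
        SELECT n.id FROM nodes n JOIN descendants d ON n.parent_id = d.id
    )
    SELECT id FROM descendants
""").fetchall()

print([r[0] for r in rows])  # [1, 2, 3, 4, 5]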

The book starts with some definitions, identifying the types of graph database (property graph, hypergraph, RDF). Neo4j uses property graphs, where nodes and edges are distinct items and each can hold properties. In contrast, RDF graphs are expressed as triples which encompass both edges and nodes. In hypergraphs multiple edges can be expressed as a single item. A second set of definitions covers the types of graph processing system: graph databases and graph analytical engines. Neo4j is designed to provide good performance for database-like queries, acting as a backing store for a web application rather than as an analytical engine carrying out offline calculations. There’s also an appendix comparing NoSQL databases which feels like it should be part of the introduction.
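
To illustrate the difference in my own terms (a toy sketch, not the book’s notation): in a property graph both nodes and relationships are records carrying properties, whereas in RDF everything, relationships included, is a subject–predicate–object triple.

# Property graph: nodes and edges are distinct records, each with properties.
node_alice = {"id": 1, "labels": ["Person"], "properties": {"name": "Alice"}}
node_bob = {"id": 2, "labels": ["Person"], "properties": {"name": "Bob"}}
knows = {"start": 1, "end": 2, "type": "KNOWS", "properties": {"since": 2010}}

# RDF: the same information expressed as (subject, predicate, object) triples.
triples = [
    ("ex:alice", "foaf:name", "Alice"),
    ("ex:bob", "foaf:name", "Bob"),
    ("ex:alice", "foaf:knows", "ex:bob"),
]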

A key feature of native graph databases, such as Neo4j, is “index-free adjacency”. The authors don’t define this well early in the book, but later on, whilst discussing the internals of Neo4j, it is all made clear: nodes and edges are stored as fixed-length records with references to the nodes to which they are connected. This means it’s very fast to visit a node and then iterate over all of its attached neighbours. The alternative, index-based lookups, may involve scanning a whole table to find all links to a particular node. It is in traversing networks that Neo4j shines in performance terms compared to SQL.
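
My mental model of index-free adjacency, as a rough Python sketch (mine, not the book’s; the real store uses fixed-length records and pointers rather than dictionaries):

# Each node record carries direct references to its relationships, so visiting
# a node's neighbours is pointer-chasing, independent of total graph size.
nodes = {
    "alice": {"edges": ["e1", "e2"]},
    "bob": {"edges": ["e1"]},
    "carol": {"edges": ["e2"]},
}
edges = {
    "e1": ("alice", "bob"),
    "e2": ("alice", "carol"),
}

def neighbours(node_id):
    # O(number of neighbours); no index lookup or table scan required.
    for edge_id in nodes[node_id]["edges"]:
        a, b = edges[edge_id]
        yield b if a == node_id else a

print(list(neighbours("alice")))  # ['bob', 'carol']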

As Robinson et al. emphasise in motivating the use of graph databases: other types of NoSQL database, and SQL databases, are not built fundamentally around the idea of relationships between data, except in quite a constrained sense. For SQL databases there is an overhead to carrying out join queries, which are SQL’s way of introducing relationships. As I hinted earlier, storing hierarchies in SQL databases leads to some nasty-looking, slow queries. In practice SQL databases are denormalised for performance reasons to address these cases. Graph databases, on the other hand, are all about relationships.

Schemas are an important concept in SQL databases; they are used to enforce constraints on a database, e.g. “this thing must be a string” or “this thing must be in this set”. Neo4j describes itself as “schema optional”: the schema functionality seems relatively recently introduced and is not discussed in this book, although it is alluded to. As someone with a small background in SQL, the absence of schemas in NoSQL databases is always the cause of some anxiety and distress.

A chapter on data modelling and the Cypher query language feels like the heart of the book. People say that Neo4j is “whiteboard friendly”, in that if you can draw a relationship structure on a whiteboard then you can implement it in Neo4j without going through the rigmarole of making a normalised schema that doesn’t look like what you’ve drawn. This seems fair up to a point: your whiteboard scribbles do tend to be guided to a degree by what your target system is, and you can still go wrong in translating the whiteboard sketch into a data model, even in Neo4j.

I imagine it is no accident that more recent query languages like Cypher and SPARQL look a bit like SQL. That said, Cypher relies on ASCII art to MATCH nodes wrapped in round brackets and edges (relationships) wrapped in square brackets, with arrows --> indicating the direction of relationships:

MATCH (node1)-[rel:TYPE]->(node2)
RETURN rel.property

which is pretty un-SQL-like!

Graph Databases goes on to describe implementing an application using Neo4j. The example code in the book is in Java, but there appears, in py2neo, to be a relatively mature Python client. The situation here seems to be in flux, since searching the web brings up references to an older python-embedded library which is now deprecated. The book pre-dates Neo4j 2.0, which introduced some significant changes.
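
Since the py2neo API has moved around between versions, here is a hedged sketch of my own (not from the book) which runs the earlier Cypher query over Neo4j 2.0’s transactional HTTP endpoint instead, assuming a local, unauthenticated server:

import json
import urllib.request

# Neo4j 2.x transactional HTTP endpoint (assumes a local, unauthenticated server).
url = "http://localhost:7474/db/data/transaction/commit"
payload = {"statements": [
    {"statement": "MATCH (node1)-[rel:TYPE]->(node2) RETURN rel.property"}
]}
request = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "Accept": "application/json"},
)
with urllib.request.urlopen(request) as response:
    result = json.load(response)

# Each result row comes back as a list of column values.
for datum in result["results"][0]["data"]:
    print(datum["row"])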

The book finishes with some examples from the real world and some demonstrations of popular graph theory analysis. I liked the real-world examples of a social recommendation system, access control and parcel routing. The coverage of graph theory analysis was rather brief, and didn’t explicitly use Cypher, which would have set the presentation apart from what you find in the usual graph theory textbooks.
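
For flavour, a social recommendation of the sort the book describes might look like the hedged sketch below – my own query, not the book’s, assuming Person nodes joined by FRIEND relationships; it could be run through the transactional endpoint exactly as in the earlier sketch.

# Friend-of-friend recommendation: suggest people my friends know that I don't.
query = """
MATCH (me:Person {name: 'Alice'})-[:FRIEND]->()-[:FRIEND]->(candidate)
WHERE NOT (me)-[:FRIEND]->(candidate) AND candidate <> me
RETURN candidate.name AS name, count(*) AS mutual_friends
ORDER BY mutual_friends DESC
"""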

Overall I have mixed feelings about this book: the introduction and overview sections are good, as is the part on Neo4j internals. It’s a rather slim volume, feels a bit disjointed and is not up to date with Neo4j 2.0, which has significant new functionality. Perhaps this is not the arena for a dead-tree publication – the Neo4j website has a comprehensive set of reference and tutorial material, and if you are happy with a purely electronic version then you can get Graph Databases for free (here).

To infinity and beyond! Or how I replaced my hard disk with an SSD

Nearly two years ago I bought a Sony Vaio T13 laptop as my combined work and home main computer. It’s a nice piece of kit and has served me well. When I bought it I commented that I’d like to have had an SSD rather than the conventional 500GB “spinning rust” hybrid drive with which it came. At the time, specifying a 512GB SSD from the outset was eye-wateringly expensive.

Nearly two years later I finally got around to making the upgrade, and it was remarkably straightforward. Largely through laziness I went for the 512GB SSD I’d identified nearly two years ago – the Samsung 840 Pro – whose price had dropped from something like £450 to £230. There are cheaper options, down to about £150, but I’m already saving money if the Pro started at £450* ;-)

The drive itself is rather insubstantial, turning up in a padded envelope that just doesn’t feel heavy enough. It comes with a CD containing data migration software and some wizard diagnostic software. The migration software clones your current hard drive to the new SSD; you need a SATA-to-USB adaptor, like this one, to do this. Cloning my drive took about 5 hours, but I needed to decrypt it first, which was a 24-hour or so job – my drive was encrypted with Bitlocker.

With the contents of the original drive cloned onto the SSD, all that is required is to open up the laptop and swap the drives over. This turns out to be really easy on the Vaio: unscrew the battery, unscrew the drive compartment cover, unscrew the drive cage from the laptop, remove the old hard drive, move the drive cage from the old drive to the new one, and then reverse the steps to install the new drive.

A set of dinky screwdrivers is handy, and it would have helped if I’d realised the drive cage was screwed to the laptop frame before I started prising at it with the big screwdriver – but no harm done.

I actually found replacing the hard drive on my laptop easier than replacing or adding a drive to a desktop. Whenever I’ve added a hard drive to a desktop there has been cursing and skinned knuckles in removing/adding the power connector, and unseemly cowboying of the drive into some ill-fitting drive cage using leftover grub screws. Compared to this, working on the Vaio was a joy. You might want to test the lie of the land on your own model of laptop in terms of accessibility to the drive. My suspicion is that it will be generally straightforward, since laptops are often sold with a choice of drive.

The moment of truth is rebooting after installation – this Just Worked™, which was a relief. First hints of improved performance came in re-encrypting the hard drive: decrypting the conventional drive, the peak IO transfer rate was about 30MB/s, whilst re-encrypting the SSD it peaked at around 150MB/s. Opening up Microsoft Office applications is much snappier, as is opening Sublime Text. I should probably have a go at uploading a multi-gigabyte CSV file to MySQL, which I know is heavily IO bound, but I can’t be bothered. All in all my laptop just feels rather more responsive.

I played a bit with the supplied diagnostic wizard software but didn’t think much of it, so promptly uninstalled it.

Overall: much more straightforward and less scary than I anticipated – I recommend this approach to anyone with a laptop to refresh and a modicum of courage.

*This logic brought to you by Mrs SomeBeans and her Yamaha Thundercat purchase!

NewsReader – the developers’ story


This post was first published at ScraperWiki.

ScraperWiki has been a partner in NewsReader, an EU Framework 7 research project, for the last couple of years. The aim of NewsReader is to give computers the power to “understand” the news: to extract from a myriad of news articles the underlying events which gave rise to those articles – the who, the where, the why and the what of those events. The project comprises academic researchers specialising in computational linguistics (VUA in Amsterdam, EHU in the Basque Country and FBK in Trento), Lexis Nexis – a major news aggregator – and a couple of small technology companies: ourselves at ScraperWiki and SynerScope – a Dutch startup specialising in the visualisation of complex networks.

Our role at ScraperWiki is in providing mechanisms to enable developers to exploit the NewsReader technology, and to feed news into the system. As part of this work we have developed a simple REST API which gives access to the KnowledgeStore, the system which underpins NewsReader. The native query language of the KnowledgeStore is SPARQL – the query language of the semantic web. The Simple API provides a set of predefined queries which are easier for end users to work with than raw SPARQL, and help us as service managers by providing a predictable set of optimised queries. If you want to know more technical detail then we’ve written a paper about it (here).
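
By way of illustration, calling a predefined query looks something like the sketch below. The base URL, query name and parameters here are hypothetical stand-ins (the real query names are documented at the API root), so treat this as a shape rather than a recipe:

import requests

# Hypothetical illustration of calling a predefined Simple API query;
# the base URL, query name and parameter names are stand-ins, not the real API.
BASE_URL = "https://example.com/simple-api"

response = requests.get(
    BASE_URL + "/actors_of_a_type",  # a predefined query (hypothetical name)
    params={"filter": "football", "output": "json"},
)
response.raise_for_status()
for item in response.json():
    print(item)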

The Simple API has seen live action at a Hack Day on World Cup news which we held in London in the summer. Attendees were able to develop a range of applications which probed violence, money and corruption in the realm of the World Cup. I blogged about our previous Hack Day here and here. The Simple API and the Hack Day helped us shake out some bugs and add features which will make it even better next time.

“Next time” is another Hack Day, to be held in Amsterdam on 21st January 2015 and London on 30th January 2015. This time we have processed 6,000,000 articles relating to the car industry over the period 2005-2014. The motor industry is a trillion-dollar-a-year business, so we can anticipate finding lots of valuable information in this hoard.

From our previous experience the three things that NewsReader excels at are:

  1. Finding networks of interactions, and identifying important players. For the World Cup Hack Day we at ScraperWiki were handicapped slightly by having no interest in football! But the NewsReader technology enabled us to quickly identify that “Sepp Blatter”, “Jack Warner” and “Mohammed bin Hammam” were important in world football. This is illustrated in a slightly cryptic visualisation made using Gephi;
  2. Finding events of a particular type. The NewsReader technology carries out semantic role labeling: taking sentences and identifying what type of event is described in that sentence and what roles the participants took. This information is then aggregated and exposed using semantic web technology. At the World Cup Hack Day participants used this functionality to identify events involving violence, bribery, gambling, and other financial transactions;
  3. Establishing timelines. In the World Cup data we could track the events involving “Mohammed bin Hammam” through time, and the type of events he was involved in. This enabled us to quickly navigate to pertinent news articles.

You can see fragments of code used to extract these data using the Simple API in these GitHub Gists (here and here), and dynamic visualisations illustrating these three features here and here.

The Simple API is up and running already; you can find it (here). It is self-documenting: simply visit the root URL and you’ll see query examples with optional and compulsory parameters. Be aware, though: the Simple API is under active development, and the underlying data in the KnowledgeStore is being optimised for the Hack Days, so it may not be available when you visit.

If you want to join our automotive Hack Day then you can sign up for the Amsterdam event (here) and the London event (here).

Book review: Linked by Albert-László Barabási

This review was first posted at ScraperWiki.
I am on a bit of a graph theory binge; it started with an attempt to learn about Gephi, the graph visualisation software, which developed into reading a proper grown-up book on graph theory. I then learnt a little more about the practicalities on reading Seven Databases in Seven Weeks, which included a section on Neo4j – a graph database. Now I move on to Linked by Albert-László Barabási, a popular account of the rise of the analysis of complex networks in the late nineties. A short subtitle used on earlier editions was “The New Science of Networks”. The rather lengthy subtitle on this edition is “How Everything Is Connected to Everything Else and What It Means for Business, Science, and Everyday Life”.

In mathematical terms a graph is an abstract collection of nodes linked by edges. My social network is a graph comprised of people, the nodes, and their interactions, such as friendships, which are the edges. The internet is a graph, with routers as the nodes and the links between them as the edges. “Network” is a less formal term often used synonymously with graph; “complex” is more a matter of taste, but it implies large and with a structure which cannot be trivially described – a network where each node has four edges, say, is not complex.

The models used for the complex networks discussed in this book are the descendants of the random networks first constructed by Erdős and Rényi. They imagined a simple scheme whereby nodes in a network were randomly connected with some fixed probability. This generates a particular type of random network which does not replicate real-world networks such as social networks or the internet. The innovations introduced by Barabási and others are in the measurement of real-world networks and new methods of construction which produce small-world and scale-free network models. Small-world networks are characterised by clusters of tightly interconnected nodes with a few links between those clusters; they describe social networks. Scale-free networks contain nodes with any number of connections, but nodes with larger numbers of connections are less common than those with a small number. For example, on the web there are many web pages (nodes) with a few links (edges), but there exist some web pages with thousands and thousands of links, and all values in between.
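
As a hedged illustration (my own, using the networkx Python library rather than anything from the book), the two constructions are each a one-liner, and comparing maximum degrees shows the difference:

import networkx as nx

# Erdős–Rényi random graph: 1000 nodes, each possible edge present with p = 0.01.
# Node degrees cluster tightly around the mean; no highly connected hubs emerge.
random_graph = nx.erdos_renyi_graph(n=1000, p=0.01)

# Barabási–Albert preferential attachment: each new node attaches to 5 existing
# nodes, preferring well-connected ones. Degrees follow a power law, with hubs.
scale_free_graph = nx.barabasi_albert_graph(n=1000, m=5)

print(max(dict(random_graph.degree()).values()))      # modest maximum degree
print(max(dict(scale_free_graph.degree()).values()))  # a hub, much larger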

I’ve long been aware of Barabási’s work, dating back to my time as an academic, when I worked in the area of soft condensed matter. The study of complex networks was becoming a thing at the time, and of all the areas of physics soft condensed matter was closest to it. Barabási’s work was one of the sparks that set the area going. The connection with physics is around so-called power laws, which are found in a wide range of physical systems. The networks that Barabási is so interested in show power-law behaviour in the number of connections a node has. This has implications for a wide range of properties of the system, such as robustness to the removal of nodes, transport properties and so forth. The book starts with some historical vignettes on the origins of graph theory, with Euler and the bridges of Königsberg problem. It then goes on to discuss various complex networks, with some coverage of the origins of their study and the work that Barabási has done in the area. As such it is a pretty personal review. Barabási also recounts some of the history of “six degrees of separation”, the idea that everyone is linked to everyone else by only six links. This idea had its traceable origins back in the early years of the 20th century in Budapest.

Graph theory has been around for a long while, and the study of random networks for 50 years or so. Why the sudden surge in interest? It boils down to a couple of factors. The first is the internet, which provides a complex network of physical connections on which a further complex network of connections sits in the form of the web. The graph structure of this infrastructure is relatively easy to explore using automatic tools; you can build a map of millions of nodes with relative ease, compared to networks in the “real” world. Furthermore, this infrastructure and the rise of automated experiments have improved our ability to explore and disseminate information on other networks: the network of chemical interactions in a cell, the network of actors in movies, our social interactions, the spread of disease and so forth. In the past, getting such detailed information on large networks was tiresome, and the distribution mechanisms for such data slow and inconvenient.

For a book written a few short years ago, Linked can feel strangely dated. It discusses Apple’s failure in the handheld computing market with the Newton palmtop device, and the success of Palm with their subsequent range. Names of long-forgotten internet companies float by, although even at the time of writing Google was beginning its dominance.

If you are new to graph theory and want an unchallenging introduction then Linked is a good place to start. It’s readable and has a whole load of interesting examples of scale-free networks in the wild. It is not the whole of graph theory, but it is where interesting new things are happening.

Book review: Seven Databases in Seven Weeks by Eric Redmond and Jim R. Wilson


This review was first published at ScraperWiki.

I came to databases a little late in life; as a physical scientist I didn’t have much call for them. Then a few years ago I discovered the wonders of relational databases and the power of SQL. The ScraperWiki platform strongly encourages you to save data to SQLite databases to integrate with its tools.

There is life beyond SQL databases, much of which has evolved in the last few years. I wanted to learn more, and a plea on Twitter quickly brought me a recommendation for Seven Databases in Seven Weeks by Eric Redmond and Jim R. Wilson.

The book covers the key classes of database starting with relational databases in the form of PostgreSQL. It then goes on to look at six further databases in the so-called NoSQL family – all relatively new compared to venerable relational databases. The six other databases fall into several classes: Riak and Redis are key-value stores, CouchDB and MongoDB are document databases, HBase is a columnar database and Neo4J is a graph database.

Relational databases are characterised by storage schemas involving multiple interlinked tables containing rows and columns; this layout is designed to minimise the repetition of data and to provide maximum query-ability. Key-value stores only store a key and a value, in the manner of a dictionary, but the “value” may be of a complex type. A value can be returned very fast given a key – this is the core strength of the key-value stores. The document stores MongoDB and CouchDB store JSON “documents” rather than rows. These documents can store information in nested hierarchies which don’t all need to have the same structure; this allows maximum flexibility in the type of data stored, but at the cost of ease of querying.
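
A minimal sketch of the key-value idea, using the redis-py client and assuming a local Redis server (my own example, not the book’s):

import json
import redis

# Connect to a local Redis instance (assumed running on the default port).
r = redis.Redis(host="localhost", port=6379)

# The value can be a complex type if you serialise it yourself, e.g. as JSON.
r.set("user:42", json.dumps({"name": "Alice", "follows": ["bob", "carol"]}))

# Retrieval by key is the fast path – the core strength of a key-value store.
user = json.loads(r.get("user:42"))
print(user["name"])  # Alice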

HBase fits into the Hadoop ecosystem; the language used to describe it looks superficially like that used to describe tables in a relational database, but this is a bit misleading. HBase is designed to work with massive quantities of data but not necessarily to give the full querying flexibility of SQL. Neo4j is designed to store graph data – collections of nodes and edges – and comes with a query language particularly suited to querying (or walking) data so arranged. This seems very similar to triplestores and SPARQL, used in semantic web technologies.

Relational databases are designed to give you ACID (Atomicity, Consistency, Isolation, Durability): essentially, you shouldn’t be able to introduce inconsistent changes to the database, and it should always give you the same answer to the same query. The NoSQL databases described here have a subtly different core goal. Most of them are designed to work on the web and address CAP (Consistency, Availability, Partition tolerance); indeed several of them offer native REST interfaces over HTTP, which means they are very straightforward to integrate into web applications. CAP refers to the ability to return a consistent answer, from any instance of the database, in the face of network (or partition) problems; this assumes that these databases may be stored in multiple locations on the web. A famous theorem contends that you can have any two of Consistency, Availability and Partition tolerance at any one time, but not all three together.

NoSQL databases are variously designed to scale horizontally, by replication and by sharding. Replication copies the same database to multiple places, providing greater capacity to serve requests even in the face of network connectivity problems. Sharding provides the ability to store more data by fragmenting the data set, such that some items are stored on one server and some on another.
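
A toy sketch of the sharding idea (mine, not tied to any of the seven databases): route each key to one of several servers by hashing it, so the data set is fragmented across machines.

import hashlib

# Hypothetical server names; a real system also handles rebalancing and failure.
SERVERS = ["db0.example.com", "db1.example.com", "db2.example.com"]

def shard_for(key: str) -> str:
    # Hash the key and pick a server, so each key always lands in one place.
    digest = int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)
    return SERVERS[digest % len(SERVERS)]

print(shard_for("user:42"))  # always the same server for a given key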

I’m not a SQL expert by any means but it’s telling that I learnt a huge amount about PostgreSQL in the forty or so pages on the database. I think this is because the focus was not on the SQL query language but rather on the infrastructure that PostgreSQL provides. For example, it discusses triggers, rules, plugins and specialised indexing for text search. I assume this style of coverage applies to the other databases. This book is not about the nitty-gritty of querying particular database types but rather about the different database systems.

The NoSQL databases generally support MapReduce-style queries. This is a scheme most closely associated with Big Data and the Hadoop ecosystem, but in this instance it is more a framework for doing queries which may be executed across a cluster of computers.
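
As a hedged toy example (mine, in plain Python rather than any particular database’s dialect), a MapReduce count of documents by type looks like this:

from collections import Counter

documents = [
    {"type": "article", "title": "A"},
    {"type": "comment", "title": "B"},
    {"type": "article", "title": "C"},
]

# Map: emit a (key, value) pair per document. In a document store this function
# would run next to the data, on each node of the cluster.
mapped = [(doc["type"], 1) for doc in documents]

# Reduce: combine the values that share a key.
counts = Counter()
for key, value in mapped:
    counts[key] += value

print(dict(counts))  # {'article': 2, 'comment': 1}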

I’m on a bit of a graph theory binge at the moment so Neo4J was the most interesting to me.

As an older data scientist I have a certain fondness for things that have been around for a while, like FORTRAN and SQL databases, and I’ve looked with some disdain at these newfangled NoSQL things. To a degree this book has converted me, at least to the point where I look at ScraperWiki projects and think – “It might be better to use a * database for this piece of work”.

This is an excellent book which was pitched at just the right level for my purposes; I’ll be looking for more Pragmatic Programmers books in future.