Tag: scraperwiki

Book review: Linked by Albert-László Barabási

This review was first posted at ScraperWiki.
I am on a bit of a graph theory binge. It started with an attempt to learn about Gephi, the graph visualisation software, and developed into reading a proper grown-up book on graph theory. I then learnt a little more about the practicalities from Seven Databases in Seven Weeks, which included a section on Neo4J – a graph database. Now I move on to Linked by Albert-László Barabási, a popular account of the rise of the analysis of complex networks in the late nineties. A short subtitle used on earlier editions was “The New Science of Networks”. The rather lengthy subtitle on this edition is “How Everything Is Connected to Everything Else and What It Means for Business, Science, and Everyday Life”.

In mathematical terms a graph is an abstract collection of nodes linked by edges. My social network is a graph comprising people, the nodes, and their interactions, such as friendships, which are the edges. The internet is a graph whose nodes are routers and whose edges are the links between them. “Network” is a less formal term often used synonymously with graph; “complex” is more a matter of taste, but it implies a network that is large and has a structure which cannot be trivially described – a network in which every node has exactly four edges, for example, is not complex.
The models used for the complex networks discussed in this book are the descendants of the random networks first constructed by Erdős and Rényi. They imagined a simple scheme whereby nodes in a network were randomly connected with some fixed probability. This generates a particular type of random network which does not replicate real-world networks such as social networks or the internet. The innovations introduced by Barabási and others are in the measurement of real-world networks and in new methods of construction which produce small-world and scale-free network models. Small-world networks are characterised by clusters of tightly interconnected nodes with a few links between those clusters; they describe social networks. Scale-free networks contain nodes with any number of connections, but nodes with larger numbers of connections are less common than those with a small number. For example, on the web there are many web pages (nodes) with a few links (edges), but there exist some web pages with thousands and thousands of links, and all values in between.
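As a rough illustration of the difference (a sketch of my own, not from the book), the networkx library can generate both kinds of network so their degree distributions can be compared:

    # Compare an Erdos-Renyi random graph with a Barabasi-Albert scale-free
    # graph (illustrative sketch using the networkx library).
    import networkx as nx

    n = 10_000
    er = nx.erdos_renyi_graph(n, p=4 / n)    # each possible edge present with a fixed probability
    ba = nx.barabasi_albert_graph(n, m=2)    # new nodes attach preferentially to well-connected nodes

    for name, g in [("Erdos-Renyi", er), ("Barabasi-Albert", ba)]:
        degrees = [d for _, d in g.degree()]
        print(f"{name}: mean degree {sum(degrees) / n:.1f}, max degree {max(degrees)}")

    # The random graph's degrees cluster around the mean; the scale-free
    # graph has a handful of hubs with far more connections.
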
I’ve long been aware of Barabási’s work, dating back to my time as an academic when I worked in the area of soft condensed matter. The study of complex networks was becoming a thing at the time, and of all the areas of physics soft condensed matter was closest to it. Barabási’s work was one of the sparks that set the area going. The connection with physics is around so-called power laws, which are found in a wide range of physical systems. The networks that Barabási is so interested in show power-law behaviour in the number of connections a node has. This has implications for a wide range of properties of the system, such as robustness to the removal of nodes, transport properties and so forth. The book starts with some historical vignettes on the origins of graph theory, with Euler and the bridges of Königsberg problem. It then goes on to discuss various complex networks, with some coverage of the origins of their study and the work that Barabási has done in the area. As such it is a pretty personal review. Barabási also recounts some of the history of “six degrees of separation”, the idea that everyone is linked to everyone else by only six links. This idea has its traceable origins in the early years of the 20th century in Budapest.
Graph theory has been around for a long while, and the study of random networks for 50 years or so. Why the sudden surge in interest? It boils down to a couple of factors. The first is the internet, which provides a complex network of physical connections on which a further complex network of connections sits in the form of the web. The graph structure of this infrastructure is relatively easy to explore using automatic tools; you can build a map of millions of nodes with relative ease compared to networks in the “real” world. Furthermore, this complex network infrastructure and the rise of automated experiments have improved our ability to explore and disseminate information on physical networks: the network of chemical interactions in a cell, the network of actors in movies, our social interactions, the spread of disease and so forth. In the past, getting such detailed information on large networks was tiresome, and the distribution mechanisms for such data were slow and inconvenient.
For a book written only a few years ago, Linked can feel strangely dated. It discusses Apple’s failure in the handheld computing market with the Newton palmtop device, and the success of Palm with their subsequent range. Names of long-forgotten internet companies float by, although even at the time of writing Google was beginning its dominance.
If you are new to graph theory and want an unchallenging introduction then Linked is a good place to start. It’s readable and has a whole load of interesting examples of scale-free networks in the wild. Whilst not the whole of graph theory, this is where interesting new things are happening.

Book review: Seven Databases in Seven Weeks by Eric Redmond and Jim R. Wilson


This review was first published at ScraperWiki.

I came to databases a little late in life; as a physical scientist I didn’t have much call for them. Then a few years ago I discovered the wonders of relational databases and the power of SQL. The ScraperWiki platform strongly encourages you to save data to SQLite databases to integrate with its tools.

There is life beyond SQL databases, much of it evolved in the last few years. I wanted to learn more, and a plea on Twitter quickly brought me a recommendation for Seven Databases in Seven Weeks by Eric Redmond and Jim R. Wilson.

The book covers the key classes of database starting with relational databases in the form of PostgreSQL. It then goes on to look at six further databases in the so-called NoSQL family – all relatively new compared to venerable relational databases. The six other databases fall into several classes: Riak and Redis are key-value stores, CouchDB and MongoDB are document databases, HBase is a columnar database and Neo4J is a graph database.

Relational databases are characterised by storage schemas involving multiple interlinked tables containing rows and columns; this layout is designed to minimise the repetition of data and to provide maximum query-ability. Key-value stores only store a key and a value in the manner of a dictionary, but the “value” may be of a complex type. A value can be returned very fast given a key – this is the core strength of key-value stores. The document stores MongoDB and CouchDB store JSON “documents” rather than rows. These documents can store information in nested hierarchies which don’t all need to have the same structure; this allows maximum flexibility in the type of data to be stored, but at the cost of ease of query.
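
To make the difference in shape concrete, here is a small illustrative sketch of my own in plain Python – not any particular database’s API. A key-value pair is an opaque value looked up by key, while a document is a nested structure that can be queried by field:

    import json

    # Key-value style: an opaque value stored against a key.
    kv_store = {}
    kv_store["user:42"] = json.dumps({"name": "Ada", "visits": 7})
    print(json.loads(kv_store["user:42"])["name"])

    # Document style: a nested JSON-like document; other documents in the
    # same collection need not share this structure.
    document = {
        "name": "Ada",
        "addresses": [{"city": "London", "postcode": "N1"}],
        "tags": ["scientist", "fortran"],
    }
    print(document["addresses"][0]["city"])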

HBase fits into the Hadoop ecosystem. The language used to describe it looks superficially like that used to describe tables in a relational database, but this is a bit misleading: HBase is designed to work with massive quantities of data, but not necessarily to give the full querying flexibility of SQL. Neo4J is designed to store graph data – collections of nodes and edges – and comes with a query language particularly suited to querying (or walking) data so arranged. This seems very similar to triplestores and SPARQL, used in semantic web technologies.

Relational databases are designed to give you ACID (Atomicity, Consistency, Isolation, Durability): essentially, you shouldn’t be able to introduce inconsistent changes to the database, and it should always give you the same answer to the same query. The NoSQL databases described here have a subtly different core goal. Most of them are designed to work on the web and to address the trade-offs described by the CAP theorem (Consistency, Availability, Partition tolerance); indeed several of them offer native REST interfaces over HTTP, which means they are very straightforward to integrate into web applications. CAP refers to the ability to return a consistent answer, from any instance of the database, in the face of network (or partition) problems, on the assumption that the database may be stored in multiple locations on the web. The CAP theorem contends that you can have any two of Consistency, Availability and Partition tolerance at any one time, but not all three together.

NoSQL databases are variously designed to scale out by replication and by sharding. Replication means keeping copies of the same database in multiple places, providing greater capacity to serve requests even in the face of network connectivity problems. Sharding provides the ability to store more data by fragmenting it so that some items are stored on one server and some on another.

I’m not a SQL expert by any means but it’s telling that I learnt a huge amount about PostgreSQL in the forty or so pages on the database. I think this is because the focus was not on the SQL query language but rather on the infrastructure that PostgreSQL provides. For example, it discusses triggers, rules, plugins and specialised indexing for text search. I assume this style of coverage applies to the other databases. This book is not about the nitty-gritty of querying particular database types but rather about the different database systems.

The NoSQL databases generally support MapReduce-style queries. This is a scheme most closely associated with Big Data and the Hadoop ecosystem, but in this instance it is more a framework for doing queries which may be executed across a cluster of computers.
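
The core idea fits in a few lines of Python. This is a toy sketch of my own rather than any particular database’s query interface: partial results are computed for each chunk of data (the map) and then merged (the reduce), and in a real system each chunk could be handled by a different machine:

    # Toy MapReduce-style word count: compute partial counts per chunk of
    # text (map), then merge them (reduce). Illustrative only.
    from collections import Counter
    from functools import reduce

    chunks = ["to be or not to be", "that is the question"]

    def map_chunk(chunk):
        return Counter(chunk.split())      # partial counts for one chunk

    def merge_counts(a, b):
        return a + b                       # combine two partial results

    totals = reduce(merge_counts, map(map_chunk, chunks), Counter())
    print(totals.most_common(3))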

I’m on a bit of a graph theory binge at the moment so Neo4J was the most interesting to me.

As an older data scientist I have a certain fondness for things that have been around for a while, like FORTRAN and SQL databases, and I’ve looked with some disdain at these newfangled NoSQL things. To a degree this book has converted me, at least to the point where I look at ScraperWiki projects and think – “It might be better to use a * database for this piece of work”.

This is an excellent book, pitched at just the right level for my purposes; I’ll be looking for more Pragmatic Programmers books in future.

Exploring the ONS

This post was first published at ScraperWiki.

The Office for National Statistics (ONS) is the United Kingdom’s statistical body, charged by the government with the task of collecting and publishing statistics related to the economy, population and society of England and Wales at national, regional and local levels. The data is typically published in the form of Excel spreadsheets.

The ONS is working on opening up their data, and making it more accessible to users. We’ve been doing a bit of work to help with that. This is typical of a number of jobs we have done. A customer has a website containing content which they want to move/process/republish elsewhere. The current website might have been built by aggregation over a number of years, and the underlying structure of the Content Management System may not be available to them. In these circumstances making a survey of the pre-existing content is an obvious first step.

The index for the ONS reference tables and datasets can be found here. Each dataset has a title, a release date, and the type of dataset. There is also a URL to the dataset; inside the title field there is an indication of the size of the file. We wrote a simple scraper to collect these pieces of information.

First up, we’ll look at the topics of the data released. There are a couple of routes into discovering these. One is to read the titles; this is OK as an approach, but the titles are quite wordy and sometimes it isn’t clear what they refer to. An alternative, in this case, is to look at the URLs to the documents. They look something like this:

http://www.ons.gov.uk/ons/rel/lms/labour-market-statistics/november-2013/table-unem03.xls

This can be quite revealing, since even if the website is not explicit about its structure, the URLs can reveal the structure the builder used. We process the URLs by splitting them at the forward slashes (a minimal code sketch of this follows the list below). The first part http://www.ons.gov.uk/ons/rel is common to all the URLs. Subsequent parts we can use to define a hierarchy. In this case we will focus on the fourth part of the hierarchy – “labour-market-statistics” in this instance – which gives us a human-readable description of a topic. There are approximately 400 topics as defined by this metric, as opposed to 90 or so defined by the third level of the hierarchy. Using the fourth level of the hierarchy, the key areas of the website by number of documents are:

  • Labour market statistics
  • National population projections
  • Family spending
  • Subnational labour market statistics
  • Census
  • Annual survey of hours and earnings.
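
The URL processing itself is simple. This is an illustrative sketch rather than the actual scraper code (the field choices mirror the description above):

    # Split an ONS document URL on "/" and pick out the fourth level of the
    # path hierarchy as the topic (illustrative sketch, not the real scraper).
    from urllib.parse import urlparse

    url = "http://www.ons.gov.uk/ons/rel/lms/labour-market-statistics/november-2013/table-unem03.xls"

    parts = urlparse(url).path.strip("/").split("/")
    # parts = ['ons', 'rel', 'lms', 'labour-market-statistics', 'november-2013', 'table-unem03.xls']
    topic = parts[3]                          # fourth level: 'labour-market-statistics'
    extension = parts[-1].rsplit(".", 1)[-1]  # crude file-type guess: 'xls'
    print(topic, extension)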

We can visualise the topic breakdown as a treemap; here I am simply showing the top 20 areas by number of documents:

[Treemap: the top 20 ONS topic areas by number of documents]

These 20 topics cover approximately two thirds of the total number of documents.

We can identify file types using the file extension in the URL. This approach needs to be used a little cautiously, since sometimes the extension doesn’t match the file type. Most of the files are Excel spreadsheets, although there are a few CSV and zip files, the zip files containing Excel spreadsheets. CSV appears to have been used for some of the older datasets. Most of the files are pretty small, less than 290kb, but there are a few much larger ones.

Finally, we can look at the release dates for the datasets. There are datasets from as far back as 1988; in fact the dataset released in 1988 refers to data from 1984. Some data has been released regularly from about 2001, but from 2011 a wider range of data has been released on a regular basis. We can see the monthly pattern of data releases in this timeline for 2014, which is restricted to the top 20 topics identified above:

[Timeline: 2014 data releases for the top 20 topics]

This shows the big releases of labour market statistics, both national and regional, on the third Wednesday of each month. Other monthly releases include retail sales and producer prices data. And every week provisional figures on the registration of deaths in England and Wales are reported.

You can explore these data yourself using the Tableau workbook here.

The actual content of these spreadsheets is another story.

This survey approach to a website is handy for a range of applications, and the techniques used are quite general. We’ve used similar approaches to understand government and newspaper websites.

Book review: Graph Theory and Complex Networks by Maarten van Steen


This review was first published at ScraperWiki.

My last read, on the Gephi graph visualisation package, was a little disappointing but gave me an enthusiasm for graph theory. So I picked up one of the books that it recommended: Graph Theory and Complex Networks: An Introduction by Maarten van Steen to learn more. In this context a graph is a collection of vertices connected by edges; the edges may be directed or undirected. The road network is an example of a graph: the junctions between roads are vertices, the edges are roads, and a one-way street is a directed edge – two-way streets are undirected.

Why study graph theory?

Graph theory underpins a bunch of things like route finding, timetabling, map colouring, communications routing, sol-gel transitions, ecologies, parsing mathematical expressions and so forth. It’s been a staple of Computer Science undergraduate courses for a while, and more recently there’s been something of a resurgence in the field, with systems on the web providing huge quantities of graph-shaped data, both in terms of the underlying hardware networks and the activities of people – the social networks.

Sometimes the links between graph theory and an application are not so obvious. For example, project planning can be understood in terms of graph theory. A task can depend on another task – the tasks being two vertices in a graph. The edge between such vertices is directed, from one to the other, indicating dependency. To give a trivial example: you need a chicken to lay an egg. As a whole, a graph of tasks cannot contain loops (or cycles), since this would imply that a task depended on a task that could only be completed after it itself had been completed. To return to my example: if you need an egg in order to get a chicken to lay an egg then you’re in trouble! Generally, networks of tasks should be directed acyclic graphs (DAGs), i.e. they should not contain cycles.
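
The chicken-and-egg check is easy to demonstrate with the networkx library; this is a small sketch of my own, not an example from the book:

    # Build a directed graph of task dependencies and check it is a DAG.
    # An edge means "must happen before" (illustrative sketch).
    import networkx as nx

    tasks = nx.DiGraph()
    tasks.add_edge("get chicken", "lay egg")    # you need a chicken to lay an egg

    print(nx.is_directed_acyclic_graph(tasks))  # True: a consistent plan
    print(list(nx.topological_sort(tasks)))     # an order in which to do the tasks

    tasks.add_edge("lay egg", "get chicken")    # ...but if the egg is needed to get the chicken
    print(nx.is_directed_acyclic_graph(tasks))  # False: no consistent ordering exists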

The book’s target audience is 1st or 2nd year undergraduates with a moderate background in mathematics; it was developed for Computer Science undergraduates. The style is quite mathematical but fairly friendly. The author’s intention is to introduce the undergraduate to mathematical formalism. I found this useful, since mathematical symbols are difficult to search for and shorthands such as operator overloading even more so. This said, it is still an undergraduate text rather than a popular account: don’t expect an easy read or pretty pictures, or even pretty illustrations.

The book divides into three chunks. The first provides the basic language for describing graphs, both words and equations. The second part covers theorems arising from some of the basic definitions, including the ideas of “walks” – traversals of a graph which take in all vertices – and “tours”, which take in all edges. This includes long-standing problems such as route finding, addressed by Dijkstra’s algorithm, and the travelling salesman problem. Also included in this section are “trees” – networks with no cycles – where a cycle is a closed walk which visits vertices just once.
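
Route finding of the Dijkstra variety is another thing the networkx library will do out of the box; the following small example is mine rather than the book’s, with made-up edge weights:

    # Shortest route on a small weighted graph using Dijkstra's algorithm
    # (illustrative example, edge weights are invented).
    import networkx as nx

    roads = nx.Graph()
    roads.add_weighted_edges_from([
        ("A", "B", 4), ("A", "C", 2), ("C", "B", 1), ("B", "D", 5), ("C", "D", 8),
    ])

    print(nx.dijkstra_path(roads, "A", "D"))         # ['A', 'C', 'B', 'D']
    print(nx.dijkstra_path_length(roads, "A", "D"))  # 8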

The third section covers the analysis of graphs. This starts with metrics for measuring graphs, such as vertex degree distributions, distance statistics and clustering measures. I found this section rather brief, and poorly illustrated. However, it is followed by an introduction to various classes of complex networks, including the original random graphs, small-world and scale-free networks. What struck me about complex graphs is that they are each complex in their own way. Random, small-world and scale-free networks are all methods for constructing a network in order to try to represent a known real-world situation. Small-world networks arise from one of Stanley Milgram’s experiments: sending post across the US via social networks. The key feature is that there are clusters of people who know each other, but these clusters are linked by the odd “longer range” contact.

The book finishes with some real-world examples relating to the world wide web, peer-to-peer sharing algorithms and social networks. What struck me about social networks is that the vertices (people!) you identify as important can depend quite sensitively on the metric you use to measure importance.

I picked up Graph Theory after I’d been working with Gephi, wanting to learn more about the things that Gephi will measure for me. It serves that purpose pretty well. In addition I have a better feel for situations where the answer is “graph theory”. Furthermore, Gephi has a bunch of network generators to create random, small-world and scale-free networks so that you can try out what you’ve learned.

Book review: Network Graph Analysis and Visualization with Gephi by Ken Cherven


This review was first published at ScraperWiki.

I generally follow the rule that if I haven’t got anything nice to say about something then I shouldn’t say anything at all. Network Graph Analysis and Visualization with Gephi by Ken Cherven challenges this principle.

Gephi is a system for producing network visualisations and, as such, it doesn’t have a great many competitors. Fans of Unix will have used Graphviz for this purpose in the past, but Gephi offers greater flexibility in a more user-friendly package. Graph theory and network analysis have been growing in importance over the past few years, in part because of developments in the analysis of various complex systems using network science. As a physical scientist I’ve been aware of this trend, and it clearly also holds in the social sciences. Furthermore, there is much increased availability of network information from social media such as Twitter and Facebook.

I’ve used Gephi a few times in the past, and to be honest there has been an air of desperate button clicking to my activities. That’s to say, I felt Gephi could provide the desired output but I could only achieve it by accident. I have an old-fashioned enthusiasm for books, even for learning about modern technology. Hence Network Graph Analysis and Visualization with Gephi – the only book I could find with Gephi in the title. There is substantial online material to support Gephi, but I hoped that this book would give me a better insight into how Gephi worked and some wider understanding of graph theory and network analysis.

On the positive side I now have a good understanding of the superficial side of the interface, a feel for how a more expert user thinks about Gephi, and some tricks to try.

I discovered from Network Graph Analysis that the “Overview” view in Gephi is what you might call “Draft”, a place to prepare visualisations which allows detailed interaction. And the “Preview” view is what you might call “Production”, a place where you make a final, beautiful version of your visualisations.

The workflow for Gephi is to import data and then build a visualisation using one of a wide range of layout algorithms. For example, force-based layouts assume varying forces between nodes, from which an arrangement of nodes can be calculated by carrying out a pseudo-physical simulation. These algorithms can take a while to converge, and may get trapped in local minima. The effect of these layout algorithms is to reveal some features of the network. For example, the force layouts can reveal clusters of nodes which might also be discovered by a more conventional statistical clustering algorithm. The concentric layout allows a clearer visualisation of hierarchy in a network.
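
For a feel of what a force-directed layout actually computes, here is a rough sketch using networkx’s spring layout as a stand-in for Gephi’s own algorithms (the graph and parameters are just illustrative):

    # Compute a force-directed ("spring") layout for a small-world test
    # network and print a few of the resulting 2D node positions.
    import networkx as nx

    g = nx.watts_strogatz_graph(30, k=4, p=0.1)              # small-world test network
    positions = nx.spring_layout(g, iterations=50, seed=1)   # pseudo-physical force simulation

    for node, (x, y) in list(positions.items())[:5]:
        print(f"node {node}: x={x:.2f}, y={y:.2f}")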

It’s clear that the plugin ecosystem is important to the more experienced user of Gephi. Plugins provide layout algorithms, data helpers, new import and export functionality, analysis and so forth. You can explore them in the Gephi marketplace.

Cherven recommends a fairly small, apparently well-chosen set of references to online resources and books. The Visual Complexity website looks fabulous. You can read the author’s complete, pre-publication draft of Networks, Crowds and Markets: Reasoning about a highly connected world by David Easley and Jon Kleinberg here. It looks good but it’s nearly 800 pages! I’ve opted for the rather shorter Graph Theory and Complex Networks: An Introduction by Maarten van Steen.

On the less positive side, this is an exceedingly short book. I read it in a couple of 40 minute train journeys. It’s padded with detailed descriptions of how to install Gephi and plugins, including lots of screenshots. The coverage is superficial, so whilst features may be introduced the explanation often tails off into “…and you can explore this feature on your own”.

Network Graph Analysis is disappointing; it does bring a little enlightenment to a new user of Gephi, but not very much. A better book would have provided an introduction to network and graph analysis, with Gephi the tool used to provide practical experience and examples, in the manner that Data Mining does for Weka and Natural Language Processing with Python does for the NLTK library.

This book may be suitable for someone who is thinking about using Gephi and isn’t very confident about getting started. The best alternative that I’ve found is the online material on GitHub (here).