Tag: scraperwiki

Visualising the London Underground with Tableau

This post was first published at ScraperWiki.

I’ve always thought of the London Underground as a sort of teleportation system. You enter a portal in one place and, with relatively little effort, appear at a portal in another place. Although in Star Trek our heroes entered a special room and stood well-separated on platforms, rather than packing themselves into metal tubes.

I read Christian Wolmar’s book, The Subterranean Railway, about the history of the London Underground a while ago. At the time I wished for a visualisation of the growth of the network, since the text description was a bit confusing. Fast forward a few months, and I find myself repeatedly in London wondering at the horror of the rush hour underground. How do I avoid being forced into some sort of human compression experiment?

Both of these questions can be answered with a little judicious visualisation!

First up, the history question. It turns out that other obsessives have already made a table containing a list of the opening dates for the London Underground. You can find it here, on Wikipedia. These sortable tables are a little tricky to scrape: they can be copy-pasted into Excel, but random blank rows appear. And the data used to control the sorting of the columns did confuse our Table Xtract tool, until I fixed it – just to solve my little problem! You can see the number of stations opened in each year in the chart below. It all started in 1863; electric trains were introduced in the very final years of the 19th century, leading to a burst of activity. Then things went quiet after the Second World War, when the car came to dominate transport.
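
For anyone repeating the scrape today, pandas can often read these sortable tables directly, which sidesteps the copy-paste problem. A minimal sketch – the URL and the “Opened” column name are assumptions, not a record of exactly what I did:

```python
import pandas as pd

# Assumed source page and column name - adjust to the actual table
url = "https://en.wikipedia.org/wiki/List_of_London_Underground_stations"
tables = pd.read_html(url)      # parses every <table> on the page into DataFrames
stations = tables[0]            # assume the first table is the station list

# Parse the opening dates and count how many stations opened in each year
stations["Opened"] = pd.to_datetime(stations["Opened"], errors="coerce")
openings_per_year = stations["Opened"].dt.year.value_counts().sort_index()
print(openings_per_year)
```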

[Image: Timeline2 – number of stations opened each year]

Originally I had this chart coloured by underground line, but this is rather misleading since the Wikipedia table gives the line a station is currently on rather than the one it was originally built for. For example, Stanmore station opened in 1932 as part of the Metropolitan line; it was transferred to the Bakerloo line in 1939 and then to the Jubilee line in 1979. You can see the years in which lines opened here on Wikipedia, where it becomes apparent that the name of an underground line is fluid.

So I have my station opening date data. How about station locations? Well, they too are available thanks to the work of folk at OpenStreetMap; you can find that data here. Latitude-longitude coordinates are all very well, but really we also need the connectivity, and what about Harry Beck’s iconic “circuit diagram” tube map? It turns out both of these issues can be addressed by digitizing station locations from the modern version of Beck’s map. I have to admit this was a slightly laborious process: I used ImageJ to extract coordinates manually.

I’ve shown the underground map coloured by the age of stations below.

[Image: Age map2 – tube map coloured by the age of stations]

Deep reds for the oldest stations, on the Metropolitan and District lines built in the second half of the 19th century. Pale blue for middle-aged stations, the Central line heading out to Epping and West Ruislip. And finally the most recent stations on the Jubilee line towards Canary Wharf and North Greenwich are a darker blue.

Next up is traffic, or how many people use the underground. The Wikipedia page contains information on usage in 2012, in terms of millions of passengers per year, covering both entries and exits. I’ve shown this data below, with the traffic at each station represented by the thickness of the line.

[Image: Traffic – tube map with line thickness showing traffic at each station]

I rather like a “fat lines” presentation of the number of people using a station: the fatter the line at the station, the more people going in and out. Of course some stations serve multiple lines and so get an unfair advantage. Correcting for this, it turns out Canary Wharf is the busiest station on the underground; thankfully it’s built for it – small above ground, but beneath it is a massive, cathedral-like space.
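
The correction itself is just a per-line normalisation. A sketch of the idea with made-up numbers (the passenger figures and line counts below are illustrative, not the real 2012 data):

```python
import pandas as pd

# Illustrative figures only - not the real 2012 entry/exit data
traffic = pd.DataFrame({
    "station": ["Canary Wharf", "King's Cross St Pancras", "Oxford Circus"],
    "entries_exits_m": [40.0, 85.0, 98.0],   # millions of passengers per year
    "lines_served": [1, 6, 3],
})

# Normalise by the number of lines to remove the interchange advantage
traffic["per_line_m"] = traffic["entries_exits_m"] / traffic["lines_served"]
print(traffic.sort_values("per_line_m", ascending=False))
```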

More data is available as a result of a Freedom of Information request (here), which gives figures broken down by passenger action (boarding or alighting), underground line, direction of travel and time of day – in fairly coarse chunks of the day. I use this data in the chart below to measure the “commuteriness” of each station. To do this I take the ratio of people boarding trains in the 7am-10am time slot to those boarding in the 4pm-7pm slot. For locations with lots of commuters this will be a big number, because lots of people get on the train to go to work in the morning but not many get on the train in the evening – that’s when everyone is getting off the train to go home.
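
In code the “commuteriness” measure is a one-line ratio once the FOI data is in a table. A sketch with hypothetical column names and invented counts:

```python
import pandas as pd

# Hypothetical reshaping of the FOI data: boardings per station per time slot
boardings = pd.DataFrame({
    "station": ["Pinner", "Pinner", "Canary Wharf", "Canary Wharf"],
    "time_slot": ["07:00-10:00", "16:00-19:00", "07:00-10:00", "16:00-19:00"],
    "boarders": [2900, 350, 11000, 48000],   # invented counts, for illustration
})

pivot = boardings.pivot(index="station", columns="time_slot", values="boarders")
pivot["commuter_ratio"] = pivot["07:00-10:00"] / pivot["16:00-19:00"]
print(pivot.sort_values("commuter_ratio", ascending=False))
```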

[Image: CommuterRatios – commuter ratio chart]

By this measure the top five locations for “commuteriness” are:

  1. Pinner
  2. Ruislip Manor
  3. Elm Park
  4. Upminster Bridge
  5. Burnt Oak

It was difficult not to get sidetracked during this project: someone used the Freedom of Information Act to get the depths of all of the underground stations, so obviously I had to include that data too! The deepest underground station is Hampstead, in part because the station itself is at the top of a steep hill.

I’ve made all of this data into a Tableau visualisation which you can play with here. The interactive version shows you details of the stations as your cursor floats over them, allows you to select individual lines, and lets you change the data overlaid on the map, including the depth and altitude data.

Book review: The Signal and the Noise by Nate Silver

[Image: thesignalandthenoise – book cover]

This review was first published at ScraperWiki.

Nate Silver first came to my attention during the 2008 presidential election in the US. He correctly predicted the outcome of the November results in 49 of 50 states, missing only on Indiana where Barack Obama won by just a single percentage point. This is part of a wider career in prediction: aside from a job at KPMG (which he found rather dull), he has been a professional poker player and run a baseball statistics website.

His book The Signal and the Noise: The Art and Science of Prediction looks at prediction in a range of fields: economics, disease, chess, baseball, poker, climate, weather, earthquakes and politics. It is both a review and a manifesto; a manifesto for making better predictions.

The opening chapter is on the catastrophic miscalculation of the default rate for collateralized debt obligations (CDO) which led in large part to the recent financial crash. The theme for this chapter is risk and uncertainty. In this instance, risk means the predicted default rate for these financial instruments and uncertainty means the uncertainty in those predictions. For CDOs the problem was that the estimates of risk were way off, and there was no recognition of the uncertainty in those risk estimates.

This theme of unrecognised uncertainty returns for the prediction of macroeconomic indicators such as GDP growth and unemployment. Here again forecasts are made, and practitioners in the field know these estimates are subject to considerable uncertainty, but the market for prediction and simple human frailty mean that these uncertainties are ignored. I’ve written elsewhere on both the measurement and prediction of GDP. In brief, both sides of the equation are so fraught with uncertainty that you might as well toss a coin to predict GDP!

The psychology of prediction returns with a discussion of political punditry and the idea of “foxes” and “hedgehogs”. In this context “hedgehogs” are people with one Big Idea who make their predictions based on their Big Idea and are unshiftable from it. In the political arena the “Big Idea” may be ideology, but as a scientist I can see that science can be afflicted in the same way. To a man with a hammer, everything is a nail. “Foxes”, on the other hand, are more eclectic and combine a range of data and ideas to predict; as a result they are more successful.

In some senses, the presidential prediction is a catching-chickens-in-a-coop exercise. There is quite a bit of data to use, and your fellow political pundits are typically misled by their own prejudices (they’re “hedgehogs”), so all you, the “fox”, need to do is calculate in a fairly rational manner and you’re there. Silver returns to the idea of diversity in prediction, combining multiple models or data of multiple types, in his manifesto for better prediction.

There are several chapters on scientific prediction, looking at the predictions of earthquakes, the weather, climate and disease. There is not an overarching theme across these chapters. The point about earthquakes is that precise prediction about where, when and how big appears to be impossible. At short range, the weather is predictable but, beyond a few days, seasonal mean predictions and “the same as today” predictions are as good as the best computer simulations.

Other interesting points raised by this chapter are the ideas of judging predictions by accuracy, honesty and economic value. Honesty in this sense means “is this the best model I can make at this point?”. The interesting thing about weather forecasts in the US is that the National Weather Service makes the results of its simulations available to all. Big national value-adders such as the Weather Channel produce a “wet bias” forecast which systematically overstates the chance of rain when the true chance is low. This is because customers prefer to be told that there is, say, a 20% chance of rain and then see it turn out dry, than be told the actual chance of rain (say 5%) and then get rained on. This “wet bias” gives customers what they want.

Finally, there are predictions on games: poker, baseball, basketball and chess. The benefit of these domains is the large amount of data available; the US in particular has a thriving “gaming statistics” culture. The problems are closed, in the sense that there is a set of known rules to each game. And finally, there is ample opportunity for testing predictions against outcomes. This final factor is important in improving prediction.

In technical terms, Silver’s call to arms is for the Bayesian approach to predictions. With this approach, an attempt is made to incorporate prior beliefs into prediction through the use of an estimate of the prior probability and Bayes’ Theorem. Silver exemplifies this with a discussion of prediction in poker. Essentially, poker is about predicting your opponents’ hands and their subsequent actions based on those cards. To be successful, poker players must do this repeatedly and intuitively. However, luck still plays a large part in poker and only the very best players make a profit long term.
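
A toy Bayes’ theorem update in the spirit of the book – how much more likely is a strong hand once an opponent raises? All the probabilities here are invented for illustration:

```python
# All numbers invented for illustration
prior_premium = 0.05          # P(premium hand) before seeing any action
p_raise_given_premium = 0.90  # P(raise | premium hand)
p_raise_given_other = 0.10    # P(raise | any other hand)

p_raise = (p_raise_given_premium * prior_premium
           + p_raise_given_other * (1 - prior_premium))
posterior = p_raise_given_premium * prior_premium / p_raise
print(f"P(premium hand | raise) = {posterior:.2f}")   # roughly 0.32 with these numbers
```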

The book is structured around a series of interviews with workers in the field, and partly as a result of this it reads rather nicely – it’s not just a dry exposition of facts, but has some human elements too. Though it’s no replacement for a technical reference, it’s worth reading as a survey of prediction, as well as for the human interest stories.

Face ReKognition

[Image: G8Italy2009]

This post was first published at ScraperWiki. The ReKognition API has now been withdrawn.

I’ve previously written about social media and the popularity of our Twitter Search and Followers tools. But how can we make Twitter data more useful to our customers? Analysing the profile pictures of Twitter accounts seemed like an interesting thing to do, since they are often the face of the account holder and a face can tell you a number of things about a person, such as their gender, age and race. This type of demographic information is useful for marketing and for understanding who your product appeals to. It could also be a way of tying together public social media accounts, since people like me use the same image across multiple accounts.

Compact digital cameras have offered face recognition for a while, and on my PC, Picasa churns through my photos identifying the people in them. I’ve been doing image analysis for a long time, although never before on faces. My first effort at face recognition involved the OpenCV library. OpenCV provides a whole suite of image analysis functions which do far more than just detect faces. However, getting it installed and working with the Python bindings on a PC was a bit fiddly, the documentation was patchy and the built-in face analysis capabilities were poor.
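
For the record, basic face detection with the current Python bindings is short enough; a sketch using the Haar cascade bundled with the pip opencv-python package (the filenames are placeholders):

```python
import cv2

# Haar cascade shipped with the opencv-python package
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("photo.jpg")                 # placeholder filename
grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # detection works on greyscale

faces = cascade.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
print(f"Found {len(faces)} face(s)")
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("photo_faces.jpg", image)
```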

Fast forward a few months, and I spotted that someone had cast the ReKognition API over the images that the British Library had recently released, a dataset I’ve been poking around at too. The ReKognition API takes an image URL and a list of characteristics in which you are interested. These include gender, race, age, emotion, whether or not you are wearing glasses or, oddly, whether you have your mouth open. Besides this summary information it returns a list of feature locations (i.e. locations in the image of eyes, mouth, nose and so forth). It’s straightforward to use.
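
The API has since been withdrawn, so the sketch below is only an illustration of the general shape of a call – the endpoint, parameter names and response fields are assumptions, not a working recipe:

```python
import requests

# Illustration only: endpoint, parameters and response fields are assumptions
params = {
    "api_key": "YOUR_KEY",
    "api_secret": "YOUR_SECRET",
    "jobs": "face_gender_race_age_emotion_glass",        # characteristics of interest
    "urls": "http://example.com/profile_picture.jpg",    # image to analyse
}
response = requests.get("https://rekognition.example.com/face_detect", params=params)

for face in response.json().get("face_detection", []):
    print(face.get("sex"), face.get("age"), face.get("glasses"))
```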

But who should be the first targets for my image analysis? Obviously, the ScraperWiki team! The pictures are quite small but ReKognition identified me as a “Happy, white, male, age 46 with no glasses on and my mouth shut”. Age 46 is a bit harsh – I’m actually 39 in my profile picture. A second target came out as “Happy, Indian, male, age 24.7, with glasses on and mouth shut”. This was fairly accurate: Zarino was 25 when the photo was taken, he is male and has his glasses on, but he is not Indian. Two (male) members of the team have still not forgiven ReKognition for describing them as female, particularly the one described as a 14-year-old.

Fun as it was, this doesn’t really count as an evaluation of the technology. I investigated further by feeding in the photos of a whole load of famous people. The results are shown in the chart below. The horizontal axis is someone’s actual age; the vertical axis shows their age as predicted by ReKognition. If the predictions were correct the points representing the celebrities would fall on the solid line. The dotted line shows a linear regression fit to the data. The equation of the line, y = 0.673x (I constrained it to pass through zero), tells us that age is consistently under-predicted by a third, or perhaps celebrities look younger than they really are! The R² parameter tells us how good the fit is: a value of 0.7591 is not too bad.
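
The fit through the origin is a one-liner in NumPy; a sketch with invented (actual, predicted) ages standing in for the celebrity data:

```python
import numpy as np

# Invented stand-ins for the celebrity (actual, predicted) ages
actual = np.array([35.0, 42.0, 50.0, 61.0, 28.0, 73.0])
predicted = np.array([26.0, 30.0, 33.0, 39.0, 21.0, 45.0])

# Least-squares slope with the intercept fixed at zero
slope = np.sum(actual * predicted) / np.sum(actual ** 2)

residuals = predicted - slope * actual
r_squared = 1 - np.sum(residuals ** 2) / np.sum((predicted - predicted.mean()) ** 2)
print(f"y = {slope:.3f}x, R^2 = {r_squared:.3f}")
```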

[Image: ReKognitionFacePeopleChart – predicted versus actual age for famous people]

I also tried out ReKognition on a couple of class photos – taken at reunions, graduations and so forth. My thinking here was that I would get a cohort of people aged within a year of each other. These actually worked quite well; for older groups of people I got a standard deviation of only 5 years across a group of, typically, 10 people. A primary school class came out at 16±9 years, which wasn’t quite so good. I suspect the performance here is related to the fact that such group photos are taken relatively carefully, so the lighting and setup for each face in the photo is, by its nature, the same.

Looking across these experiments: ReKognition is pretty good at finding faces in photos, and at not finding faces where there are none (about 90% accurate). It’s fairly good with gender (getting it right about 80% of the time, typically struggling a bit with younger children), and it detects glasses pretty well. I don’t feel I tested it well on race. On age, results are variable: for the ScraperWiki set the R² value for the linear regression between actual and detected ages is about 0.5, whilst for famous people it is about 0.75. In both cases it tends to under-estimate age and has never given an age above 55, despite being fed several more mature celebrities and grandparents. So on age it definitely tells you something, and under certain circumstances it can be quite accurate. Don’t forget the images we’re looking at are completely unconstrained; they’re not passport photos.

Finally, I applied face recognition to Twitter followers for the ScraperWiki account, and my personal account. The Summarise This Data tool on the ScraperWiki Platform provides a quick overview of the data added by face recognition.

[Image: face_recognition_data – Summarise This Data view of the face recognition results]

It turns out that a little over 50% of the followers of both accounts have a picture of a human face as their profile picture. It’s clear the algorithm makes the odd error, mis-identifying things that are not human faces as faces (including the back of a London Taxi Cab). There’s also the odd sketch or cartoon of a face rather than a photo, and some accounts have pictures of famous people rather than the account holder. Roughly a third of the followers of either account are identified as wearing glasses, and three quarters of them look happy. Average ages in both cases were 30. The breakdown in terms of race is 70:13:11:7 White:Asian:Indian:Black. Finally, my followers are approximately 45% female, and those of ScraperWiki are about 30% female.
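
The summary itself is straightforward once the face data is attached to each follower; a hypothetical sketch, assuming column names like those below were added by the face recognition step:

```python
import pandas as pd

# Placeholder filename and assumed column names (face_found and glasses as 0/1 flags)
followers = pd.read_csv("followers_with_faces.csv")

print("fraction with a face:  ", followers["face_found"].mean())
print("fraction with glasses: ", followers["glasses"].mean())
print("mean age:              ", followers["age"].mean())
print("percent female:        ", (followers["sex"] == "female").mean() * 100)
print(followers["race"].value_counts(normalize=True))
```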

We’re now geared up to apply this to lists of Twitter followers – are you interested in learning more about your followers? Then send us an email and we’ll be in touch.

Book review: Hadoop in Action by Chuck Lam

[Image: HadoopInAction – book cover]

This review was first published at ScraperWiki.

Hadoop in Action by Chuck Lam provides a brief, fairly technical introduction to the Hadoop Big Data ecosystem. Hadoop is an open source implementation of the MapReduce framework originally developed by Google to process huge quantities of web search data. The name MapReduce refers to dividing up jobs amongst multiple processors (“Mapping”) and then recombining the results to provide an answer to the problem (“Reducing”). Hadoop allows users to process large quantities of data by distributing it across a network of relatively cheap computers. Each computer in the network processes a portion of the data, and at the end the results are combined to give the final answer. Hadoop provides the infrastructure to enable this; in a sense it is a distributed operating system which provides fundamental services to applications such as Hive and Pig.
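
The canonical illustration is counting words. The pure-Python sketch below shows the shape of the two halves – it is not Hadoop code, just the Map and Reduce steps written out:

```python
from collections import defaultdict

def mapper(lines):
    """Map: emit a (word, 1) pair for every word in this node's share of the data."""
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def reducer(pairs):
    """Reduce: combine the pairs by summing the counts for each word."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

text = ["Hadoop divides the work across machines",
        "then Hadoop recombines the partial results"]
print(reducer(mapper(text)))
```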

At ScraperWiki we’ve had many philosophical discussions about the meaning of Big Data. What size is Big Data? Is it one million lines? Is it one billion lines? Should we express it in terms of gigabytes and terabytes rather than lines?

For many, Big Data is data that requires you use Hadoop or similar to process.

Our view is that Big Data is data big enough to break the tools or hardware you commonly use, so for many of our customers this is a software limit set by Microsoft Excel. Technically Excel can handle a million or so rows, but practically speaking life gets uncomfortable as you go above a few tens of thousands of rows.

The largest dataset a customer has come to us with so far is the UK MOT test data – the results of the roadworthiness test for every vehicle on the road in the UK that is over three years old. This dataset is about 100 million lines and 4 gigabytes per year, and it’s available back to 2005, giving a total of approximately 1 billion lines and 32GB. A single year can be readily analysed on an Intel i7 laptop with 8GB RAM, MySQL and Tableau. By “readily” I mean that some indexing jobs can take up to an hour, but once indexed most queries take 10–20 minutes at most.
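
The single-year workflow is roughly the sketch below: bulk-load the CSV into MySQL in chunks, then build the index that makes later queries bearable. Filenames, column names and the connection string are placeholders:

```python
import pandas as pd
from sqlalchemy import create_engine, text

# Placeholder connection string and filenames
engine = create_engine("mysql+pymysql://user:password@localhost/mot")

# Load one year of results in manageable chunks
for chunk in pd.read_csv("mot_results_2013.csv", chunksize=500_000):
    chunk.to_sql("test_results", engine, if_exists="append", index=False)

# The indexing step - this is the part that can take up to an hour
with engine.begin() as conn:
    conn.execute(text("CREATE INDEX idx_make_model ON test_results (make, model)"))
```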

At ScraperWiki a number of us have backgrounds in the physical sciences, where we’ve been interested in computationally intensive simulations involving clusters of commodity hardware which pre-date Hadoop (Beowulf), or in big data from the Large Hadron Collider. Physical scientists have long been interested in parallel computing where the amount of data to move around is small but the amount of calculation to do is large. The point here is that parallel computing is possible for a subset of problems where a task can be divided up into smaller chunks of data and processing to be distributed amongst multiple processors or machines. In the case of photorealistic computer graphics rendering this might be frames of video, or a portion of a whole scene. Software like Matlab, Fortran and computer graphics renderers can parallelise certain operations with relative ease. The difficulty has always been turning your big computing problem into one of those “certain operations”. The Large Hadron Collider is an example more suited to the Hadoop-style approach: the data flows are enormous, but the calculations performed on that data are comparatively less troublesome.

Hadoop in Action spends a significant amount of time discussing the core Hadoop system and MapReduce processing framework. I must admit to finding this part rather dull. I perked up when we reached Pig, described as a data processing language, and Hive, a SQL-like system. One gets the impression the Pig system was built around a naming convention pushed too far: the Pig command line is called Grunt and the language used by Pig is Pig Latin. Pig and Hive look like systems where I could sit down and run some queries in a language that looks like my old friend, SQL.

The book finishes with some case studies: an image conversion problem, machine learning and data processing at China Mobile, the StumbleUpon social bookmarking system and providing search for IBM’s intranet. In the latter three cases users were migrating from SQL-based systems running on monolithic hardware. To give an idea of scale: China Mobile collects terabytes of data per day across hundreds of millions of customers, the IBM intranet has something like 100 million pages and 16 million documents, and StumbleUpon has 25 million users clicking their Stumble buttons about 1 billion times in a month.

Overall, a handy introduction to Hadoop, although perhaps oddly pitched – it’s probably too technical for most managers, not technical enough for system administrators, and with insufficient applications for data scientists. If you want to get hands-on experience of Hadoop, the Hortonworks Sandbox provides a pre-packaged virtual machine with a web interface for you to try out the various technologies.

If you want us to help you get value out of your big data or even Big Data, please get in touch!

Book review: Python for Data Analysis by Wes McKinney

[Image: PythonForDataAnalysis_cover – book cover]

This review was first published at ScraperWiki.

As well as developing scrapers and a data platform, at ScraperWiki we also do data analysis. Some of this is just because we’re interested; other times it’s because clients don’t have the tools or the time to do the analysis they want themselves. Often the problem is with the size of the data. Excel is the universal solvent for data analysis problems – go look at any survey of data scientists. But Excel has its limitations. There is the technical limit of around a million rows, but well before this size Excel becomes a pain to use.

There is another path – the programming route. As a physical scientist of moderate age I’ve followed these two data analysis paths in parallel: Excel for the quick look-see and some presentation; programming for bigger tasks, tasks I want to do repeatedly, and types of data Excel simply can’t handle – like image data. For me the programming path started with FORTRAN and the NAG libraries, from which I moved into Matlab. FORTRAN is pure, traditional programming born in the days when you had to light your own computing fire. Matlab and competitors like Mathematica, R and IDL follow a slightly different path. At their core they are specialist programming languages, but they come embedded in graphical environments which can be used interactively. You type code at a prompt and stuff happens: plots pop up and so forth. You can capture this interaction and put it into scripts/programs, or simply write programs from scratch.

Outside the physical sciences, data analysis often means databases. Physical scientists are largely interested in numbers; other sciences and business analysts are often interested in a mixture of numbers and categorical things. For example, in analysing the performance of a drug you may be interested in the dose (i.e. a number) but also in categorical features of the patient such as gender and their symptoms. Databases, and analysis packages such as R and SAS, are better suited to this type of data. Business analysts appear to move from Excel to Tableau as their data get bigger and more complex. Tableau gives easy visualisation of database-shaped data and provides connectors to many different databases. My workflow at ScraperWiki is often Python to SQL database to Tableau.

Python for Data Analysis by Wes McKinney draws these threads together. The book is partly about the range of tools which make Python an alternative to systems like R, Matlab and their ilk, and partly a guide to McKinney’s own contribution to this area: the pandas library. Pandas brings R-like dataframes and database-like operations to Python. It helps keep all your data analysis needs in one big Python-y tent. Dataframes are 2-dimensional tables of data whose rows and columns have indexes, which can be numeric but are typically text. The pandas library provides a great deal of functionality for processing dataframes, in particular enabling filtering and grouping calculations which are reminiscent of the SQL database workflow. The indexes can be hierarchical. As well as the 2-dimensional Dataframe, pandas also provides a 1-dimensional Series and a 3-dimensional Panel data structure.
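
A small taste of the filtering and grouping the book covers, using the drug-trial flavour of data mentioned earlier (all values made up):

```python
import pandas as pd

# Made-up drug trial data: a number (dose) plus categorical features
df = pd.DataFrame({
    "patient": ["a", "b", "c", "d", "e", "f"],
    "gender": ["f", "m", "f", "m", "f", "m"],
    "dose_mg": [10, 10, 20, 20, 30, 30],
    "response": [0.2, 0.1, 0.5, 0.4, 0.9, 0.7],
})

high_dose = df[df["dose_mg"] >= 20]                         # filtering, like a SQL WHERE
by_gender = high_dose.groupby("gender")["response"].mean()  # grouping, like GROUP BY
print(by_gender)
```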

I’ve already been using pandas in the Python part of my workflow. It’s excellent for importing data, and simplifies the process of reshaping data for upload to a SQL database and onwards to visualisation in Tableau. I’m also finding it can be used to help replace some of the more exploratory analysis I do in Tableau and SQL.

Outside of pandas, the key technologies McKinney introduces are the IPython interactive console and the NumPy library. I mentioned the IPython notebook in my previous book review. IPython gives Python the interactive analysis capabilities of systems like Matlab. The NumPy library is a high performance library providing simple multi-dimensional arrays, comforting those who grew up with a FORTRAN background.
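
A quick flavour of NumPy’s appeal – vectorised arithmetic on whole arrays with no explicit loops:

```python
import numpy as np

a = np.arange(12, dtype=float).reshape(3, 4)  # a 3x4 array holding 0..11
row_means = a.mean(axis=1)                    # one mean per row
centred = a - row_means[:, np.newaxis]        # broadcast the subtraction across columns
print(centred)
```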

Why switch from commercial offerings like Matlab to the Python ecosystem? Partly it’s cost: the pricing model for Matlab has a moderately expensive core (i.e. $1000) with further functionality in moderately expensive toolboxes (more $1000s). Furthermore, the most painful and complex thing I did at my previous (very large) employer was represent users in the contractual interactions between my company and Mathworks to license Matlab and its associated toolboxes for hundreds of employees spread across the globe. These days Python offers me a wider range of high quality toolboxes, and at its core it’s a respectable programming language with all the features and tooling that brings. If my code doesn’t run it’s because I wrote it wrong, not because my colleague in Shanghai has grabbed the last remaining network license for a key toolbox. R still offers statistical analysis with greater gravitas and some really nice, publication quality plotting, but it does not have the air of a general purpose programming language.

The parts of Python for Data Analysis which I found most interesting and engaging were the examples of pandas code in “live” usage. Early in the book this includes analysis of first names for babies in the US over time, with later examples from the financial sector, in which the author worked. Much of the rest is very heavy on code snippets, which distracts from a straightforward reading of the book. In some senses Mining the Social Web has really spoiled me – I now expect a book like this to come with an IPython Notebook!