Category: Book Reviews

Reviews of books featuring a summary of the book and links to related material

Book review: Graph Theory and Complex Networks by Maarten van Steen


This review was first published at ScraperWiki.

My last read, on the Gephi graph visualisation package, was a little disappointing but gave me an enthusiasm for graph theory. So I picked up one of the books that it recommended, Graph Theory and Complex Networks: An Introduction by Maarten van Steen, to learn more. In this context a graph is a collection of vertices connected by edges; the edges may be directed or undirected. The road network is an example of a graph: the junctions between roads are vertices, the edges are roads, and a one-way street is a directed edge – two-way streets are undirected.
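
To make the terminology concrete, here is how the road-network analogy might look in code – a minimal sketch using the Python networkx library, with junction names invented for illustration (networkx is my choice, not something from the book):

```python
# A minimal sketch of the road-network analogy using networkx.
# The junction names are invented; networkx is my choice, not the book's.
import networkx as nx

roads = nx.DiGraph()  # a directed graph: every edge has a direction

# A two-way street is represented by a pair of opposing directed edges
roads.add_edge("Town Hall", "Market Square")
roads.add_edge("Market Square", "Town Hall")

# A one-way street gets a single directed edge
roads.add_edge("Market Square", "Station")

print(list(roads.nodes()))  # the junctions (vertices)
print(list(roads.edges()))  # the streets (edges), each with a direction
```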

Why study graph theory?

Graph theory underpins a bunch of things like route finding, timetabling, map colouring, communications routing, sol-gel transitions, ecologies, parsing mathematical expressions and so forth. It’s been a staple of Computer Science undergraduate courses for a while, and more recently there’s been something of a resurgence in the field, with systems on the web providing huge quantities of graph-shaped data both in terms of the underlying hardware networks and the activities of people – the social networks.

Sometimes the links between graph theory and an application are not so obvious. For example, project planning can be understood in terms of graph theory. A task can depend on another task – the tasks being two vertices in a graph. The edge between such vertices is directed, from one to the other, indicating dependency. To give a trivial example: you need a chicken to lay an egg. As a whole, a graph of tasks cannot contain loops (or cycles), since this would imply that a task depended on a task that could only be completed after it had itself been completed. To return to my example: if you need an egg in order to get a chicken to lay an egg then you’re in trouble! Generally, networks of tasks should be directed acyclic graphs (or DAGs), i.e. they should not contain cycles.
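
To make the cycle condition concrete, here is a minimal sketch of checking a task graph for cycles with the Python networkx library; the chicken-and-egg tasks follow the example above, but the code is mine rather than the book’s:

```python
# Hypothetical task-dependency graph: an edge A -> B means "A must be done before B".
import networkx as nx

tasks = nx.DiGraph()
tasks.add_edge("obtain chicken", "lay egg")      # a chicken is needed to lay an egg
tasks.add_edge("lay egg", "obtain chicken")      # ...and an egg to obtain a chicken: trouble!

print(nx.is_directed_acyclic_graph(tasks))       # False - this plan cannot be scheduled
print(list(nx.simple_cycles(tasks)))             # shows the offending cycle

tasks.remove_edge("lay egg", "obtain chicken")   # break the cycle
print(list(nx.topological_sort(tasks)))          # now a valid task ordering exists
```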

The book’s target audience is 1st or 2nd year undergraduates with a moderate background in mathematics; it was developed for Computer Science undergraduates. The style is quite mathematical but fairly friendly. The author’s intention is to introduce the undergraduate to mathematical formalism. I found this useful, since mathematical symbols are difficult to search for and shorthands such as operator overloading even more so. This said, it is still an undergraduate text rather than a popular account – don’t expect an easy read, or pretty pictures, or even pretty illustrations.

The book divides into three chunks. The first provides the basic language for describing graphs, both words and equations. The second part covers theorems arising from some of the basic definitions, including the ideas of “walks” – traversals of a graph which take in all vertices – and “tours”, which take in all edges. This includes long-standing problems such as route finding, via Dijkstra’s algorithm, and the travelling salesman problem. Also included in this section are “trees” – networks with no cycles – where a cycle is a closed walk which visits each vertex just once.
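
For a flavour of the algorithms in this part, here is a minimal sketch of Dijkstra’s shortest-path algorithm in Python; the toy graph and its edge weights are my own invention, not an example from the book:

```python
import heapq

def dijkstra(graph, start):
    """Shortest distances from start; graph is a dict {node: {neighbour: edge_weight}}."""
    dist = {start: 0}
    queue = [(0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue  # a stale queue entry - a shorter route was already found
        for neighbour, weight in graph.get(node, {}).items():
            new_d = d + weight
            if new_d < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_d
                heapq.heappush(queue, (new_d, neighbour))
    return dist

# A toy road network; weights might be distances in km
roads = {"A": {"B": 5, "C": 2}, "C": {"B": 1, "D": 7}, "B": {"D": 3}, "D": {}}
print(dijkstra(roads, "A"))  # {'A': 0, 'C': 2, 'B': 3, 'D': 6}
```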

The third section covers the analysis of graphs. This starts with metrics for measuring graphs, such as vertex degree distributions, distance statistics and clustering measures. I found this section rather brief, and poorly illustrated. However, it is followed by an introduction to various classes of complex networks, including the original random graphs, small-world and scale-free networks. What struck me about complex graphs is that they are each complex in their own way. Random, small-world and scale-free networks are all methods for constructing a network in order to try to represent a known real-world situation. Small-world networks arise from one of Stanley Milgram’s experiments: sending post across the US via social networks. The key feature is that there are clusters of people who know each other, but these clusters are linked by the odd “longer range” contact.
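
All three classes can be generated in a few lines with the Python networkx library, which is a convenient way to experiment with the definitions; the parameter values below are purely illustrative, not anything taken from the book:

```python
# Generators for the three classes of network discussed; parameter values are illustrative only.
import networkx as nx

random_graph = nx.erdos_renyi_graph(n=100, p=0.05)          # classic random graph
small_world  = nx.watts_strogatz_graph(n=100, k=6, p=0.1)   # local clusters plus a few long-range links
scale_free   = nx.barabasi_albert_graph(n=100, m=3)         # preferential attachment, hub-dominated

for name, g in [("random", random_graph), ("small-world", small_world), ("scale-free", scale_free)]:
    degrees = [d for _, d in g.degree()]
    print(name, "mean degree:", sum(degrees) / len(degrees), "max degree:", max(degrees))
```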

The book finishes with some real-world examples relating to the world wide web, peer-to-peer sharing algorithms and social networks. What struck me about social networks is that which vertices (people!) you identify as important can depend quite sensitively on the metric you use to measure importance.
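
That sensitivity is easy to demonstrate. The sketch below, using the Python networkx library on a made-up friendship network (not an example from the book), shows two standard centrality measures disagreeing about who matters most:

```python
# Different "importance" metrics can rank the same people quite differently.
import networkx as nx

g = nx.Graph()
# Two tight friendship clusters joined only through "Bridge"
g.add_edges_from([("Ann", "Bob"), ("Bob", "Cat"), ("Ann", "Cat"),
                  ("Cat", "Bridge"), ("Bridge", "Dan"),
                  ("Dan", "Eve"), ("Eve", "Fay"), ("Dan", "Fay")])

by_degree      = nx.degree_centrality(g)       # importance = number of direct contacts
by_betweenness = nx.betweenness_centrality(g)  # importance = lying on paths between others

print(max(by_degree, key=by_degree.get))            # a well-connected cluster member (Cat or Dan)
print(max(by_betweenness, key=by_betweenness.get))  # Bridge - the crucial go-between
```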

I picked up Graph Theory after I’d been working with Gephi, wanting to learn more about the things that Gephi will measure for me. It serves that purpose pretty well. In addition I have a better feel for situations where the answer is “graph theory”. Furthermore, Gephi has a bunch of network generators to create random, small-world and scale-free networks so that you can try out what you’ve learned.

Book review: Network Graph Analysis and visualization with Gephi by Ken Cherven


This review was first published at ScraperWiki.

I generally follow the rule that if I haven’t got anything nice to say about something then I shouldn’t say anything at all. Network Graph Analysis and visualization with Gephi by Ken Cherven challenges this principle.

Gephi is a system for producing network visualisations and, as such, it doesn’t have a great many competitors. Fans of Unix will have used Graphviz for this purpose in the past, but Gephi offers greater flexibility in a more user-friendly package. Graph theory and network analysis have been growing in importance over the past few years, in part because of developments in the analysis of various complex systems using network science. As a physical scientist I’ve been aware of this trend, and it clearly also holds in the social sciences. Furthermore, there is much increased availability of network information from social media such as Twitter and Facebook.

I’ve used Gephi a few times in the past, and to be honest there has been an air of desperate button clicking to my activities. That’s to say I felt Gephi could provide the desired output but I could only achieve it by accident. I have an old-fashioned enthusiasm for books, even for learning about modern technology. Hence Network Graph Analysis and visualization with Gephi – the only book I could find with Gephi in the title. There is substantial online material to support Gephi, but I hoped that this book would give me a better insight into how Gephi worked and some wider understanding of graph theory and network analysis.

On the positive side I now have a good understanding of the superficial side of the interface, a feel for how a more expert user thinks about Gephi, and some tricks to try.

I discovered from Network Graph Analysis that the “Overview” view in Gephi is what you might call “Draft”, a place to prepare visualisations which allows detailed interaction. And the “Preview” view is what you might call “Production”, a place where you make a final, beautiful version of your visualisations.

The workflow for Gephi is to import data and then build a visualisation using one of a wide range of layout algorithms. For example, force-based layouts assume varying forces between nodes, for which an arrangement of nodes can be calculated by carrying out a pseudo-physical simulation. These algorithms can take a while to converge, and may get trapped in local minima. The effect of these layout algorithms is to reveal some features of the network. For example, the force layouts can reveal clusters of nodes which might also be discovered by a more conventional statistical clustering algorithm. The concentric layout allows a clearer visualisation of hierarchy in a network.
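
The same family of force-directed algorithms exists outside Gephi; a minimal sketch using the spring layout in the Python networkx library (my own illustration, not from the book) shows how different random starting positions can settle into different local minima:

```python
# Force-directed ("spring") layout: nodes repel one another, edges pull like springs.
import networkx as nx

g = nx.karate_club_graph()  # a small, well-known social network bundled with networkx

# Two runs with different random starting positions (seeds)
pos_a = nx.spring_layout(g, iterations=50, seed=1)
pos_b = nx.spring_layout(g, iterations=50, seed=2)

# Same graph, but the coordinates differ: the pseudo-physical simulation
# has settled into two different local minima.
print(pos_a[0])
print(pos_b[0])
```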

It’s clear that the plugin ecosystem is important to the more experienced user of Gephi. Plugins provide layout algorithms, data helpers, new import and export functionality, analysis and so forth. You can explore them in the Gephi marketplace.

Cherven recommends a fairly small, apparently well-chosen set of references to online resources and books. The Visual Complexity website looks fabulous. You can read the authors’ complete, pre-publication draft of Networks, Crowds and Markets: Reasoning about a highly connected world by David Easley and Jon Kleinberg here. It looks good but it’s nearly 800 pages! I’ve opted for the rather shorter Graph Theory and Complex Networks: An Introduction by Maarten van Steen.

On the less positive side, this is an exceedingly short book. I read it in a couple of 40 minute train journeys. It’s padded with detailed descriptions of how to install Gephi and plugins, including lots of screenshots. The coverage is superficial, so whilst features may be introduced the explanation often tails off into “…and you can explore this feature on your own”.

Network Graph Analysis is disappointing; it does bring a little enlightenment to a new user of Gephi, but not very much. A better book would have provided an introduction to network and graph analysis, with Gephi as the tool to provide practical experience and examples, in the manner that Data Mining does for Weka and Natural Language Processing with Python does for the NLTK library.

This book may be suitable for someone who is thinking about using Gephi and isn’t very confident about getting started. The best alternative that I’ve found is the online material on GitHub (here).

Book review: Falling Upwards by Richard Holmes

I read Richard Holmes’ book The Age of Wonder some time ago; in it he made a brief mention of balloons in the 18th century. It pricked my curiosity, so when I saw his book Falling Upwards, all about balloons, I picked it up.

The chapters of Falling Upwards cover a series of key points in the development of ballooning, typically hydrogen balloons, from the last couple of decades of the 18th century to the early years of the 20th century. One of the early stories is a flight from my own home city, Chester. Thomas Baldwin recorded his flight in Airopaidia: Containing the Narrative of a Balloon Excursion from Chester, the eighth of September, 1785. The book does not have the air of a rigorous history of ballooning; it introduces technical aspects, but not systematically. It is impressionistic to a degree, and as a result a rather pleasant read. For Holmes the artistic and social impact of balloons is as important as the technical.

In the beginning there was some confusion as to the purposes to which a balloon might be put. Early suggestions included an aid for fast messengers, who would stay on the ground but use a small balloon to give them “10 league boots”; there were similar suggestions for helping heavy goods vehicles.

In practice, for much of the period covered, balloons were used mainly for entertainment – both for pleasure trips and for aerial displays involving acrobatics and fireworks. Balloons were also used for military surveillance. Holmes provides chapters on their use in the American Civil War by the Union side (and very marginally by the Confederates), and in the Franco-Prussian war, when they were used to break the Prussian siege of Paris (or at least bend it). The impression gained, though, is that they were something like novelty items for surveillance. Even by the time of the American Civil War in the 1860s it wasn’t routine or obvious that one must use balloon surveillance; it wasn’t a well-established technique. This was likely a limitation of both the balloons themselves and the infrastructure required to get them in the air.

Balloons offered little real utility in themselves, except in exceptional circumstances, but they made a link to heavier-than-air flight. They took man into the air and showed the possibilities, but for practical purposes they generally didn’t deliver – largely due to their unpredictability. To a large extent you have little control of where you will land in a balloon once you have gone up. Note, for example, that balloons were used to break the Prussian siege of Paris in the outbound direction only. A city the size of Paris is too small a target to hit, even for highly motivated fliers.

Nadar (pseudonym of Gaspard-Félix Tournachon), who lived in Paris, was one of the big promoters of just about anything. He fought a copyright battle with his brother over his adopted signature. Ballooning was one of his passions, and he inspired Jules Verne to start writing science fiction. His balloon, Le Géant, launched in 1863, was something of a culmination in ballooning – it was enormous, 60 metres high – but served little purpose other than to highlight the limitations of the form, as was Nadar’s intent.

From a scientific point of view, Falling Upwards covers James Glaisher and Henry Coxwell’s flights in the mid-nineteenth century. I was impressed by Glaisher’s perseverance in taking manual observations at a rate of one every 9 seconds throughout a 90-minute flight. Glaisher had been appointed by the British Association for the Advancement of Science to do his work; he was Superintendent for Meteorology and Magnetism at the Royal Greenwich Observatory. With his pilot Henry Coxwell he made a record-breaking ascent to approximately 8,800 metres in 1862, a flight they were rather lucky to survive. Later in the 19th century other scientists were to start to identify the layers in the atmosphere, discovering that only a thin shell – 5 miles or so thick – is suitable for life.

The final chapter is on Salomon Andrée’s attempt to reach the North Pole by balloon; as with so many polar stories it ends in cold, lonely, perhaps avoidable death for Andrée and his two colleagues. Their story was discovered when the photos and journals were recovered from White Island in the Arctic Circle, some 30 years after they died.

Falling Upwards is a rather conversational history. Once again I’m struck by the long periods required for technology to reach fruition. It’s true that, from a technology point of view, heavier-than-air flight is very different from ballooning. But it’s difficult to imagine doing the former without the latter.

Book review: Big data by Viktor Mayer-Schönberger and Kenneth Cukier


This review was first published at ScraperWiki.

We hear a lot about “Big Data” at ScraperWiki. We’ve always been a bit bemused by the tag since it seems to be used indiscriminately. Just what is big data and is there something special I should do with it? Is it even a uniform thing?

I’m giving a workshop on data science next week and one of the topics of interest for the attendees is “Big Data”, so I thought I should investigate in a little more depth what people mean by “Big Data”. Hence I have read Big Data by Viktor Mayer-Schönberger and Kenneth Cukier, subtitled “A Revolution That Will Transform How We Live, Work and Think” – chosen for the large number of reviews it has attracted on Amazon. The subtitle is a guide to their style and exuberance.

Their thesis is that we can define big data, in contrast to earlier “little data”, by three things:

  • It’s big but not necessarily that big: their definition of big is that n = all. That is to say that in some domain you take all of the data you can get hold of. They use as one example a study on bout fixing in sumo wrestling, based on data on 64,000 bouts – which would fit comfortably into a spreadsheet. Other data sets discussed are larger, such as credit card transaction data, mobile telephony data, Google’s search query data…;
  • Big data is messy: it is perhaps incomplete or poorly encoded. We may not have all the data we want from every event, it may be encoded using free text rather than strict categories, and so forth;
  • Working with big data we must discard an enthusiasm for causality and replace it with correlation: we shouldn’t mind too much if our results are just correlations rather than explanations (causation) – see the small sketch after this list;
  • An implicit fourth element is that the analysis you are going to apply to your big data is some form of machine learning.
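
Their correlation point can be illustrated with a toy calculation; the example and the numbers below are entirely my own invention, not taken from the book:

```python
# Correlation without causation: ice-cream sales and sunburn cases both track temperature,
# so they correlate strongly even though neither causes the other. Numbers are invented.
import numpy as np

temperature    = np.array([14, 18, 22, 26, 30, 33])                  # degrees C
icecream_sales = 10 * temperature + np.array([3, -5, 8, -2, 4, -6])  # units sold
sunburn_cases  = 2 * temperature + np.array([-1, 2, 0, 3, -2, 1])    # cases reported

r = np.corrcoef(icecream_sales, sunburn_cases)[0, 1]
print(round(r, 3))  # close to 1: a strong correlation, but no causal link either way
```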

I have issues with each of these “novel” features:

Scientists have long collected datasets and done calculations that are at the limit of (or beyond) their ability to process the data produced. Think protein x-ray crystallography, astronomical data for navigation, the CERN detectors and so on. You can think of the decadal censuses run by countries such as the US and UK as n = all, or the data fed to the early LEO computer to calculate the deliveries required for each of Lyons’ hundreds of teashops. The difference today is that people and companies are able to effortlessly collect a larger quantity of data than ever before. They’re able to collect data without thinking about it first. The idea of n = all is not really a help. The straw man against which it is placed is the selection of a subset of data by sampling.

They say that big data is messy, implying that what went before was not. One of the failings of the book is their disregard for the researchers who have gone before. According to them the new big data analysts are comfortable with messiness and uncertainty, unlike those fuddy-duddy statisticians! Small data is messy too; scientists and statisticians have long dealt with messy and incomplete data.

The third of their features: we must be comfortable with correlation rather than demand causation. There are many circumstances where correlation is fine – such as when Amazon uses my previous browsing and purchase history to suggest new purchases – but the field of machine learning / data mining has long struggled with messiness and causality.

This is not to say nothing has happened in the last 20 or so years regarding data. The ubiquity of computing devices, cheap storage and processing power and the introduction of frameworks like Hadoop are all significant innovations of the last 20 years. But they build on things that went before; they are not a paradigm shift. Labelling something with a term as ill-defined as “big data” provides no helpful insight as to how to deal with it.

The book could be described as the “What Google Did Next…” playbook. It opens with Google’s work on flu trends, passes through Google’s translation work and Google Books project. It includes examples from many other players but one gets the impression that it is Google they really like. They are patronising of Amazon for not making full use of the data they glean from their Kindle ebook ecosystem. They pay somewhat cursory attention to issues of data privacy and consent, and have the unusual idea of creating a cadre of algorithmists who would vet the probity of algorithms and applications in the manner of accountants doing audit or data protection officers.

So what is this book good for? It provides a nice range of examples of data analysis and some interesting stories regarding the use to which it has been put. It gives a fair overview of the value of data analysis and some of the risks it presents. It highlights that the term “big data” is used so broadly that it conveys little meaning. This confusion over what is meant by “Big Data” is reflected on the datascience@Berkeley blog which lists definitions of big data from 30 people in the field (here). Finally, it provides me with sufficient cover to make a supportable claim that I am a “Big Data scientist”!

To my mind, the best definition of big data that I’ve seen is that it is like teenage sex…

  • Everyone talks about it,
  • nobody really knows how to do it,
  • everyone thinks everyone else is doing it,
  • so everyone claims they are doing it too!

Book review: Greenwich Time and the Longitude by Derek Howse

I am being used as a proxy reader! My colleague drj, impressed by my reviewing activities, asked me to read Greenwich Time and the Longitude by Derek Howse, so that he wouldn’t have to.

There was some risk here that Greenwich Time and the Longitude would overlap heavily with Finding Longitude, which I have recently read. They clearly revolve around the same subjects and come from the same place: the National Maritime Museum at Greenwich. Happily the overlap is relatively minor. Following some brief preamble regarding the origins of latitude and longitude for specifying locations, Greenwich Time starts with the founding of the Royal Observatory at Greenwich.

The Observatory was set up under Charles II, who personally ordered its creation in 1675, mindful of the importance of astronomy to navigation. The first Astronomer Royal was John Flamsteed. Accurate measurement of the locations of the moon and stars was a prerequisite for determining longitude at sea, both by lunar-distance and clock-based means. Flamsteed’s first series of measurements was aimed at determining whether the earth rotated at a constant rate, something we take for granted but which wasn’t necessarily the case.

Flamsteed is notorious for jealously guarding the measurements he made, and fell out with Isaac Newton over their early, unauthorised publication, which Newton arranged. A detail I’d previously missed in this episode is that Flamsteed was not very well remunerated for his work: his £100 per annum salary had to cover the purchase of instruments as well as any skilled assistance he required, which goes some way to explaining his possessiveness over the measurements he made.

Greenwich Time covers the development of marine chronometers in the 18th century and the period of the Board of Longitude relatively quickly.

The next step is the distribution of time. Towards the middle of the 19th century three industries were feeling the need for precise timekeeping: telegraphy, the railways and the postal service. This is in addition to the requirements of marine navigators. The first time signal, in 1833, was distributed by the fall of a large painted zinc ball on the top of the Greenwich observatory. Thereafter, strikingly similar balls appeared on observatories around the world.

From 1852 the time signal was distributed by telegraphic means, and ultimately by radio. It was the radio time signal that ultimately brought an end to the publication of astronomical tables for navigation. Britain’s Nautical Almanac, started in 1767, stopped publishing them in 1907 – less than 10 years after the invention of radio.

With the fast distribution of time signals over large distances came the issue of the variation between local time (as defined by the sun and stars) and the standard time. The problem was particularly pressing in the United States which spanned multiple time zones. The culmination of this problem is the International Date Line, which passes through the Pacific. Here the day of the week changes on crossing the line, a problem discovered by the very first circumnavigators (Magellan’s expedition in 1522), identified when they reached travellers who had arrived from the opposite direction and disagreed on the day of the week. I must admit to being a bit impressed by this, I can imagine it’s easy to lose track of the days on such an expedition.

I found the descriptions of congresses to standardise the meridian and time systems across multiple nations in the 1880s rather dull.

One small thing of interest in these discussions: mariners used to measure the end of the day at noon, hence what we would call “Monday morning” a mariner would call “the end of Sunday” – unless he was in harbour, in which case he would use local time! It is from 18th century mariners that Jean-Luc Picard appears to get his catchphrase “Make it so!”; this was the traditional response of a captain to the officer making the noon latitude measurement. The meridian congresses started the process of standardising the treatment of the day by “civilians”, mariners and astronomers.

The book finishes with a discussion of high-precision timekeeping. This is where we discover that Flamsteed wasn’t entirely right when he measured the earth to rotate at a constant rate. The earth’s rotation is showing a long-term decrease, upon which are superimposed irregular and seasonal variations. And the length of the year is slowly changing too. Added to that, the poles drift by about 8 metres or so over time. It’s testament to our abilities that we can measure these imperfections, but somehow sad that they exist.

The book has an appendix with some detail on various measurements.

Not as sumptuous a book as Finding Longitude, it is an interesting read with a different focus. It also has some overlap with The History of Clocks and Watches by Eric Bruton.