Category: Book Reviews

Reviews of books featuring a summary of the book and links to related material

Book review: Maskelyne – Astronomer Royal edited by Rebekah Higgitt

Over the years I’ve read a number of books around the Royal Observatory at Greenwich: books about finding the longitude or about people.

Maskelyne – Astronomer Royal edited by Rebekah Higgitt is unusual for me – it’s an edited volume of articles relating to Nevil Maskelyne by a range of authors rather than a single author work. Linking these articles are “Case Studies” written by Higgitt which provide background and coherence.

The collection includes articles on the evolution of Maskelyne’s reputation, Robert Waddington – who travelled with him on his St Helena trip, his role as a manager, the human computers used to calculate the tables in the Nautical Almanac, his interactions with clockmakers, his relationships with savants across Europe, his relationship with Joseph Banks, and his family life.

The Royal Observatory with its Astronomer Royal was founded by Charles II in 1675 with the goal of making astronomical observations to help with maritime navigation. The role gained importance in 1714 with the passing of the Longitude Act, which offered a prize to anyone who could present a practical method of finding the longitude at sea. The Astronomer Royal was one of the appointees to the Board of Longitude who judged applications. The observations and calculations done, and directed, from the Observatory were to form an important part of successful navigation at sea.

The post of Astronomer Royal was first held by John Flamsteed and then Edmund Halley. A persistent problem up to the time of Maskelyne was the publication of the observations of the Astronomers Royal. Flamsteed and Newton notoriously fell out over such measurements. It seems very odd to modern eyes, but the early Astronomers Royal essentially saw their observations as personal property, removed by executors on their death and thus lost to the nation. Furthermore, in Maskelyne’s time the Royal Observatory was not considered the pre-eminent observatory in Britain in terms of the quality of its instruments or observations.

Maskelyne’s appointment was to address these problems. He made the observations of the Observatory available to the Royal Society (the Visitors of the Observatory) on an annual basis and pushed for the publication of earlier observations. He made the making of observations a much more systematic affair, and he had a keen interest in the quality of the instruments used. Furthermore, he started the publication of the Nautical Almanac which provided sailors with a relatively quick method for calculating their longitude using the lunar distance method. He was keenly aware of the importance of providing accurate, reliable observational and calculated results.

He was appointed Astronomer Royal in 1765, not long after a trip to St Helena to measure the first of a pair of Venus transits in 1761. To this trip he added a range of other activities, including testing the lunar distance method for finding longitude, the “going” of precision clocks over an extended period, and Harrison’s H4 chronometer. In later years he was instrumental in coordinating a number of further scientific expeditions, ensuring uniform instrumentation, providing detailed instructions for observers and giving voyages multiple scientific targets.

H4 is a primary reason for Maskelyne’s “notoriety”, in large part because of Dava Sobel’s book on finding the longitude, in which he is portrayed as the villain against the heroic clockmaker, John Harrison. By 1761 John Harrison had been working on the longitude problem by means of clocks for many years. Sobel presents Maskelyne as a biased judge, favouring the lunar distance method for determining longitude and acting in his own interests against Harrison.

Professional historians of science have long felt that Maskelyne was hard done by in Sobel’s book. This book is not a rebuttal of Sobel’s but is written with the intention of bringing more information regarding Maskelyne to a general readership. It was also stimulated by the availability of new material regarding Maskelyne.

Much of the book covers Maskelyne’s personal interactions with a range of people and groups. It details his exchanges with the “computers” who did the lengthy calculations which went into the Nautical Almanac, and his interactions with a whole range of clockmakers, whom he often recommended to others looking for precision timepieces for astronomical purposes. It also discusses his relationships with other savants across Europe and the Royal Society. His relationship with Joseph Banks garners a whole chapter. A proposition in one chapter is that such personal, rather than institutional, relationships were key to 18th century science; I can’t help feeling this is still the case.

The theme of these articles is that Maskelyne was a considerate and competent man, going out of his way to help and support those he worked with. To my mind his hallmark is bringing professionalism to the business of astronomy.

In common with Finding Longitude this book is beautifully produced, and despite the multitude of authors it hangs together nicely. It’s not really a biography of Maskelyne but perhaps better for that.

Book review: Linked by Albert-László Barabási

This review was first posted at ScraperWiki.
I am on a bit of a graph theory binge. It started with an attempt to learn about Gephi, the graph visualisation software, which developed into reading a proper grown-up book on graph theory. I then learnt a little more about practicalities on reading Seven Databases in Seven Weeks, which included a section on Neo4J – a graph database. Now I move on to Linked by Albert-László Barabási, a popular account of the rise of the analysis of complex networks in the late nineties. A short subtitle used on earlier editions was “The New Science of Networks”. The rather lengthy subtitle on this edition is “How Everything Is Connected to Everything Else and What It Means for Business, Science, and Everyday Life”.

In mathematical terms a graph is an abstract collection of nodes linked by edges. My social network is a graph comprising people, the nodes, and their interactions such as friendships, which are the edges. The internet is a graph, with routers at the nodes and the links between them as edges. “Network” is a less formal term often used synonymously with graph; “complex” is more a matter of taste, but it implies a large network with a structure which cannot be trivially described – “each node has four edges” does not describe a complex network.
The models used for the complex networks discussed in this book are the descendants of the random networks first constructed by Erdős and Rényi. They imagined a simple scheme whereby nodes in a network were randomly connected with some fixed probability. This generates a particular type of random network which does not replicate real-world networks such as social networks or the internet. The innovations introduced by Barabási and others are in the measurement of real world networks and new methods of construction which produce small-world and scale-free network models. Small-world networks are characterised by clusters of tightly interconnected nodes with a few links between those clusters; they describe social networks. Scale-free networks contain nodes with any number of connections, but nodes with larger numbers of connections are less common than those with a small number. For example, on the web there are many web pages (nodes) with a few links (edges), but there exist some web pages with thousands and thousands of links, and all values in between.
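The Erdős–Rényi scheme is simple enough to sketch in a few lines of Python. This is my own rough illustration, not from the book; the point is that the resulting degrees cluster tightly around the mean, with none of the heavily connected hubs a scale-free network would show:

```python
import random
from collections import Counter

def erdos_renyi(n, p, seed=0):
    """Connect each pair of n nodes independently with probability p."""
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < p]

def degree_counts(n, edges):
    """Count how many edges touch each node."""
    degrees = Counter()
    for i, j in edges:
        degrees[i] += 1
        degrees[j] += 1
    return [degrees[i] for i in range(n)]

degrees = degree_counts(1000, erdos_renyi(1000, 0.01))
# Degrees bunch around the mean of roughly n * p = 10 -- no hubs,
# unlike the scale-free networks Barabási describes.
print(min(degrees), sum(degrees) / len(degrees), max(degrees))
```

Running this, no node gets more than a few dozen connections, whereas in a scale-free network of the same size a handful of nodes would have hundreds.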
I’ve long been aware of Barabási’s work, dating back to my time as an academic where I worked in the area of soft condensed matter. The study of complex networks was becoming a thing at the time, and of all the areas of physics soft condensed matter was closest to it. Barabási’s work was one of the sparks that set the area going. The connection with physics is around so-called power laws which are found in a wide range of physical systems. The networks that Barabási is so interested in show power law behaviour in the number of connections a node has. This has implications for a wide range of properties of the system such as robustness to the removal of nodes, transport properties and so forth. The book starts with some historical vignettes on the origins of graph theory, with Euler and the bridges of Königsberg problem. It then goes on to discuss various complex networks with some coverage of the origins of their study and the work that Barabási has done in the area. As such it is a pretty personal review. Barabási also recounts some of the history of “six degrees of separation”, the idea that everyone is linked to everyone else by only six links. This idea had its traceable origins back in the early years of the 20th century in Budapest.
Graph theory has been around for a long while, and the study of random networks for 50 years or so. Why the sudden surge in interest? It boils down to a couple of factors. The first is the internet, which provides a complex network of physical connections on which a further complex network sits in the form of the web. The graph structure of this infrastructure is relatively easy to explore using automatic tools; you can build a map of millions of nodes with relative ease compared to networks in the “real” world. Furthermore, this complex network infrastructure and the rise of automated experiments have improved our ability to explore and disseminate information on physical networks: the network of chemical interactions in a cell, the network of actors in movies, our social interactions, the spread of disease and so forth. In the past getting such detailed information on large networks was tiresome, and the distribution mechanisms for such data slow and inconvenient.
For a book written a few short years ago, Linked can feel strangely dated. It discusses Apple’s failure in the handheld computing market with the Newton palm top device, and the success of Palm with their subsequent range. Names of long forgotten internet companies float by, although even at the time of writing Google was beginning its dominance.
If you are new to graph theory and want an unchallenging introduction then Linked is a good place to start. It’s readable and has a whole load of interesting examples of scale free networks in the wild. Whilst not the whole of graph theory, this is where interesting new things are happening.

Book review: Seven databases in Seven Weeks by Eric Redmond and Jim R. Wilson


This review was first published at ScraperWiki.

I came to databases a little late in life, as a physical scientist I didn’t have much call for them. Then a few years ago I discovered the wonders of relational databases and the power of SQL. The ScraperWiki platform strongly encourages you to save data to SQLite databases to integrate with its tools.
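As a rough illustration of that relational pattern, here is a minimal sketch using Python’s built-in sqlite3 module (the table and data are invented for the example):

```python
import sqlite3

# Rows live in a table with a fixed schema, queried with SQL.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE books (title TEXT, author TEXT)")
con.executemany(
    "INSERT INTO books VALUES (?, ?)",
    [("Linked", "Barabasi"), ("Pompeii", "Beard")],
)
# Parameter substitution with "?" keeps the query safe and reusable.
rows = con.execute(
    "SELECT title FROM books WHERE author = ?", ("Beard",)
).fetchall()
```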

There is life beyond SQL databases, much of which has evolved in the last few years. I wanted to learn more, and a plea on Twitter quickly brought me a recommendation for Seven Databases in Seven Weeks by Eric Redmond and Jim R. Wilson.

The book covers the key classes of database starting with relational databases in the form of PostgreSQL. It then goes on to look at six further databases in the so-called NoSQL family – all relatively new compared to venerable relational databases. The six other databases fall into several classes: Riak and Redis are key-value stores, CouchDB and MongoDB are document databases, HBase is a columnar database and Neo4J is a graph database.

Relational databases are characterised by storage schemas involving multiple interlinked tables containing rows and columns; this layout is designed to minimise the repetition of data and to provide maximum query-ability. Key-value stores only store a key and a value in the manner of a dictionary, but the “value” may be of a complex type. A value can be returned very fast given a key – this is the core strength of key-value stores. The document stores MongoDB and CouchDB store JSON “documents” rather than rows. These documents can store information in nested hierarchies which don’t all need to have the same structure; this allows maximum flexibility in the type of data to be stored, but at the cost of ease of querying.
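The contrast between the two models can be sketched with plain Python dictionaries (my own toy illustration, not from the book; the keys and field names are invented):

```python
# A key-value store behaves like a dictionary: one key, one (possibly complex) value.
kv_store = {}
kv_store["user:42"] = {"name": "Ada", "visits": 17}  # the value can be a structure
fast_lookup = kv_store["user:42"]  # retrieval by key is the fast path

# A document store holds JSON-like documents that need not share a schema.
documents = [
    {"_id": 1, "title": "Linked", "author": "Barabasi"},
    # An extra field is fine -- documents don't all have the same shape.
    {"_id": 2, "title": "Pompeii", "author": "Beard", "tags": ["history", "rome"]},
]
# The flexibility costs you at query time: asking about anything other than
# the key means scanning (or separately indexing) the documents.
with_tags = [d for d in documents if "tags" in d]
```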

HBase fits into the Hadoop ecosystem; the language used to describe it looks superficially like that used to describe tables in a relational database, but this is a bit misleading. HBase is designed to work with massive quantities of data but not necessarily to give the full querying flexibility of SQL. Neo4J is designed to store graph data – collections of nodes and edges – and comes with a query language particularly suited to querying (or walking) data so arranged. This seems very similar to triplestores and the SPARQL query language used in semantic web technologies.
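To show what “walking” a graph means, here is a small breadth-first walk over an adjacency dictionary – the shape of data a graph database is built to store. This is my own sketch with invented names, not Neo4J’s query language:

```python
from collections import deque

# A toy social graph: each person maps to the people they follow.
follows = {
    "alice": ["bob", "carol"],
    "bob": ["carol"],
    "carol": ["dave"],
    "dave": [],
}

def within_hops(graph, start, max_hops):
    """Walk outwards from start, recording each node's distance in hops."""
    seen = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if seen[node] == max_hops:
            continue  # don't walk further than max_hops edges
        for neighbour in graph[node]:
            if neighbour not in seen:
                seen[neighbour] = seen[node] + 1
                queue.append(neighbour)
    return seen

reachable = within_hops(follows, "alice", 2)
# -> {'alice': 0, 'bob': 1, 'carol': 1, 'dave': 2}
```

In a relational database the same question would need repeated self-joins; a graph store makes this kind of traversal the natural operation.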

Relational databases are designed to give you ACID (Atomicity, Consistency, Isolation, Durability); essentially you shouldn’t be able to introduce inconsistent changes to the database, and it should always give you the same answer to the same query. The NoSQL databases described here have a subtly different core goal. Most of them are designed to work on the web and address CAP (Consistency, Availability, Partition tolerance); indeed several of them offer native REST interfaces over HTTP, which means they are very straightforward to integrate into web applications. CAP refers to the ability to return a consistent answer, from any instance of the database, in the face of network (or partition) problems – these databases may be stored in multiple locations on the web. A famous theorem contends that you can have any two of consistency, availability and partition tolerance at any one time, but not all three together.

NoSQL databases are variously designed to scale out across multiple machines, by replication and by sharding. Replication means copying the same database to multiple places, providing greater capacity to serve requests even with network connectivity problems. “Sharding” provides the ability to store more data by fragmenting it, such that some items are stored on one server and some on another.
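A common way to decide which server holds which item is to hash the key – a rough sketch of the idea (the server names are invented):

```python
import hashlib

SHARDS = ["server-a", "server-b", "server-c"]  # hypothetical servers

def shard_for(key):
    """Pick a shard by hashing the key, spreading data evenly across servers."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return SHARDS[h % len(SHARDS)]
```

The same key always hashes to the same shard, so any client can work out where an item lives without asking a central coordinator. (Real systems use consistent hashing so that adding a server doesn’t reshuffle everything, but the principle is the same.)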

I’m not a SQL expert by any means but it’s telling that I learnt a huge amount about PostgreSQL in the forty or so pages on the database. I think this is because the focus was not on the SQL query language but rather on the infrastructure that PostgreSQL provides. For example, it discusses triggers, rules, plugins and specialised indexing for text search. I assume this style of coverage applies to the other databases. This book is not about the nitty-gritty of querying particular database types but rather about the different database systems.

The NoSQL databases generally support MapReduce-style queries. This is a scheme most closely associated with Big Data and the Hadoop ecosystem, but in this instance it is more a framework for doing queries which may be executed across a cluster of computers.
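The shape of a MapReduce query is easy to sketch in miniature – here a word count, with the map and reduce steps as separate functions (my own single-machine illustration; in a real cluster each phase runs in parallel on different machines):

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit (word, 1) pairs; each document could be handled by a different machine."""
    for doc in documents:
        for word in doc.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    """Reduce: combine all the counts emitted for each word."""
    totals = defaultdict(int)
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

counts = reduce_phase(map_phase(["the cat sat", "the cat ran"]))
# -> {'the': 2, 'cat': 2, 'sat': 1, 'ran': 1}
```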

I’m on a bit of a graph theory binge at the moment so Neo4J was the most interesting to me.

As an older data scientist I have a certain fondness for things that have been around for a while, like FORTRAN and SQL databases, I’ve looked with some disdain at these newfangled NoSQL things. To a degree this book has converted me, at least to the point where I look at ScraperWiki projects and think – “It might be better to use a * database for this piece of work”.

This is an excellent book which was pitched at just the right level for my purposes; I’ll be looking for more Pragmatic Programmers books in future.

Book review: Pompeii by Mary Beard

For a change I have been reading about Roman history, in the form of Pompeii: The Life of a Roman Town by Mary Beard.

Mary Beard is a Cambridge classicist. I think it helps having seen her on TV, jabbing her finger at a piece of Roman graffiti, explaining what it meant and why it was important with obvious enthusiasm. For me it gave the book a personality.

I imagine I am not unusual in gaining my knowledge of Roman culture via some poorly remembered caricature presented in pre-16 history classes at school and films including the Life of Brian, Gladiator and Up Pompeii.

Pompeii is an ancient Italian town which was covered in a 4-6 metre blanket of ash by an eruption of nearby Vesuvius in 79 AD. Beneath the ash the town was relatively undamaged. It was rediscovered in 1599 but excavations only started in the mid 18th century. These revealed a very well-preserved town including much structure, artwork and the remains of the residents. The bodies of the fallen left voids in the ash which were reconstructed by filling them with plaster.

The book starts with a salutary reminder that Pompeii wasn’t a town frozen in normal times but one in extremis as it succumbed to a volcanic eruption. We can’t assume that the groups of bodies found, or the placement of artefacts, represent how they might have been found in normal daily life.

There are chapters on the history of the city, the streets, homes, painting, occupations, administration, various bodily pleasures (food, wine, sex and bathing), entertainment (theatre and gladiators) and temples.

I’ve tended to think of the Romans as a homogeneous blob who occupied a chunk of time and space. But this isn’t the case; the pre-Roman history of the town features writing in the Oscan language. The Greek writer Strabo, working in the first century BC, wrote about a sequence of inhabitants: Oscans, Etruscans, Pelasgians and then Samnites – who also spoke Oscan.

Much of what we know of Pompeii seems to stem from the graffiti found all about the remains. It would be nice to learn a bit more about this evidence since it seems important, and clearly something different is going on from what we find in modern homes and cities. If I look around homes I know today then none feature graffiti, granted there is much writing on paper but not on the walls.

From the depths of my memory I recall the naming of various rooms in the Roman bath house but it turns out these names may not have been in common usage amongst the Romans. Furthermore, the regimented progression from hottest to coldest bath may also be somewhat fanciful. Something I also didn’t appreciate was that the meanings of some words in ancient Latin are not known, or are uncertain. It’s obvious in retrospect that this might be the case but caveats on such things are rarely heard.

Beard emphasises that there has been a degree of “over-assumption” in the characterisation of the various buildings in Pompeii. On some reckonings there are huge numbers of bars and brothels: anything with a counter and some storage jars gets labelled a bar, and anything with phallic imagery gets labelled a brothel – the Pompeiians were very fond of phallic imagery. A more conservative treatment brings these numbers down enormously.

I am still mystified by garum, the fermented fish sauce apparently loved by many. It features moderately in the book since the house of a local manufacturer is one of the better preserved ones, and one which features very explicit links to his trade. It sounds absolutely repulsive.

The degree of preservation in Pompeii is impressive, the scene that struck me most vividly was in The House of Painters at Work. In this case the modern label for the house describes exactly what was going on, other houses are labelled with the names of dignitaries present when a house was uncovered, or after key objects found in the house. It is not known what the inhabitants called the houses, or even the streets. Deliveries seemed to go by proximity to prominent buildings.

I enjoyed Pompeii, the style is readable and it goes to some trouble to explain the uncertainty and subtlety in interpreting ancient remains.

Once again I regret buying a non-fiction book in ebook form, the book has many illustrations including a set of colour plates and I still find it clumsy looking at them in more detail or flicking backwards and forwards in an ereader.

Book review: Graph Theory and Complex Networks by Maarten van Steen


This review was first published at ScraperWiki.

My last read, on the Gephi graph visualisation package, was a little disappointing but gave me an enthusiasm for Graph Theory. So I picked up one of the books that it recommended: Graph Theory and Complex Networks: An Introduction by Maarten van Steen to learn more. In this context a graph is a collection of vertices connected by edges, the edges may be directed or undirected. The road network is an example of a graph; the junctions between roads are vertices, the edges are roads and a one way street is a directed edge – two-way streets are undirected.

Why study graph theory?

Graph theory underpins a bunch of things like route finding, timetabling, map colouring, communications routing, sol-gel transitions, ecologies, parsing mathematical expressions and so forth. It’s been a staple of Computer Science undergraduate courses for a while, and more recently there’s been something of a resurgence in the field, with systems on the web providing huge quantities of graph-shaped data, both in terms of the underlying hardware networks and the activities of people – the social networks.

Sometimes the links between graph theory and an application are not so obvious. For example, project planning can be understood in terms of graph theory. A task can depend on another task – the tasks being two vertices in a graph. The edge between such vertices is directed, from one to the other, indicating dependency. To give a trivial example: you need a chicken to lay an egg. As a whole, a graph of tasks cannot contain loops (or cycles), since this would imply that a task depended on a task that could only be completed after it itself had been completed. To return to my example: if you need an egg in order to get a chicken to lay an egg then you’re in trouble! Generally, networks of tasks should be directed acyclic graphs (DAGs), i.e. they should not contain cycles.
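The chicken-and-egg check can be sketched as a depth-first walk over the task graph: order the tasks so dependencies come first, and complain if a cycle turns up. This is my own rough Python illustration, not from the book:

```python
def topological_order(tasks):
    """Return tasks ordered so each comes after its dependencies.

    tasks maps each task to the list of tasks it depends on.
    Raises ValueError if the graph contains a cycle.
    """
    order = []
    state = {}  # missing = unvisited, False = in progress, True = done

    def visit(task):
        if state.get(task) is True:
            return
        if state.get(task) is False:
            # We've walked back into a task we're still processing: a cycle.
            raise ValueError("cycle involving " + repr(task))
        state[task] = False
        for dep in tasks.get(task, []):
            visit(dep)
        state[task] = True
        order.append(task)

    for task in tasks:
        visit(task)
    return order

plan = topological_order({"egg": ["chicken"], "chicken": []})
# -> ['chicken', 'egg']; adding "chicken": ["egg"] would raise ValueError.
```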

The book’s target audience is 1st or 2nd year undergraduates with a moderate background in mathematics; it was developed for Computer Science undergraduates. The style is quite mathematical but fairly friendly. The author’s intention is to introduce the undergraduate to mathematical formalism. I found this useful, since mathematical symbols are difficult to search for, and shorthands such as operator overloading even more so. This said, it is still an undergraduate text rather than a popular account – don’t expect an easy read or pretty pictures.

The book divides into three chunks. The first provides the basic language for describing graphs, both in words and equations. The second part covers theorems arising from some of the basic definitions, including the ideas of “walks” – traversals of a graph which take in all vertices – and “tours”, which take in all edges. This includes long-standing problems such as Dijkstra’s algorithm for route finding and the travelling salesman problem. Also included in this section are “trees” – networks with no cycles – where a cycle is a closed walk which visits vertices just once.
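As an illustration of the route-finding idea, here is a minimal Dijkstra sketch in Python (my own example with an invented road network, not the book’s presentation):

```python
import heapq

def dijkstra(graph, start):
    """Shortest distances from start, for a graph given as {node: {neighbour: weight}}."""
    dist = {start: 0}
    queue = [(0, start)]  # priority queue of (distance so far, node)
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue  # stale entry: we already found a shorter route here
        for neighbour, weight in graph.get(node, {}).items():
            nd = d + weight
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(queue, (nd, neighbour))
    return dist

# Junctions A, B, C with weighted one-way roads between them.
roads = {"A": {"B": 1, "C": 4}, "B": {"C": 2}, "C": {}}
shortest = dijkstra(roads, "A")
# -> {'A': 0, 'B': 1, 'C': 3}: going via B beats the direct A-to-C road.
```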

The third section covers the analysis of graphs. This starts with metrics for measuring graphs, such as vertex degree distributions, distance statistics and clustering measures. I found this section rather brief, and poorly illustrated. However, it is followed by an introduction to various classes of complex networks, including the original random graphs, and small-world and scale-free networks. What struck me about complex networks is that they are each complex in their own way. Random, small-world and scale-free networks are all methods for constructing a network in order to try to represent a known real-world situation. Small-world networks arise from one of Stanley Milgram’s experiments: sending post across the US via social networks. The key feature is that there are clusters of people who know each other, but these clusters are linked by the odd “longer range” contact.

The book finishes with some real world examples relating to the world wide web, peer-to-peer sharing algorithms and social networks. What struck me in social networks is that the vertices (people!) you identify as important can depend quite sensitively on the metric you use to measure importance.

I picked up Graph Theory after I’d been working with Gephi, wanting to learn more about the things that Gephi will measure for me. It serves that purpose pretty well. In addition I have a better feel for situations where the answer is “graph theory”. Furthermore, Gephi has a bunch of network generators to create random, small-world and scale-free networks so that you can try out what you’ve learned.