Category: Book Reviews

Reviews of books featuring a summary of the book and links to related material

Book review: Engineering Empires by Ben Marsden and Crosbie Smith

Commonly I read biographies of dead white men in the field of science and technology. My next book is related but a bit different: Engineering Empires: A Cultural History of Technology in Nineteenth-Century Britain by Ben Marsden and Crosbie Smith. This is a more academic tome, but rather than focussing on a particular dead white man it collects them together in a broader story. A large part of the book is about steam engines, with chapters on static steam engines, steamships and railways, but alongside these are chapters on telegraphy and on mapping and measurement.

The book starts with a chapter on mapping and measurement; there’s a lot of emphasis here on measuring the earth’s magnetic field. In the eighteenth and nineteenth centuries there was some hope that maps of magnetic field variation might help in determining the longitude. The subject makes a reprise later on in the discussion of steamships. The problem isn’t so much the steam but that steamships were typically iron-hulled, which throws compass measurements awry unless careful precautions are taken. This was important because steamships were promoted for their claimed superior safety over sailing vessels, but risked running aground on the reef of dodgy compass behaviour in inshore waters. The social context for this chapter is the rise of learned societies to promote such work; the British Association for the Advancement of Science is central here, and remains a theme through the book. In earlier centuries the Royal Society had been the more important body.

The next three chapters cover steam power, first in the factory and the mine, then in boats and trains. Although James Watt plays a role in the development of steam power, the discussion here is broader, covering Ericsson’s caloric engine amongst many other things. Two themes of steam are the professionalisation of the steam engineer, and efficiency. “Professionalisation” in the sense that when businessmen invested in these relatively capital-intensive devices they needed confidence in what they were buying into; a chap who appeared to have just knocked something up in his shed didn’t cut it. Students of physics will be painfully aware of thermodynamics and the theoretical efficiency of engines. The 19th century was when this field started, and it was of intense economic importance. For a static engine efficiency matters because it reduces running costs; for steamships it is crucial, since less coal for the same power means you don’t run out of steam mid-ocean!

Switching the emphasis of the book from people to broader themes casts the “heroes” in a new light. It becomes more obvious that Isambard Kingdom Brunel is a bit of an outlier, pushing technology to the limits and sometimes falling off the edge. The Great Eastern was a commercial disaster, only gaining a small redemption when it came to laying transatlantic telegraph cables. Success in this area came with the builders of more modest steamships dedicated to particular tasks such as the transatlantic mail and trips to China.

The book finishes with a chapter on telegraphy; my previous exposure to this was via Lord Kelvin, who had been involved in the first transatlantic electric telegraphs. The precursor to electric telegraphy was optical telegraphy, which had started to be used in France towards the end of the 18th century. Transmission speeds for optical telegraphy were surprisingly high: Paris to Toulon (on the Mediterranean coast), a distance of more than 800km, in 20 minutes. In Britain the telegraph took off when it was linked with the railways, which provided a secure, protected route along which to run the lines. Although the first inklings of electric telegraphy came in the mid-18th century, it didn’t get going until 1840 or so, but by 1880 it was a globe-spanning network crossing the Atlantic and reaching the Far East overland. It’s interesting to see the mention of Julius Reuter and Associated Press back at the beginning of electric telegraphy; they are still important names now.

In both steamships and electric telegraphy Britain led the way because it had an Empire to run, and communication is important when you’re running an empire. Electric telegraphy was picked up quickly on the eastern seaboard of the US as well.

I must admit I was a bit put off by the introductory chapter of Engineering Empires, which seemed a bit heavy-going and spoke in historiographical jargon, but once underway I really enjoyed the book. I don’t know whether this was simply because I got used to the style or because the style changed. As proper historians, Marsden and Smith do not refer to scientists in the earlier years of the 19th century as such; they are “gentlemen of science” and later “men of science”. They sound a bit contemptuous of the “gentlemen of science”. The book is a bit austere and worthy looking. Overall I much prefer this manner of presenting the wider context to a focus on a particular individual.

Book review: Data Science at the Command Line by Jeroen Janssens

 

This review was first published at ScraperWiki.

In the mixed environment of ScraperWiki we make use of a broad variety of tools for data analysis. Data Science at the Command Line by Jeroen Janssens covers tools available at the Linux command line for doing data analysis tasks. The book is divided thematically into chapters on Obtaining, Scrubbing, Modeling, Interpreting Data with “intermezzo” chapters on parameterising shell scripts, using the Drake workflow tool and parallelisation using GNU Parallel.

The original motivation for the book was a desire to move away from purely GUI-based approaches to data analysis (I think he means Excel and the Windows ecosystem). This is a common desire among data analysts: GUIs are very good for a quick look-see, but once you start wanting to repeat an analysis, or even a visualisation, they become more troublesome. And launching Excel just to remove a column of data seems a bit laborious. Windows does have its own command line, PowerShell, but it’s little used by data scientists. This book is about the Linux command line, and the examples are all available on a virtual machine populated with all of the tools discussed in the book.

The command line is at its strongest with the early steps of the data analysis process: getting data from places, carrying out relatively minor acts of tidying and answering the question “does my data look remotely how I expect it to look?”. Janssens introduces the battle-tested tools sed, awk, and cut which we use around the office at ScraperWiki. He also introduces jq (the JSON parser); this is a more recent introduction but it’s great for poking around in JSON files as commonly delivered by web APIs. An addition I hadn’t seen before was csvkit, which provides a suite of tools for processing CSV at the command line; I particularly like the look of csvstat. csvkit is a Python tool and I can imagine using it directly in Python as a library.
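To give a flavour of what these tools look like in practice, here is a minimal sketch using cut, sed and awk on a toy CSV file (the data and file path are invented for illustration; jq and csvkit work in the same spirit but need installing separately):

```shell
# Make a toy CSV to work on (entirely made-up data)
printf 'name,score\nalice,3\nbob,5\ncarol,4\n' > /tmp/scores.csv

# cut: pull out just the second column
cut -d, -f2 /tmp/scores.csv

# sed: drop the header line
sed '1d' /tmp/scores.csv

# awk: sum the score column, skipping the header row
awk -F, 'NR > 1 { total += $2 } END { print total }' /tmp/scores.csv
```

Each tool does one small job on a stream of text, which is exactly why they compose so well in pipelines.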

The style of the book is to provide a stream of practical examples for different command line tools, and to illustrate their application when strung together. I must admit to finding shell commands deeply cryptic in their presentation, with chunks of options effectively looking like someone typing a strong password. Data Science at the Command Line is not an attempt to clear up the mystery of these options, more an indication that you can work great wonders on finding the right incantation.
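As a sketch of what those incantations look like when strung together (on made-up input, not an example from the book), here is the classic pipeline for counting word frequencies in a text file:

```shell
# Made-up input text
printf 'the cat sat on the mat\n' > /tmp/words.txt

# One word per line, lower-cased, counted, most frequent first
tr -cs 'A-Za-z' '\n' < /tmp/words.txt \
  | tr 'A-Z' 'a-z' \
  | sort \
  | uniq -c \
  | sort -rn \
  | head -5
```

Read in isolation each stage is cryptic; read as a pipeline it is a sequence of small, explicable transformations, which is the book’s underlying argument.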

Next up is the Rio tool for using R at the command line, principally to generate plots. I suspect this is about where I part company with Janssens on his quest to use the command line for all the things. Systems like R, ipython and the ipython notebook all offer a decent REPL (read-eval-print loop) which will convert seamlessly into an actual program. I find I use these REPLs for experimentation whilst I build a library of analysis functions for the job at hand. You can write an entire analysis program using the shell, but it doesn’t mean you should!

Weka provides a nice example of smoothing the command line interface to an established package. Weka is a machine learning library written in Java; it is the code behind Data Mining: Practical Machine Learning Tools and Techniques. The edges to be smoothed are that the bare command line for Weka is somewhat involved, since it requires a whole pile of boilerplate. Janssens demonstrates nicely how to smooth them by automatically generating autocompletion hints for the parts of Weka which are accessible from the command line.

The book starts by pitching the command line as a substitute for GUI driven applications which is something I can agree with to at least some degree. It finishes by proposing the command line as a replacement for a conventional programming language with which I can’t agree. My tendency would be to move from the command line to Python fairly rapidly perhaps using ipython or ipython notebook as a stepping stone.

Data Science at the Command Line is definitely worth reading if not following religiously. It’s a showcase for what is possible rather than a reference book as to how exactly to do it.

Book review: Remote Pairing by Joe Kutner

 

This review was first published at ScraperWiki.

Pair programming is an important part of the Agile process, but sometimes the programmers are not physically co-located. At ScraperWiki we have staff who do both scheduled and ad hoc remote working, so methods for working together remotely are important to us. As a result of a casual comment on Twitter, I picked up Remote Pairing by Joe Kutner, which covers just this subject.

Remote Pairing is a short volume, less than 100 pages. It starts with the motivation for pair programming and some presentation of the evidence for its effectiveness. It then goes on to cover some of the more social aspects of pairing – how do you tell your partner you need a “comfort break”? This theme makes a slight reprise in the final chapter, with some case studies of remote pairing. And then it is on to the technical aspects.

The first systems mentioned are straightforward audio/visual packages including Skype and Google Hangouts. I’d not seen ScreenHero previously but it looks like it wouldn’t be an option for ScraperWiki since our developers work primarily in Ubuntu; ScreenHero only supports Windows and OS X currently. We use Skype regularly for customer calls, and Google Hangouts for our daily standup. For pairing we typically use appear.in which provides audio/visual connections and screensharing without the complexities of wrangling Google’s social ecosystem which come into play when we try to use Google Hangouts.

But these packages are not about shared interaction, for this Kutner starts with the vim/tmux combination. This is venerable technology built into Linux systems, or at least easily installable. Vim is the well-known editor, tmux allows a user to access multiple terminal sessions inside one terminal window. The combination allows programmers to work fully collaboratively on code, both partners can type into the same workspace. You might even want to use vim and tmux when you are standing next to one another. The next chapter covers proxy servers and tmate (a fork of tmux) which make the process of sharing a session easier by providing tunnels through the Cloud.
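As a rough sketch of how the tmux part works (the socket path and session name here are invented, and the permissions details vary between setups), two users on the same machine can share one live session through a socket file:

```shell
# User A starts a named session on a shared socket file
tmux -S /tmp/pair new-session -s pairing

# ...and loosens permissions so a partner in the same group can use it
chmod 660 /tmp/pair

# User B attaches to the very same session; both now see, and can
# type into, the same terminal
tmux -S /tmp/pair attach-session -t pairing
```

Remote pairing is then just a matter of the partner reaching the machine first, typically over ssh, before attaching.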

Remote Pairing then goes on to cover interactive screensharing using VNC and NoMachine, which look like pretty portable systems. Along with the chapter on collaborating using plugins for IDEs, this is something we have not used at ScraperWiki. Around the office none of us currently makes use of full-blown IDEs, despite having used them in the past. Several of us use Sublime Text, for which there is a commercial sharing product (Floobits), but we don’t feel sufficiently motivated to try this out.

The chapter on “building a pairing server” seems a bit out of place to me, the content is quite generic. Perhaps because at ScraperWiki we have always written code in the Cloud we take it for granted. The scheme Kutner follows uses vagrant and Puppet to configure servers in the Cloud. This is a fairly effective scheme. We have been using Docker extensively which is a slightly different thing, since a Docker container is not a virtual machine.

Are we doing anything different in the office as a result of this book? Yes – we’ve got a good quality external microphone (a Blue Snowball), and it’s so good I’ve got one for myself. Managing audio is still something that seems a challenge for modern operating systems. To a human it seems obvious that if we’ve plugged in a headset and opened up Google Hangouts then we might want to talk to someone and that we might want to hear their voice too. To a computer this seems unimaginable. I’m looking to try out NoMachine when a suitable occasion arises.

Remote Pairing is a handy guide for those getting started with remote working, and it’s a useful summary for those wanting to see if they are missing any tricks.

Book review: Sextant by David Barrie

The longitude and navigation at sea have been a recurring theme over the last year of my reading. Sextant by David Barrie may be the last in the series. It is subtitled “A Voyage Guided by the Stars and the Men Who Mapped the World’s Oceans”.

Barrie’s book is something of a travelogue; each chapter starts with an extract from his diary on crossing the Atlantic in a small yacht as a (late) teenager in the early seventies. Here he learnt something of celestial navigation. The chapters themselves are a mixture of those on navigational techniques and those on significant voyages. Included in the latter are voyages such as those of Cook and Flinders, Bligh, various French explorers including Bougainville and La Pérouse, Fitzroy’s expeditions in the Beagle, and Shackleton’s expedition to the Antarctic. These are primarily voyages from the second half of the 18th century exploring the Pacific coasts.

Celestial navigation relies on being able to measure the location of various bodies such as the sun, moon, Pole star and other stars. Here “location” means the angle between the body and some other point, such as the horizon. Such measurements can be used to determine latitude and, in a rather more complex manner, longitude. Devices such as the back-staff and cross-staff were in use during the 16th century. During the latter half of the 17th century it became obvious that one method to determine the longitude would be to measure the location of the moon relative to the immobile background of stars, the so-called lunar distance method. To determine the longitude to the precision required by the Longitude Act of 1714 would require those measurements to be made to a high degree of accuracy.

Newton invented a quadrant device somewhat similar to the sextant in the late 17th century, but the design was not published until 1742, after his death; in the meantime Hadley and Thomas Godfrey made independent inventions. A quadrant spans an eighth of a circle and, thanks to its double-reflecting optics, allows measurements up to 90 degrees. A sextant spans a sixth of a circle and allows measurements up to 120 degrees.

The sextant of the title was first made by John Bird in 1757, commissioned by a naval officer who had made the first tests at sea of the lunar distance method for determining the longitude, using Tobias Mayer’s lunar distance tables.

Both quadrant and sextant are more sophisticated devices than their cross- and back-staff precursors. They comprise a graduated angular scale and optics to bring the target object and reference object together, and to prevent the user gazing at the sun with an unprotected eye. The design of the sextant has changed little since its invention. As a scientist who has worked with optics, I can say they look like pieces of modern optical equipment in terms of their materials, finish and mechanisms.

Alongside the sextant the chronometer was the second essential piece of navigational equipment, used to provide the time at a reference location (such as Greenwich) to compare with local time to get the longitude. Chronometers took a while to become a reliable piece of equipment: at the end of the Beagle’s 4-year voyage in 1830 only half of the 22 chronometers were still running well. Shackleton’s expedition in 1914 suffered even more, with the final stretch of their voyage to South Georgia relying on the last working of 24 chronometers. Granted, his ship, the Endurance, had been broken up by ice and they had escaped to Elephant Island in a small, open boat! Note the large number of chronometers taken on these voyages of exploration.

Barrie is of the more subtle persuasion in the interpretation of the history of the chronometer. John Harrison certainly played a huge part in this story, but his chronometers were exquisite, expensive, unique devices*. Larcum Kendall’s K1 chronometer was taken by Cook on his 1769 voyage. Kendall was paid a total of £500 for this chronometer, made as a demonstration that Harrison’s work could be repeated. This cost should be compared with the sum of £2800 which the navy paid for HMS Endeavour, in which the voyage was made!

An amusing aside: when the Ordnance Survey located the Scilly Isles by triangulation in 1797, they discovered their position was 20 miles from that previously assumed, meaning that prior to this measurement the location of Tahiti, fixed by the astronomical observations of Cook’s mission, was better known than that of the Scillies.

The risks the 18th century explorers ran are pretty mind-boggling. Even if the expedition was not lost – such as that of La Pérouse – losing 25% of the crew was not exceptional. It’s reminiscent of the Apollo moon missions: thankfully casualties there were remarkably low, but the crews of the earlier missions had a pretty pragmatic view of the serious risks they were running.

This book is different from the others I have read on marine navigation, more relaxed and conversational but with more detail on the nitty-gritty of the process of marine navigation. Perhaps my next reading in this area will be the accounts of some of the French explorers of the late 18th century.

*In the parlance of modern server management Harrison’s chronometers were pets not cattle!

Book review: Graph Databases by Ian Robinson, Jim Webber and Emil Eifrem

This review was first posted at ScraperWiki.

Regular readers will know I am on a bit of a graph binge at the moment. In computer science and mathematics graphs are collections of nodes joined by edges, they have all sorts of applications including the study of social networks and route finding. Having covered graph theory and visualisation, I now move on to graph databases. I started on this path with Seven Databases in Seven Weeks which introduces the Neo4j graph database.

And so to Graph Databases by Ian Robinson, Jim Webber and Emil Eifrem which, despite its general title, is really a book about Neo4j. This is no big deal since Neo4j is the leading open source graph database.

This is not just random reading, we’re working on an EU project, NewsReader, which makes significant use of RDF – a type of graph-shaped data. We’re also working on a project for a customer which involves traversing a hierarchy of several thousand nodes. This leads to some rather convoluted joining operations when done on a SQL database, a graph database might be better suited to the problem.

The book starts with some definitions, identifying the types of graph database (property graph, hypergraph, RDF). Neo4j uses property graphs where nodes and edges are distinct items and each can hold properties. In contrast RDF graphs are expressed as triples which encompass both edges and nodes. In hypergraphs multiple edges can be expressed as a single item. A second set of definitions are regarding the types of graph processing system: graph databases and graph analytical engines. Neo4j is designed to provide good performance for database-like queries, acting as a backing store for a web application rather than an analytical engine to carry out offline calculations. There’s also an Appendix comparing NoSQL databases which feels like it should be part of the introduction.

A key feature of native graph databases, such as Neo4j, is “index-free adjacency”. The authors don’t seem to define this well early in the book, but later on, whilst discussing the internals of Neo4j, it is all made clear: nodes and edges are stored as fixed-length records with references to a list of the nodes to which they are connected. This means it’s very fast to visit a node and then iterate over all of its attached neighbours. The alternative, index-based lookup, may involve scanning a whole table to find all links to a particular node. It is in traversing networks that Neo4j shines in performance terms compared to SQL.

As Robinson et al emphasise in motivating the use of graph databases: other types of NoSQL database, and SQL databases, are not built fundamentally around the idea of relationships between data, except in quite a constrained sense. For SQL databases there is an overhead to carrying out join queries, which are SQL’s way of introducing relationships. As I hinted earlier, storing hierarchies in SQL databases leads to some nasty looking, slow queries. In practice SQL databases are denormalised for performance reasons to address these cases. Graph databases, on the other hand, are all about relationships.
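To see why hierarchies are awkward in SQL, here is a sketch (using sqlite3 and an invented four-node tree, not an example from the book) of the recursive query needed just to list the descendants of a node; the equivalent in Cypher is a single variable-length MATCH pattern:

```shell
sqlite3 /tmp/tree.db <<'SQL'
-- A tiny parent-pointer hierarchy: node 1 is the root
CREATE TABLE nodes (id INTEGER PRIMARY KEY, parent INTEGER);
INSERT INTO nodes VALUES (1, NULL), (2, 1), (3, 1), (4, 2);

-- A recursive common table expression to walk the subtree under node 1
WITH RECURSIVE subtree(id) AS (
  SELECT 1
  UNION ALL
  SELECT n.id FROM nodes n JOIN subtree s ON n.parent = s.id
)
SELECT id FROM subtree;
SQL
```

Each extra level of hierarchy costs another pass through the join; a graph database follows the stored adjacency directly instead.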

Schema are an important concept in SQL databases; they are used to enforce constraints on a database, i.e. “this thing must be a string” or “this thing must be in this set”. Neo4j describes itself as “schema optional”: the schema functionality seems relatively recently introduced and is not discussed in this book, although it is alluded to. As someone with a small background in SQL, I find the absence of schema in NoSQL databases is always a cause of some anxiety and distress.

A chapter on data modelling and the Cypher query language feels like the heart of the book. People say that Neo4j is “whiteboard friendly” in that if you can draw a relationship structure on a whiteboard then you can implement it in Neo4j without going through the rigmarole of making some normalised schema that doesn’t look like what you’ve drawn. This seems fair up to a point, your whiteboard scribbles do tend to be guided to a degree by what your target system is, and you can go wrong with your data model going from whiteboard to data model, even in Neo4j.

I imagine it is no accident that more recent query languages like Cypher and SPARQL look a bit like SQL. That said, Cypher relies on ASCII art: it MATCHes nodes wrapped in round brackets and edges (relationships) wrapped in square brackets, with arrows (-->) indicating the direction of relationships:

MATCH (node1)-[rel:TYPE]->(node2)
RETURN rel.property

which is pretty un-SQL-like!

Graph Databases goes on to describe implementing an application using Neo4j. The example code in the book is in Java, but there appears, in py2neo, to be a relatively mature Python client. The situation here seems to be in flux, since searching the web brings up references to an older python-embedded library which is now deprecated. The book pre-dates Neo4j 2.0, which introduced some significant changes.

The book finishes with some examples from the real world and some demonstrations of popular graph theory analysis. I liked the real world examples of a social recommendation system, access control and parcel routing. The coverage of graph theory analysis was rather brief, and didn’t explicitly use Cypher, which would have differentiated the presentation from what you find in the usual graph theory textbooks.

Overall I have mixed feelings about this book: the introduction and overview sections are good, as is the part on Neo4j internals. It’s a rather slim volume, feels a bit disjointed and is not up to date with Neo4j 2.0, which has significant new functionality. Perhaps this is not the arena for a dead-tree publication: the Neo4j website has a comprehensive set of reference and tutorial material, and if you are happy with a purely electronic version then you can get Graph Databases for free (here).