Category: Book Reviews

Reviews of books featuring a summary of the book and links to related material

Book review: Learning SPARQL by Bob DuCharme

This review was first published at ScraperWiki.

The NewsReader project on which we are working at ScraperWiki uses semantic web technology and natural language processing to derive meaning from the news. We are building a simple API to give access to the NewsReader datastore, whose native interface is SPARQL. SPARQL is a SQL-like query language used to access data stored in the Resource Description Framework (RDF) format.

I came to Bob DuCharme’s book, Learning SPARQL, through an idle tweet mentioning SPARQL, to which his book’s account replied. The book covers the fundamentals of the semantic web and linked data, the RDF standard, the SPARQL query language, performance, and building applications on SPARQL. It also covers ontologies and inferencing, which are built on top of RDF.

As someone with a slight background in SQL and table-based databases, my previous forays into the semantic web have been fraught, since I typically start by asking what the schema for an RDF store is. The answer to this question is “That’s the wrong question”. The triplestore is the basis of all RDF applications; as the name implies, each row contains a triple (i.e. three columns), traditionally labelled subject, predicate and object. I found it easier to think in terms of resource, property name and property value. To give a concrete example: “David Beckham” is a resource, his height is the name of one of his properties and, according to dbpedia, the value of this property is 1.8288 (metres, we must assume). The resource and property names must be provided in the form of URIs (Uniform Resource Identifiers); the property value can be a URI or a literal such as a string or an integer.
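
To make this concrete, here is a minimal sketch of that triple written in Turtle notation (whose triple syntax SPARQL shares). The dbo:height property name is my assumption about how dbpedia models heights, not something taken from the book:

    # One triple: subject (resource), predicate (property name), object (property value).
    # The @prefix lines abbreviate the full URIs used below.
    @prefix dbpedia: <http://dbpedia.org/resource/> .
    @prefix dbo:     <http://dbpedia.org/ontology/> .

    dbpedia:David_Beckham dbo:height 1.8288 .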

The triples describe a network of nodes (the resources and property values), with the property names being the links between them; with this infrastructure any network can be described by a set of triples. SPARQL is a query language that superficially looks much like SQL. It can extract arbitrary sets of properties from the network using the SELECT command, get a valid sub-network described by a set of triples using the CONSTRUCT command, and answer a Yes/No question using the ASK command. It can also tell you “everything” it knows about a particular URI using the DESCRIBE command, where “everything” is subject to the whim of the implementer. It supports a bunch of other commands which will feel familiar to SQListas, such as LIMIT, OFFSET, FROM, WHERE, UNION, ORDER BY, GROUP BY, and AS. In addition there are BIND, which allows the transformation of variables by functions, and VALUES, which allows you to make little data structures for use within queries. PREFIX provides shortcuts to domains of URIs; for example, http://dbpedia.org/resource/David_Beckham can be written dbpedia:David_Beckham, where dbpedia: is the prefix. SERVICE allows you to make queries across the internet to other SPARQL endpoints. OPTIONAL allows the inclusion of a variable which is not always present.
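
To give a flavour of the language, here is a sketch of a SELECT query that could be pointed at the dbpedia endpoint; the dbo:height property is an assumption on my part, but the shape of the query (PREFIX, SELECT, WHERE, ORDER BY, LIMIT) is standard:

    PREFIX dbo: <http://dbpedia.org/ontology/>

    # Find ten resources with a recorded height, tallest first.
    SELECT ?person ?height
    WHERE {
      ?person dbo:height ?height .
    }
    ORDER BY DESC(?height)
    LIMIT 10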

The core of a SPARQL query is a list of triple patterns which act as selectors for the triples required, and FILTERs which further restrict the results by carrying out calculations on the individual members of each triple. Each triple pattern is terminated with “ .”, or with “ ;”, which indicates that the next pattern is written as a pair whose first element (the subject) is the same as that of the current one. I mention this because Googling for the meaning of punctuation is rarely successful.
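
As an illustration, using a hypothetical ex: vocabulary, these two queries are equivalent; the “;” in the second reuses ?person as the subject of the next pattern, and the FILTER trims the matches:

    PREFIX ex: <http://example.org/>

    # Full stops: each triple pattern written out in full.
    SELECT ?name ?height
    WHERE {
      ?person ex:name   ?name .
      ?person ex:height ?height .
      FILTER(?height > 1.8)
    }

    # The same query with a semicolon: the second pattern
    # shares ?person as its subject.
    SELECT ?name ?height
    WHERE {
      ?person ex:name   ?name ;
              ex:height ?height .
      FILTER(?height > 1.8)
    }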

Whilst reading this book I’ve moved from finding SPARQL queries by search, to writing queries by slightly modifying existing ones, to celebrating writing my own queries, to writing successful queries no longer being a cause for celebration!

There are some features in SPARQL that I haven’t yet used in anger: “paths”, which expand queries beyond selecting a single link between two nodes to longer chains of links, and inferencing. Inferencing allows the creation of virtual triples. For example, if we know that Brian is the patient of a doctor called Jane, and our inferencing engine also contains the information that “patient” is the inverse of “doctor”, then we don’t need to state separately that Jane has a patient called Brian.
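
A sketch of what paths look like, again with a hypothetical ex: vocabulary; the “^” operator follows a link backwards, which is the query-side cousin of the doctor/patient inference above, and “+” follows a chain of one or more links:

    PREFIX ex: <http://example.org/>

    # Stored triple: ex:Brian ex:doctor ex:Jane .
    # The inverse path asks the question from Jane's side,
    # with no need for a separate ex:patient property.
    SELECT ?patient
    WHERE { ex:Jane ^ex:doctor ?patient . }

    # A chained path: Brian's doctor, that doctor's doctor, and so on.
    SELECT ?person
    WHERE { ex:Brian ex:doctor+ ?person . }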

The book ends with a cookbook of queries for exploring a new data source, which is useful but needs to be used with a little caution when querying large databases. Most of the book is oriented around running a SPARQL client against files stored locally. I skipped this step, mainly using YASGUI to query the NewsReader data and the SNORQL interface to dbpedia.

Overall summary: a readable introduction to the semantic web and the SPARQL query language.

If you want to see the fruits of my reading then there are still places available on the NewsReader Hack Day in London on 10th June.

Sign up here!

Book review: The Undercover Economist Strikes Back by Tim Harford

What have you been reading?

Tim Harford’s latest book, The Undercover Economist Strikes Back. It’s about macroeconomics, a sort of blagger’s guide.

Who’s Tim Harford?

Tim Harford is a writer and broadcaster. I’ve also read his books The Undercover Economist, about microeconomics and Adapt, about trial and error in business, government and aid. When I get the time I listen to his radio programme More or Less, about statistics and numbers, and also read his newspaper column.

Hey, what’s going on here? You keep writing down the questions I’m asking!

Yes, this is how Strikes Back is written. At the beginning I found it a bit irritating but, as you can see, I’ve taken to it. It recalls the method of Socratic dialogue and Galileo’s book, Dialogue Concerning the Two Chief World Systems. The advantage is that it structures the text very nicely, and is likely rather SEO friendly.

OK, I’ll play along – tell me more about the book

The book starts by introducing Bill Phillips and his MONIAC machine, which simulated the economy, in macroeconomic terms, using water, pipes, tanks and valves.

That’s a bizarre idea, why didn’t he use a computer?

Phillips was working in the period immediately after the Second World War, and computers weren’t that common. Also, it turns out that solving certain types of equations is more easily done using an analogue computer – such as MONIAC.

Back up a bit, what’s macroeconomics?

Macroeconomics is the study of the large scale features of the economy such as the growth in Gross Domestic Product (GDP), unemployment, inflation and so forth. Contrast this to microeconomics which is about how much you pay for your cup of tea (and other things).

What’s the point of this, didn’t someone describe economics as the “dismal science”?

Yes, they did, but this is treating economics a little unfairly. One of Harford’s pleas in the book is to accept the humanity of economists. They aren’t just interested in numbers, they are interested in making numbers work for people. In particular, unemployment is recognised as a great ill which should be minimised, and the argument is over how this should be achieved rather than whether it should be achieved.

Tell me something about macroeconomics

There is a great divide in economics between the Keynesians and the classical economists. The crux of their divide is how they treat a recession. The former believe that the economy needs stimulus in times of recession, in terms of increased “printing of money”. The latter believe that the economy is a well-oiled machine that is derailed by external shocks; in happy times there are other external shocks that pass off relatively benignly. The classicists are less keen on stimulus, believing that the economy will sort itself out naturally as it responds to the external shocks. These approaches can be captured in toy economies.

Tim Harford cites two examples: a babysitting collective in Washington DC and the economy of a prisoner of war camp. The former is a case of a malfunctioning economy fixed by Keynesian means: the collective worked by parents agreeing to babysit in exchange for vouchers, each representing a period of babysitting. But the number of vouchers in circulation was limited, so parents were reluctant to spend their scarce vouchers on a night out. In the first instance this was resolved by printing more babysitting vouchers: a Keynesian stimulus.

The prisoner of war camp suffered a different problem: towards the end of the war the price of goods went up as the supply of Red Cross parcels dried up. Here there was nothing to be done; the de facto unit of currency was the cigarette, the supply of which was limited and could not be increased.

It’s all about money, isn’t it?

Yes. Harford highlights that money fulfils three different functions. It’s a medium of exchange, to save us from bartering. It’s a store of value: we can keep money under the bed for the future, something we couldn’t do with our goods if they were perishable. And it is a “unit of account”, a way of summing up your net worth over a range of assets.

Is The Undercover Economist Strikes Back worth reading?

I’d say a definite “yes”. We’ve all been watching macroeconomics play out in lively form over the last few years as the recession hit and now recedes. Harford gives a clear, intelligent guide to the issues at hand and to some of the background that is left unstated by politicians and in the news. He points out that our political habits don’t really match our economic needs: ideally we would have abstemious, right-wing governments in the boom years and somewhat more spendthrift left-wing ones during recessions. He ends with a call for more experimentation in macroeconomics, harking back to his book Adapt, and also highlights some shortcomings of macroeconomics as studied today: it does not consider behavioural economics, complexity theory or even banks.

There’s much more in the book than I’ve summarised here.

Book review: Data Science for Business by Provost and Fawcett

This review was first published at ScraperWiki.

Marginalia are an insight into the mind of another reader. This struck me as I read Data Science for Business by Foster Provost and Tom Fawcett. The copy of the book had previously been read by two of my colleagues, one of whom had clearly read the introductory and concluding chapters but not the bit in between, and who would probably not be described as a capitalist “red in tooth and claw”! My own marginalia are generally hidden, since I have an almost religious aversion to defacing a book in any way. I do use Evernote to take notes as I go, though, so for this review I’ll reveal them here.

Data Science for Business is the book I wasn’t going to read, since I’ve already read Machine Learning in Action, Data Mining: Practical Machine Learning Tools and Techniques, and Mining the Social Web. However, I gave in to peer pressure. The pitch for the book is that it is for people who will manage data scientists rather than necessarily be data scientists themselves. The implication is that you’re paying these data scientists to increase your profits, so you had better make sure that’s what they’ll do. You need to be able to understand what data science can and cannot do, ask reasonable questions of data scientists about their models, and understand the environment a data scientist needs to thrive.

The book covers several key algorithms: decision trees, support vector machines, logistic regression, k-Nearest Neighbours and term frequency-inverse document frequency (TF-IDF) but not in any great depth of implementation. To my mind it is surprisingly mathematical in places, given the intended audience of managers rather than scientists.
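
Of these, TF-IDF is the most compactly mathematical; in one common form (there are several variants, and I won’t claim this is the one the book uses) a term t in document d drawn from corpus D scores

    \mathrm{tfidf}(t, d, D) = \mathrm{tf}(t, d) \times \log \frac{|D|}{|\{d' \in D : t \in d'\}|}

so a term is weighted up if it is frequent in the document but rare across the corpus.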

The strengths of the book are its explanations of the algorithms in visual terms, and its focus on the expected value framework for evaluating data mining models. Diversity of explanation is always a good thing; read enough different explanations and one will speak directly to you. It also spends more of its time discussing practical applications than other books on data mining. An example on “churn” runs through the book: “churn” is the loss of customers at the end of a contract, with the telecoms industry used as the illustration.

A couple of nuggets I picked up:

  • You can think of different machine learning algorithms in terms of the decision boundary they produce and how that looks. Overfitting becomes a decision boundary which is disturbingly intricate. Support vector machines put the decision boundary as far away from the classes they separate as possible;
  • You need to make sure that the attributes that you use to build your model will be available at the point of use. That’s to say there is no point in building a model for churn which needs an attribute from a customer which is only available just after they’ve left you. Sounds a bit obvious but I can easily see myself making this mistake;
  • The expected value framework for evaluating models. This combines the probability of an event, such as the result of a promotion campaign, with the value of the outcome (see the sketch after this list). Again churn makes a useful demonstration: if you have the choice between a promotion which succeeds with 10 users with an average spend of £10 per year or one which succeeds with 1 user with an average spend of £200, then you should obviously go with the latter. This reminds me of expectation values in quantum mechanics and in statistical physics.
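
As a sketch of the arithmetic behind that last point (the notation here is mine, not necessarily the book’s): the expected value of an action sums the value of each possible outcome o weighted by its probability,

    EV = \sum_{o} p(o) \, v(o)

so, treating the example above as expected totals, the first promotion is worth 10 × £10 = £100 per year against 1 × £200 = £200 for the second.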

The title of the book, and the related reading, demonstrate that the terms data science, machine learning and data mining are used synonymously. I had a quick look at the popularity of these terms over the last few years; you can see the results in the Google Ngram viewer here. Somewhat to my surprise, data science still lags far behind the other terms despite the recent buzz, though this is perhaps because Google only exposes data up to 2008.

Which book should you read?

All of them!

If you must buy only one then make it Data Mining: it is encyclopaedic, covering the high-level business overview, toy implementations and detailed implementation in some depth. If you want to see the code, then get Machine Learning in Action – but be aware that ultimately you are most likely going to be using someone else’s implementation of the core machine learning algorithms. Mining the Social Web is excellent if you want to see the code and are particularly interested in social media. And read Data Science for Business if you are the intended managerial audience, or will be doing data mining in a commercial environment.

Book review: The Signal and the Noise by Nate Silver

This review was first published at ScraperWiki.

Nate Silver first came to my attention during the 2008 presidential election in the US. He correctly predicted the November result in 49 of 50 states, missing only Indiana, where Barack Obama won by just a single percentage point. This is part of a wider career in prediction: aside from a job at KPMG (which he found rather dull), he has been a professional poker player and has run a baseball statistics website.

His book The Signal and the Noise: The Art and Science of Prediction looks at prediction in a range of fields: economics, disease, chess, baseball, poker, climate, weather, earthquakes and politics. It is both a review and a manifesto; a manifesto for making better prediction.

The opening chapter is on the catastrophic miscalculation of the default rate for collateralized debt obligations (CDO) which led in large part to the recent financial crash. The theme for this chapter is risk and uncertainty. In this instance, risk means the predicted default rate for these financial instruments and uncertainty means the uncertainty in those predictions. For CDOs the problem was that the estimates of risk were way off, and there was no recognition of the uncertainty in those risk estimates.

This theme of unrecognised uncertainty returns for the prediction of macroeconomic indicators such as GDP growth and unemployment. Here again forecasts are made, and practitioners in the field know these estimates are subject to considerable uncertainty, but the market for prediction, and simple human frailty mean that these uncertainties are ignored. I’ve written elsewhere on both the measurement and prediction of GDP. In brief, both sides of the equation are so fraught with uncertainty that you might as well toss a coin to predict GDP!

The psychology of prediction returns with a discussion of political punditry and the idea of “foxes” and “hedgehogs”. In this context “hedgehogs” are people with one Big Idea, who make their predictions based on their Big Idea and are unshiftable from it. In the political arena the “Big Idea” may be ideology, but as a scientist I can see that science can be afflicted in the same way: to a man with a hammer, everything is a nail. “Foxes”, on the other hand, are more eclectic and combine a range of data and ideas to make their predictions; as a result they are more successful.

In some senses, presidential prediction is a catching-chickens-in-a-coop exercise. There is quite a bit of data to use, and your fellow political pundits are typically misled by their own prejudices – they’re “hedgehogs” – so all you, the “fox”, need to do is calculate in a fairly rational manner and you’re there. Silver returns to the idea of diversity in prediction, combining multiple models, or data of multiple types, in his manifesto for better prediction.

There are several chapters on scientific prediction, looking at the predictions of earthquakes, the weather, climate and disease. There is not an overarching theme across these chapters. The point about earthquakes is that precise prediction about where, when and how big appears to be impossible. At short range, the weather is predictable but, beyond a few days, seasonal mean predictions and “the same as today” predictions are as good as the best computer simulations.

Other interesting points raised by this chapter are the ideas of judging predictions by accuracy, honesty and economic value. Honesty in this sense means “is this the best model I can make at this point?”. The interesting thing about weather forecasts in the US is that the National Weather Service makes the results of its simulations available to all. Big national value-adders such as the Weather Channel produce a “wet bias” forecast which systematically overstates the chance of rain when the true chance is low. This is because customers prefer to be told that there is, say, a 20% chance of rain when it actually turns out to be dry, than to be told the actual chance of rain (say 5%) and for it to rain. This “wet bias” gives customers what they want.

Finally, there are predictions on games: poker, baseball, basketball and chess. The benefit of these systems is the large amount of data available; the US in particular has a thriving “gaming statistics” culture. The problems are closed, in the sense that there is a set of known rules for each game. And finally, there is ample opportunity for testing predictions against outcomes. This final factor is important in improving prediction.

In technical terms, Silver’s call to arms is for the Bayesian approach to predictions. With this approach, an attempt is made to incorporate prior beliefs into prediction through the use of an estimate of the prior probability and Bayes’ Theorem. Silver exemplifies this with a discussion of prediction in poker. Essentially, poker is about predicting your opponents’ hands and their subsequent actions based on those cards. To be successful, poker players must do this repeatedly and intuitively. However, luck still plays a large part in poker and only the very best players make a profit long term.
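
For reference, Bayes’ Theorem itself, where P(H) is the prior belief in hypothesis H and P(H|D) is the posterior belief once the data D have been seen:

    P(H \mid D) = \frac{P(D \mid H) \, P(H)}{P(D)}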

The book is structured around a series of interviews with workers in the field, and partly as a result it reads rather nicely – it’s not just a dry exposition of facts but has some human elements too. Though it’s no replacement for a technical reference, it’s worth reading as a survey of prediction, as well as for the human interest stories.

Book review: Darwin’s Ghosts by Rebecca Stott

Charles Darwin’s On the Origin of Species was rushed into print, after a very long gestation, when it became clear that Alfred Russel Wallace was close to publishing the same ideas on evolution. Lacking from the first edition was a historical overview of earlier work pertinent to the ideas of evolution. On the occasion of the publication of the first American edition, Darwin took the opportunity to address this lack. Darwin’s Ghosts: In Search of the First Evolutionists by Rebecca Stott is a modern look at those influences.

After an introductory, motivating chapter, Darwin’s Ghosts works in approximately chronological order. Each chapter introduces a person, or group of people, who did early work in areas of biology which ultimately related to evolution. The first characters introduced are Aristotle and then Jahiz, an Arab scholar working around 860AD. Aristotle brought systematic observation to biology, a seemingly basic concept which was not then universal; he wrote The History of Animals in about 350BC. The theme of systematic observation and experimentation continues through the book. Jahiz extended Aristotle’s ideas to include the interactions of species, or webs; his work is captured in The Book of Living Beings.

Next up was a curiosity about fossils, and the inkling that things had not always been as they are now. Leonardo da Vinci (1452-1519) and, some time later, Bernard Palissy (1510-1590) are used to illustrate this idea. Everyone has heard of da Vinci. Palissy was a Huguenot who lived in the second half of the 16th century; he was a renowned potter, commissioned by Catherine de Medici to build the Tuileries gardens in Paris, but in addition he lectured on the natural sciences.

I must admit to being a bit puzzled at the introduction of Abraham Trembley (1710-1784), who was tutor to the two sons of a prominent Dutch politician. He worked on hydra, a very simple aquatic organism, and his Wikipedia page credits him as one of the first experimental zoologists. He discovered that a whole hydra could be regenerated from parts of a “parent”.

Conceptually, the next developments were in hypothesising a great age for the earth, coupled with the idea that species are not immutable but change over time. Benoît de Maillet (1656-1739) wrote on this, though his work was only published posthumously. Similarly, Robert Chambers (1802-1871) wrote anonymously about evolution in Vestiges of the Natural History of Creation, first published in 1844. Note that this publication date is only 15 years before the first publication of the Origin of Species.

The reason for this reticence on the part of a number of writers is that ideas of mutability and change collide with the major religions; they are “blasphemous”. This became a serious issue over the years spanning 1800. Erasmus Darwin, Charles’s grandfather, was something of an evolutionist but wrote relatively cryptically about it for fear of damaging his career as a doctor. I reviewed Desmond King-Hele’s biography of Erasmus Darwin some time ago. At the time when Erasmus wrote, evolution was considered a radical idea, in both political and religious senses, and the British establishment was feeling vulnerable following the Revolution in France and the earlier American revolution.

I have some sympathy with the idea that religion suppressed evolutionary theory, however it really isn’t as simple as that. The part religion plays is as a support to wider cultural and political movements.

The core point of Darwin’s Ghosts is that a scientist working in the first half of the 19th century was standing on the shoulders of giants, or at least on top of a pile of people whose lowest strata date back a couple of millennia. Not only that, they were not on an isolated pinnacle; around them others were also standing. Culturally we are fond of stories of lone geniuses, but practically they don’t exist.

In fact the theory of evolution is a nice demonstration of this interdependence – Darwin was forced to publish his theory because Wallace had got the gist of it entirely independently; his story is the final chapter in the book. For Wallace, the geographic ranges of species were a key insight in forming the theory, a feature very apparent in southeast Asia, where he was working as a freelance specimen collector.

Once again I am caught out by my Kindle – the book proper ends at 66% of the way through, although Darwin’s original essay is included as an appendix taking us to 70%. Darwin’s words are worth reading, if only for his put-down of Richard Owen for attempting to claim credit for evolutionary theory, despite being one of those who had argued against it previously.

I enjoyed this book; much of my reading is scientific mono-biography, which misses the ensemble nature of science that this book demonstrates.