Tag: data science

Book review: An Introduction to Geographical Information Systems by Ian Heywood et al

I've been doing quite a lot of work around Geographical Information Systems recently, so I thought I should get some background understanding to avoid repeating the mistakes of others. I turned to An Introduction to Geographical Information Systems by Ian Heywood, Sarah Cornelius and Steve Carver, now in its fourth edition.

This is an undergraduate text, and the number of editions suggests it is a good one. The first edition of Introduction was published in 1998, and this shows in the content: much of the material is rooted in that time, with excursions into more recent matters. There is mention of CRT displays and Personal Digital Assistants (PDAs). This edition was published in 2011, and obviously quite a lot of new material has been added since the first edition, but the original material clearly forms the core of the book.

I quite deliberately chose a book that didn't mention the latest shiny technologies I am currently working with (QGIS, OpenLayers 3, spatial extensions in MariaDB), since that sort of stuff ages fast and the best, i.e. most up-to-date, information is on the web.

A GIS allows you to store spatially related data, build maps from layers of different content, and combine this spatial data with attributes stored in databases.
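As a minimal illustration of the idea (mine, not the book's), here is how a point layer and a polygon layer might be combined with attribute data in Python using the shapely library; the coordinates and attributes are invented:

from shapely.geometry import Point, Polygon

# A "layer" of polygons with associated attribute data (invented values)
districts = {
    "North": {"geometry": Polygon([(0, 0), (0, 10), (10, 10), (10, 0)]), "population": 52000},
    "South": {"geometry": Polygon([(0, -10), (0, 0), (10, 0), (10, -10)]), "population": 31000},
}

# A point feature from another layer, e.g. a proposed store location
store = Point(4, 6)

# Combine the spatial data with the attributes: which district contains the store?
for name, district in districts.items():
    if district["geometry"].contains(store):
        print(name, district["population"])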

Early users were local and national governments and their agencies, which must manage large amounts of land. These were followed by utility companies, who had geographically distributed infrastructure to manage. More recently retail companies have become interested in GIS as a way of optimising store location and marketing. The application of GIS is frequently in the area of "decision support", along the lines of "where should I site my…?", although "how should I get around these locations?" is also a frequent question. With GPS for route finding, arguably all of us carry around a GIS, and they are certainly important to logistics companies.

From the later stages of the book we learn how Geographic Information Systems were born in the mid to late 1960s, became of increasing academic interest through the 1970s, started to see wider uptake in the 1980s and became a commodity in the 1990s. With the advent of Google Maps and navigation apps on mobile phones, GIS is now ubiquitous.

I find it striking that the Douglas-Peucker algorithm for line simplification, born in the early seventies, has only recently been implemented in my favoured spatially enabled database (MariaDB/MySQL). These spatial extensions in SQL appear to have grown out of a 1999 standard from the OGC (Open Geospatial Consortium). Looking at who has implemented the standards is a good way of getting an overview of the GIS market.
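The same algorithm is easy to experiment with in Python: shapely's simplify method uses Douglas-Peucker line simplification (the coordinates below are invented for the sketch):

from shapely.geometry import LineString

# A wiggly line (invented coordinates)
line = LineString([(0, 0), (1, 0.1), (2, -0.1), (3, 0.5), (4, 0.0), (5, 0.1)])

# Douglas-Peucker simplification: points closer than `tolerance` to the
# simplified line are dropped
simplified = line.simplify(tolerance=0.3, preserve_topology=False)
print(list(simplified.coords))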

The book is UK-centric but not overwhelmingly so: we learn about the Ordnance Survey mapping products and the UK postcode system, and the example of finding a site for a nuclear waste repository in the UK is a recurring theme.

The issues in GIS have not really changed a great deal: projection and coordinate transforms are still important, and still a source of angst (I have experienced this angst personally!). Data quality issues remain, although perhaps the source is no longer the process of manual digitisation from paper but inconsistency in labelling and GPS errors.
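To give a flavour of the coordinate transform problem, here is a sketch in Python using the pyproj library (my choice, not the book's) to convert a longitude/latitude pair to the British National Grid; the point is invented:

from pyproj import Transformer

# WGS84 longitude/latitude to British National Grid (EPSG:27700)
transformer = Transformer.from_crs("EPSG:4326", "EPSG:27700", always_xy=True)

# An invented point, roughly in the north-west of England
easting, northing = transformer.transform(-2.89, 53.19)
print(easting, northing)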

One of the challenges not discussed in Introduction is the licensing of geographic data. This has recently been in the news, with the British government spending £5 million to rebuild an open address database for the UK, having sold off the existing one with the Royal Mail in 2013 (£5 million is likely just the start). UN-OCHA faces similar issues in coordinating aid in disaster areas; the UK is fairly open in making details of its administrative boundaries available electronically, but this is not the case globally.

I have made some use of conventional GIS software in the form of QGIS which, although powerful, flexible and capable, I find slow and ugly. I find it really handy for a quick look-see at data in common geospatial formats. For more in-depth analysis and visualisation I use a combination of spatial extensions in SQL, Python and browser technology.

I found the case studies the most useful part of this book; these are from a wide range of authors and describe real-life examples of the ideas discussed in the main text. The main text uses the hypothetical ski resort of Happy Valley as a long-running example. As befits a proper undergraduate introduction, there are lots of references to further reading.

Despite its sometimes dated feel, An Introduction to Geographical Information Systems does exactly what it says on the tin.

Book review: Artificial intelligence for Humans: Volume 3 Deep Learning and Neural Networks by Jeff Heaton

Deep learning and neural networks are receiving more attention these days; you may have seen the nightmarish images generated using this technology by Google Research. I picked up Artificial Intelligence for Humans: Volume 3 Deep Learning and Neural Networks by Jeff Heaton to find out more, since the topic fits in with my interests in data science and machine learning. There doesn't seem to be much in the way of accessible, book-length treatments of this relatively new topic; most other offerings on Amazon have publication dates in the future.

It turns out that Artificial Intelligence for Humans is the result of a Kickstarter campaign; so far the author has funded three volumes on artificial intelligence by this route, two of them for around $18,000 and one for around $10,000. I paid £16 for the physical book, which seems like a reasonable price. I think it is a pretty well polished product; it doesn't quite reach the editing and production levels of a publisher like O'Reilly, but it is at least as good as other technical publishers. The accompanying code examples and web site are really nicely done.

Neural networks have been around for a long time, since the 1940s, and see periodic outbreaks of interest and enthusiasm. They are modelled, loosely, on the workings of biological brains, with "neurons" connected together by linkages of different weights which can be trained to perform tasks such as image recognition, classification and regression. The "neurons" are grouped into layers: an input layer, where data enters, feeds into potentially multiple successive "hidden" layers, finally leading to an output layer of neurons where results are read off. The output of a neuron is calculated by summing its inputs multiplied by the weights of those inputs and feeding the result through an "activation function". The training process is used to optimise the weights, and may also evolve the structure of the network.
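As a minimal sketch of that calculation (mine, not code from the book), a single neuron's output in Python with numpy might look like this, with invented inputs and weights:

import numpy as np

def sigmoid(x):
    # A common activation function, squashing values into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

# Invented inputs and weights for a single neuron with three inputs plus a bias
inputs = np.array([0.5, -1.2, 3.0])
weights = np.array([0.4, 0.1, -0.7])
bias = 0.2

# Sum the weighted inputs and feed the result through the activation function
output = sigmoid(np.dot(inputs, weights) + bias)
print(output)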

I remember playing with neural networks in the 1980s, typing a program which recognised handwritten digits into my Amstrad CPC464 from a magazine; funnily enough this is still the go-to demonstration of neural networks! In the past neural networks did not gain traction because of the computational demands of training. This problem appears to have been solved with new algorithms and GPU-based computation. A second innovation is the introduction of techniques to evolve the structure of neural networks to do "deep learning".

Much of what is presented is familiar to me from my reading on machine learning (supervised and unsupervised learning, regression and classification), image analysis (convolution filters), and old-fashioned optimisation (stochastic gradient descent, Levenberg-Marquardt, genetic algorithms and simulated annealing). It does lead me to wonder sometimes whether there is nothing new under the sun and that many of these techniques are simply different fields of investigation re-casting the same methods in their own language. For example, the LeNet-5 networks used in image analysis contain convolution layers which act exactly like convolution filters in conventional image analysis, while the max pooling layers have the effect of downscaling the image. One would anticipate that the combination of the two gives the same effect as multi-scale image processing techniques.
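To illustrate the parallel (a sketch of my own, not from the book), convolution and max pooling can be written in a few lines of numpy; the "image" and kernel are invented:

import numpy as np

# An invented 5x5 "image" with a vertical edge down the middle
image = np.array([[1, 1, 1, 0, 0],
                  [1, 1, 1, 0, 0],
                  [1, 1, 1, 0, 0],
                  [1, 1, 1, 0, 0],
                  [1, 1, 1, 0, 0]], dtype=float)

# A 2x2 vertical-edge kernel, as used by a convolution filter in image analysis
kernel = np.array([[1, -1],
                   [1, -1]], dtype=float)

# "Valid" convolution (strictly a cross-correlation, as in CNN layers):
# slide the kernel over the image and sum the element-wise products
conv = np.array([[np.sum(image[i:i+2, j:j+2] * kernel)
                  for j in range(4)] for i in range(4)])

# 2x2 max pooling with stride 2 halves the resolution, i.e. downscales the result
pooled = np.array([[conv[i:i+2, j:j+2].max()
                    for j in range(0, 4, 2)] for i in range(0, 4, 2)])
print(conv)
print(pooled)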

The book provides a good summary on the fundamentals of neural networks, how they are built and trained, what different variants are called and then goes on to talk in more detail about the new stuff in deep learning. It turns out the label “deep” is applied to neural networks with more than two layers, which isn’t a particularly high bar. It isn’t clear whether this is two layers including the input and output layers or two layers of hidden neurons. I suspect it is the latter. These “deep” networks are typically generated automatically.

As the author highlights, with the proliferation of easy to use machine learning and neural network libraries the problem is no longer the core algorithm; rather, it is the selection of the right model for your particular problem and the optimisation of the learning and evaluation strategy. As a Pythonista, it looks like the way to go for me is the nolearn and Lasagne libraries. A measure of this book is that when I go to look at the documentation for these projects the titles at least make sense.
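The model selection point is worth illustrating. The sketch below uses scikit-learn rather than the libraries named above, purely to keep it short; the parameter grid is invented for the example:

from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier

# The classic handwritten digits dataset
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The hard part is no longer the algorithm but choosing the model and
# evaluation strategy: here, a small grid search over network structure
search = GridSearchCV(
    MLPClassifier(max_iter=1000, random_state=0),
    param_grid={"hidden_layer_sizes": [(32,), (64,), (64, 32)]},
    cv=3,
)
search.fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))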

The author finishes off with a description of his experience of doing a Kaggle challenge. I've done this; it's a great way of getting some experience in machine learning techniques on nearly real problems. I thought the coverage was a bit brief, but it highlighted how neural networks are used in combination with other techniques.

This isn't an in-depth book, but it introduces all the useful vocabulary and the appropriate libraries to start work in this area. And as a result I'm off to try t-SNE on a problem I'm working on, and then maybe try some analysis using the Lasagne library.
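For the record, t-SNE is available in scikit-learn; a minimal sketch with invented data looks like this:

import numpy as np
from sklearn.manifold import TSNE

# Invented high-dimensional data: 100 points in 50 dimensions
data = np.random.RandomState(0).normal(size=(100, 50))

# t-SNE embeds the points into two dimensions for visualisation
embedding = TSNE(n_components=2, random_state=0).fit_transform(data)
print(embedding.shape)  # (100, 2)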

Book review: Pro Git by Scott Chacon and Ben Straub

Pro Git by Scott Chacon and Ben Straub is available to download for free, or to read online at the website, but you can buy a paper copy if you prefer. I downloaded and read it on my tablet. Pro Git is the bible of all things relating to git, the distributed version control system. This is an application to record the history of changes to your computer code, or any other plain text file. Such applications are essential if you are a software company producing code commercially, or if you are collaborating on an open source project. They are also useful if you use code in analysis or modelling, as I do.

Git is most famous as the creation of Linus Torvalds in support of the development of the Linux operating system. For developers, version control is a fundamental activity which crosses all boundaries of domain and language. Git is one of the more recent examples in a line of version control systems; my former colleague Francis Irving wrote very nicely about this subject.

My adventures with source control extend over 20 years although it is fair to say that I didn’t really use them in anger until I worked at ScraperWiki. There my usage moved from being a safety line for work that only really impacted me, to a collaborative tool. I picked up my usage of git through pairing with other people, and through explicitly stated conventions for using git in a developing team. Essentially one of the other developers told us off if he thought our commit messages were not up to scratch! This is a good thing. This culturally determined use of git is important in collaborative environments.

My interest in git has recently been re-awoken for a couple of reasons: my new job means I’m doing a lot of coding, and I discovered GitKraken which is a blingy new git client. I’ve not used a graphical git client before but GitKraken is very pretty and the GUI invites you to discover more git functionality. Sadly it doesn’t work on my work PC, and once it leaves beta I have no idea what the price might be.

Pro Git starts with an introduction to git and the basics of getting up and running. It then goes on to describe how to use git in collaborative environments, how to use git with GitHub, and then more advanced topics such as how to write hooks and how to use git as a client to Subversion (an earlier source control system). Coverage feels pretty complete to me; it's true that you might resort to Stack Overflow to answer some questions, but that's universally true in coding.

The book finishes with a chapter on git internals: what is going on under the hood as you issue commands. Git has a famous division between "porcelain" and "plumbing" commands. Plumbing is what really gets things done, low-level commands with somewhat opaque meanings, whilst porcelain is the stuff you use day to day. The internals chapter starts by showing how the plumbing works by reproducing the effects of some of the porcelain commands. This is surprisingly informative, and built my confidence a bit; I always have some worry that I will lose something irrevocably by issuing the wrong command in git. These dangers exist, but Pro Git is clear about where they lie.

Here are a couple of things I’ve already started using on reading this book:

git log --since=1.week

This filters the log to show just the commits made in the last week; other time options are available. Invaluable for weekly reporting!

git describe

This makes a human-readable (sort of) build number based on the most recent tag and how far along you are from it.

And there are some things I used to wonder about. First of all, I should consider commits as a tree structure, with branches being pointers to particular commits. In this context HEAD^ refers to the parent commit of the current HEAD, or latest commit; HEAD~2 refers to the grandparent of the current commit, and so on. I now have some appreciation of soft, mixed and hard resets. Hard is bad: it could lose your work!

I now know why git filter-branch was spoken of in hushed tones in the ScraperWiki office, basically because it allows you to systematically rewrite the history of a repository which is sort of really wrong in source control.

Pro Git is good in outlining not only what you can do but also what you should do. For example, one has the choice with git to merge different branches or to carry out a rebase. I’d always been a bit vague on the difference between these two things but Pro Git explains clearly, and also tells you when you shouldn’t use rebase (when other people have seen the commits you are rebasing).

My electronic edition on Kindle does suffer from the occasional glitch, with some paragraphs appearing twice, but the writing is clear and natural. Pro Git can't be beaten for the price and it is probably worth the £32 Amazon charge for a paper copy.

Book review: Risk Assessment and Decision Analysis with Bayesian Networks by N. Fenton and M. Neil

As a new member of the Royal Statistical Society, I felt I should learn some more statistics. Risk Assessment and Decision Analysis with Bayesian Networks by Norman Fenton and Martin Neil is certainly a mouthful but, despite its dry title, it is a remarkably readable book on Bayes' theorem and how it can be used in risk assessment and decision analysis via Bayesian networks.

This is the "book of the software": the reader gets access to the "lite" version of the authors' AgenaRisk software, the website for which is here. The book makes heavy use of the software, both in presenting Bayesian networks and in the features discussed. This is no bad thing; the book is about helping people who analyse risk or build models to do their job, rather than providing a deeply technical presentation for those who might be building tools or doing research in the area of Bayesian networks. With access to AgenaRisk the reader can play with the examples provided and make a rapid start on their own models.

The book is divided into three large sections. The first six chapters provide an introduction to probability and the assessment of risk (essentially working out the probability of a particular outcome). The writing is pretty clear; I think it's the best explanation of the null hypothesis and p-values that I've read. The notorious "Monty Hall" problem is introduced. The book then goes into Bayes' theorem in more depth.
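The Monty Hall problem is a nice one to check by simulation; this little Python sketch (mine, not the book's) estimates the win probability for the switching and non-switching strategies:

import random

def play(switch, trials=100000):
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)    # the car is behind one of three doors
        choice = random.randrange(3)   # the contestant picks a door
        # Monty opens a door that is neither the contestant's choice nor the prize
        opened = next(d for d in range(3) if d != choice and d != prize)
        if switch:
            # Switch to the remaining unopened door
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == prize)
    return wins / trials

print(play(switch=True))   # about 2/3
print(play(switch=False))  # about 1/3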

Bayes' theorem originates in the writings of the Reverend Bayes, published posthumously in 1763. It concerns conditional probability, that is to say the likelihood that a hypothesis H is true given evidence E, written P(H|E). The core point is that we often have the inverse of what we want: an understanding of the likelihood of the evidence given a hypothesis, P(E|H). Bayes' theorem gives us a route to calculate P(H|E) given P(E|H), P(E) and P(H). The second benefit is that we can codify our prejudices (or not) using priors; other techniques deny the existence of such priors.
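A worked example helps here (my own numbers, purely for illustration): suppose a disease affects 1% of people, a test detects it 90% of the time, and the test gives a false positive 5% of the time.

# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
p_h = 0.01                    # prior: 1% of people have the disease
p_e_given_h = 0.90            # test sensitivity
p_e_given_not_h = 0.05        # false positive rate

# Total probability of a positive test, P(E)
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Posterior probability of disease given a positive test
p_h_given_e = p_e_given_h * p_h / p_e
print(p_h_given_e)            # about 0.15, lower than intuition suggests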

Bayesian statistics are often put in opposition to “frequentist” statistics. This division is sufficiently pervasive that starting to type frequentist, Google autocompletes with vs Bayesian! There is also an xkcd cartoon. Fenton and Neil are Bayesians and put the Bayesian viewpoint. As a casual observer of this argument I get the impression that the Bayesian view is prevailing.

Bayesian networks are structures (graphs) in which we connect together multiple "nodes" of Bayes' theorem. That is to say, we have multiple hypotheses with supporting (or not) evidence which lead to a grand "outcome" or hypothesis. Such a grand outcome might be the probability that someone is guilty in a criminal trial, or that your home might flood. These outcomes are conditioned on multiple pieces of evidence, or events, that need to be combined. The neat thing about Bayesian networks is that we can plug in what data we have to make estimates of the things we don't know, regardless of whether or not they are the "grand outcome".

The "Naive Bayesian Classifier" is a special case of a Bayesian network in which the evidence nodes are assumed independent of one another given the hypothesis, leading to a simple hub and spoke network.
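scikit-learn's implementation is an easy way to see this special case in action; a minimal sketch with invented data:

import numpy as np
from sklearn.naive_bayes import GaussianNB

# Invented two-dimensional evidence for two classes (the "spokes")
X = np.array([[1.0, 2.1], [1.2, 1.9], [0.9, 2.2],   # class 0
              [3.0, 0.5], [3.2, 0.4], [2.9, 0.7]])  # class 1
y = np.array([0, 0, 0, 1, 1, 1])

# Fit the naive Bayesian classifier and predict the class of a new point
model = GaussianNB().fit(X, y)
print(model.predict([[1.1, 2.0]]), model.predict_proba([[1.1, 2.0]]))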

Bayesian networks were relatively little used until computational developments in the 1980s meant that arbitrary networks could be "solved". I was interested to see David Spiegelhalter's name appear in this context; arguably he is one of the few publicly recognisable mathematicians in the UK.

The second section, covering four chapters, goes into some practical detail on how to construct Bayesian networks. This includes recurring idioms in Bayesian networks, which the authors name the cause-consequence idiom, the measurement idiom, the definitional/synthesis idiom and the induction idiom. The idea is that when you address a problem, rather than starting with a blank sheet of paper, you select the appropriate idiom as a starting point. The typical problem is that the "node probability tables" can quickly become very large for a carelessly constructed Bayesian network; the idioms in Risk Assessment help reduce this complexity.

Along with idioms this section also covers how ranked and continuous scales are handled, and in particular the use of dynamic discretization schemes for continuous scales. There is also a discussion of confidence levels which highlights the difference in thinking between Bayesians and frequentists, essentially the Bayesians are seeking the best answer given the circumstances whilst the frequentists are obsessing about the reliability of the evidence.

The final section of three chapters gives some concrete examples in specific fields: operational risk, reliability and the law. Of these I found the law examples the most pertinent. Bayesian analysis fits very comfortably with legal cases: in theory, a legal case is about assigning a probability to the guilt or otherwise of a defendant by evaluating the strength (or probability of being true) of the evidence. In practice one gets the impression that faulty "commonsense" can prevail in emotive cases, and that experts in Bayesian analysis are only brought in at appeal.

I don’t find this surprising, you only have to look at the amount of discussion arising from the Monty Hall problem to see that even “trivial” problems in probability can be remarkably hard to reason clearly about. I struggle with this topic myself despite substantial mathematical training.

Overall, a readable book on a complex topic. If you want to know about Bayesian networks and want to apply them then it is definitely worth getting, but it is not an entertaining book for the casual reader.

Book review: Spark GraphX in Action by Michael S. Malak and Robin East

I wrote about Spark not so long ago when I reviewed Learning Spark; at the time I noted that Learning Spark did not cover the graph processing component of Spark, GraphX. Spark GraphX in Action by Michael S. Malak and Robin East fills that gap.

I read the book via Manning's Early Access Program (MEAP); they approached me and gave me access to the whole book for free. This meant I read it on my Kindle, which I tend not to do these days for technical books because I still find paper a more flexible medium. Early Access means the book is still a little rough around the edges, but it is complete.

The authors suggest that readers should be comfortable reading Scala code to enjoy the book. Scala is the language Spark is written in, and the best way to access GraphX. In fact access via Python (my favoured route) is impossible, and using Java it sounds ugly. Scala is a functional language which runs on the Java virtual machine. It seems to be motivated by a desire to remove Java's verbosity, but perhaps goes a little too far. A `return` keyword is not needed to identify the return value of a function, since the last expression is returned. Its affectation is to overload the meaning of the underscore _. As it was, I felt comfortable enough reading Scala code. I was interested to read that the two "variable" definitions are `val` and `var`: `val` is immutable and is preferred, while `var` is mutable. This is probably a lesson for my Python programming: immutable "variables" can provide higher performance, and using immutable values for things that you intend to be immutable aids clarity and debugging.

From the point of view of someone who has read about Spark and graph theory in the past, the book is pitched at the right level: there is some introductory material about Spark and about graph theory, and then a set of examples. The book finishes with some material on inspecting running jobs in Spark using the Spark web interface. If you have never heard of Spark, then this book probably isn't a good place to start.

The examples start with basic algorithms for measuring shortest paths across a graph, connectedness and the PageRank algorithm, on which Google was originally built. These are followed by simple implementations of some further algorithms, including shortest paths with weighted edges (essential for route finding) and the travelling salesman problem. There then follows a chapter on some machine learning algorithms, including recommendation engines, spam detection and document clustering. Where appropriate the authors cite the original papers for algorithms, including PageRank, Pregel (Google's graph processing framework) and SVD++ (a key component of the winning entry for the Netflix recommendation prize), which is very welcome. The examples are outlines rather than full implementations of these sophisticated algorithms.
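For comparison, and because I prototype in Python, PageRank on a small graph is close to a one-liner with the networkx library; the graph here is invented:

import networkx as nx

# An invented directed graph of "web pages" linking to one another
G = nx.DiGraph([("a", "b"), ("b", "c"), ("c", "a"), ("d", "c"), ("a", "c")])

# PageRank scores each node by the structure of its incoming links
print(nx.pagerank(G, alpha=0.85))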

Finally, there is a chapter titled "The Missing Algorithms". This is more a discussion of utility functions for GraphX: importing graphs from other formats such as RDF, and operations such as merging two graphs or trimming away stray vertices.

The book gives the impression that GraphX is not ready for the big time yet, in a couple of places the authors said “this bit has only just started working”, and when they move on to talking about using SVD++ in GraphX they explain how the algorithm is only half implemented in GraphX. Full implementations are available in other languages.

It seemed to me on my original reading about Spark that the big benefit was that you could write machine learning systems in a familiar language which ran on a single machine in Spark, and then scale up effortlessly to a computing cluster if required. Those benefits are not currently present in GraphX: you need to worry about coding in a foreign language and about the quality of the underlying implementation. It feels like the appropriate approach (for me) is to prototype using Python and Neo4j, and likely discover that that is all that is needed. Only if you have a very large graph do you need to consider switching to a Spark-based solution, and I'm not convinced GraphX is how you would do it even then.
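To make that concrete, a prototype along those lines might start with the official neo4j Python driver; the connection details and query below are invented placeholders:

from neo4j import GraphDatabase

# Invented connection details for a local Neo4j instance
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# A simple Cypher query: count the nodes in the graph
with driver.session() as session:
    result = session.run("MATCH (n) RETURN count(n) AS nodes")
    print(result.single()["nodes"])

driver.close()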

The code samples are poorly formatted, but you can fix this by downloading the source code and viewing it in the editor of your choice with nice syntax highlighting and consistent indenting, which makes things much clearer. The figures are clear enough, but I find the Kindle approach of embedding thumbnail-scale figures unhelpful: you need to double click them to make them readable. A reasonable solution would be to make figures full page by default, if that is possible.

This is one of the better "* in Action" books I've read. It hasn't convinced me to use GraphX, quite the reverse, but that's no bad thing, and I've learnt a little about recommender algorithms and Scala.