Book review: Elasticsearch–The Definitive Guide by Clinton Gormley & Zachary Tong

Back to technology with this blog post and a review of Elasticsearch – The Definitive Guide by Clinton Gormley and Zachary Tong. The book is available for free online (here), where it is probably more up to date; that said, Elasticsearch seems to be quite stable now. I have a dead tree copy because I’m old-fashioned.

Elasticsearch is a full-text search engine based on the Apache Lucene project. I was first made aware of it when I was working at ScraperWiki, where we used it for a proof-of-concept system for analysing legislation from many countries (I wasn’t involved hands-on with this work). Recently, I used it to make a little auto-completion web form for company names using the Companies House dataset. From download to implementing a solution 1000 times faster than a naive SQL querying system took less than a day – the default configuration is that good!

You can treat Elasticsearch like a SQL database to a fair degree: what it refers to as indexes are what would be separate databases on a SQL server. Elasticsearch has document types instead of tables, and what would be rows in a SQL database are called “documents”. There are no joins as such in Elasticsearch, but there are a number of workarounds such as parent-child relationships, nested objects or plain old denormalisation. I suspect one needs to be a bit cautious of treating Elasticsearch as a funny-looking SQL database.

The preferred way to interact with Elasticsearch is its HTTP API, which means that once it is installed you can prod away at your Elasticsearch database using curl from the command line, or using the Sense plugin for Google Chrome. The book is liberally scattered with examples written as HTTP requests, and in the online version these can be launched from the browser (given a bit of configuration). To my mind the only downside of this is that queries are written in JSON, which introduces a lot of extraneous brackets and quoting. For my experiments I moved quickly to using the Python interface, which seems well-supported and complete (as do the other language bindings).
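As a flavour of the Python interface, here is a minimal sketch along the lines of my company-name auto-completion experiment. It assumes the official elasticsearch package and a local node on the default port; the index name, field and sample document are made up for illustration:

from elasticsearch import Elasticsearch

# Connect to a local node on the default port (9200)
es = Elasticsearch()

# Index a document; the "companies" index and its fields are hypothetical
es.index(index="companies", doc_type="company", id=1,
         body={"name": "Acme Trading Limited"})

# Elasticsearch is near real-time, so force a refresh to make the new
# document visible to search straight away
es.indices.refresh(index="companies")

# A prefix-style match, the sort of query behind an auto-completion box
results = es.search(index="companies",
                    body={"query": {"match_phrase_prefix": {"name": "acme tr"}}})

for hit in results["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["name"])

Note that more recent versions of Elasticsearch have done away with document types, so the doc_type argument dates this sketch to roughly the Elasticsearch of the book’s era.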

Elasticsearch: The Definitive Guide is divided into seven sections: Getting Started, Search in Depth, Dealing with Human Language, Aggregations, Geolocation, Modelling Your Data and, finally, Administration, Monitoring and Deployment.

The Getting Started section of the book covers everything you need to get going, but no single topic in any depth; the subsequent sections are largely about filling in that detail. The query language is completely different from SQL, and queries come back with results ranked by a relevance score. I suspect this is where I’ll find myself working a lot in future; currently my queries give me a set of results which I then filter in Python. I suspect I could write better queries which would return relevance scores matched to my application (and that I would trust). As it stands my queries always return *something*, which may or may not be what I want.

I found the material regarding analyzers (which are applied to searchable fields and, symmetrically, search terms) very interesting and applicable to wider search problems where Elasticsearch is not necessarily the technology to be used. There is an overlap here with natural language processing in the sense that analyzers can include tokenizers, stemmers, and synonym lookups which are all part of the NLP domain. This is expanded on further in the “Dealing with human language” section.
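As an illustration of the idea, the _analyze endpoint shows what an analyzer does to a piece of text. Here is a minimal sketch using the Python client and the built-in english analyzer, assuming a version of Elasticsearch that accepts the analyzer and text in the request body:

from elasticsearch import Elasticsearch

es = Elasticsearch()

# Run the built-in "english" analyzer over some text; it tokenizes,
# lowercases, drops stopwords and stems
result = es.indices.analyze(body={"analyzer": "english",
                                  "text": "The QUICK brown foxes jumped"})
print([token["token"] for token in result["tokens"]])
# expect something like: ['quick', 'brown', 'fox', 'jump']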

The section on aggregations explains Elasticsearch’s “group by”-like functionality, and that on geolocation touches on spatial extension-like behaviour. Elasticsearch handles geohashes, which are a relatively recent innovation in encoding spatial coordinates.
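A minimal sketch of a terms aggregation, the closest thing to SQL’s GROUP BY ... COUNT(*), again using the Python client with a hypothetical “companies” index and “category” field:

from elasticsearch import Elasticsearch

es = Elasticsearch()

# Count documents per category; "size": 0 suppresses the individual
# hits because we only want the bucket counts
results = es.search(index="companies",
                    body={"size": 0,
                          "aggs": {"by_category": {
                              "terms": {"field": "category"}}}})

for bucket in results["aggregations"]["by_category"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])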

The book very briefly mentions the ELK stack, which is Elasticsearch, Logstash and Kibana (all available from the Elastic website). This is used to analyse log files: Logstash funnels the log data into Elasticsearch, where it is visualised using Kibana. I tried out Kibana briefly; it’s an easy-to-use visualisation frontend.

Elasticsearch has been a Big Data technology from the start, which means it supports sharding, replication and distribution over nodes out of the box, but it runs fine on a simple single node such as my laptop.
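Shard and replica counts are just index settings; as a hedged sketch (the numbers here are arbitrary), you can request them when creating an index:

from elasticsearch import Elasticsearch

es = Elasticsearch()

# Five primary shards, each with one replica; on a single-node setup
# the replicas simply remain unassigned until another node joins
es.indices.create(index="companies",
                  body={"settings": {"number_of_shards": 5,
                                     "number_of_replicas": 1}})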

Elasticsearch: The Definitive Guide is a pretty big book, but the individual chapters are short and to the point. As I’d expect from O’Reilly, it is well-edited and readable. I found it great for working out what all the parts of Elasticsearch are, so I now know what exists when it comes to solving live problems. The book is good at telling you not just which things you can do, but which things you should do.

Book review: Roman Chester by David J.P. Mason

I recently realised that I live in a city with rather remarkable Roman roots. Having read Mary Beard’s book, SPQR, about the Romans in Rome, I now turn to Roman Chester: Fortress at the Edge of the World by David J.P. Mason.

The book starts with a chapter on the history of the study of Chester’s Roman origins, and some background on Roman activities in Britain. The study of the Roman history of Chester began back in the 18th century, with the hypocaust under the old Feathers Inn on Bridge Street being a feature promoted by its owner. The Spud-u-like on the site now similarly boasts of its Roman remains. The original Roman east gate was still standing in the 18th century, and several drawings of it from that period exist. The Victorians were keen excavators of the Roman archaeology, forming the Chester Archaeological Society in 1849 and building the Grosvenor Museum in 1883.

A recurring theme of the book is the rather wilful destruction of substantial remains in the 1960s to build a couple of shopping centres. The Roman remains on the current Forum Shopping Centre site were destroyed after the rather fine Old Market Hall had been knocked down.

The core Roman activity in Chester was the fortress, established in 75AD under the reign of Vespasian. The fort is somewhat larger than other similar forts in England, and the author suggests this is because it was, at one time, intended as the provincial governor’s base. Vespasian died shortly after the building of the Chester fortress started, and the work paused. At the time of its Roman occupation Chester had a very fine harbour; the local sandstone was suitable for building; a brickworks was set up at Holt, further up the River Dee; there was metal mining in North Wales; and salt was sourced from Northwich – all very important resources at the time.

Standing on the River Dee meant Chester could serve as a base for the further conquest of Britain and Ireland – although these plans did not come to fruition. The evidence for this is some unusual buildings in the centre of the old fortress, the rather more impressive nature of the original walls compared with the average Roman fort, and the discovery of rather classier than usual lead piping.

The book continues with a detailed examination of the various parts of the Roman fortress and the buildings it contained: the public baths, granaries and barracks. This is followed by a discussion of the surrounding canabae legionis, including the amphitheatre, the supporting Roman settlement and the more detached vicus. This includes the settlement at Heronbridge, which was excavated relatively recently.

The third part of the book travels through time, looking at the period c90-c120, in which the fortress was rebuilt; c120-c210, when the legion stationed at Chester was sent elsewhere to fight, leaving the fortress to decline significantly; c210-c260, when the original impressive buildings at the heart of the fortress, not initially completed, were finally built; c260-c350, when the fortress fell and rose again; and finally c350-c650, when Britain became detached from Rome and fell into decline. The Roman fortress was robbed to provide building stone for the medieval walls and other structures, including the cathedral.

Roman remains are visible throughout modern Chester. The north and east parts of the modern city walls follow the line of the walls of the Roman fortress. Some pillars are on display in front of the library; the hypocaust found under the Grosvenor shopping centre can now be found in the Roman Gardens; the amphitheatre is half exposed; parts of the walls, particularly near Northgate and parallel to Frodsham Street, contain Roman elements; and the mysterious “quay wall” can be found down by the racecourse.

The book finishes with some comments on the general character of the investigations of Roman remains in Chester, with suggestions for further investigations and for how to better exploit Chester’s Roman history. On the whole Chester has done moderately well in its treatment of the past: study started relatively early, but much material has not been published. These days archaeology is mandated for new developments in the city, but these tend to be rapid, keyhole operations with little coherent design.

Roman Chester is a rather dry read; it is written much as I would expect an article in a specialist archaeology journal to be written. The book could have done with a full double-page map of modern, central Chester with the archaeological sites marked on it. As it was, I was flicking between text descriptions and Google Maps to work out where everything was. Perhaps a project for the Christmas holiday!

If you are a resident of Chester then the book is absolutely fascinating.

Update

I’ve started making a map of Roman Chester on Google Maps.

The Logging module in Python

In the spirit of improving my software engineering practices I have been trying to make more use of the Python logging module. In common with many programmers my first instinct when debugging a programming problem is to use print statements (or their local equivalent) to provide an insight into what my program is up to. Obviously, I should be making use of any debugger provided but there is something reassuring about the immediacy and simplicity of print.

A useful evolution of the print statement in Python is the logging module, which can be used as a simple print function but can do so much more: you can configure loggers for different packages and modules whose behaviour can be controlled centrally, and you can vary the verbosity of your logging messages. If you decide to switch to logging to a file rather than the terminal this can be achieved too, and you can even post your log messages to a website using HTTPHandler. Obviously logging is about much more than debugging.
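For example, here is a minimal sketch of sending log messages to a file instead of the terminal; the filename and format string are just illustrative:

import logging

# Attach a file handler to the root logger; messages at INFO and above
# now go to myapp.log rather than the terminal
handler = logging.FileHandler("myapp.log")
handler.setFormatter(logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s"))

root_logger = logging.getLogger()
root_logger.addHandler(handler)
root_logger.setLevel(logging.INFO)

logging.info("This message ends up in myapp.log")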

I am writing this blog post because, as most of us have discovered, using logging is not quite as straightforward as we were led to believe. In particular, you might find yourself in the situation where you feel you have set up your logging, yet when you run your code nothing appears in your terminal window. Print doesn’t do this to you!

Loggers are arranged in a hierarchy. Loggers have handlers, which are the things that cause a log record to generate output on a device. If no logger is specified then the default logger, called the root logger, is used. A logger has a name, and the hierarchy is defined by the dots in the name, all the way “up” to the root logger. Any logger can have a handler attached to it; if a logger has no handler then its log messages are passed to the parent logger.

A log record has a message (the thing you would have printed) and a “level” which indicates the severity of the message. The levels are specified by integers, for which the logging module provides convenient labels; in order of increasing severity they are logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR and logging.CRITICAL. A log handler will output a message if the level of the message is equal to or higher than the level the handler has been set to. So a handler set to WARNING will show messages at the WARNING, ERROR and CRITICAL levels, but not those at the INFO and DEBUG levels.
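A minimal sketch of the hierarchy and levels in action; the logger name “myapp.db” is made up:

import logging

# Give the root logger a handler writing to the terminal, set to WARNING
root_logger = logging.getLogger()
root_logger.addHandler(logging.StreamHandler())
root_logger.setLevel(logging.WARNING)

# "myapp.db" has no handler of its own, so its messages pass up the
# hierarchy (myapp.db -> myapp -> root) to the root logger's handler
db_logger = logging.getLogger("myapp.db")
db_logger.info("Opened connection")   # below WARNING, so not shown
db_logger.error("Lost connection")    # ERROR outranks WARNING, shown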

The simplest way to use the logging module is to import the library:

import logging

Then carry out some minimal configuration,

logging.basicConfig(level=logging.INFO)

and then put logging.info statements in our code, just as we would have done with print statements:

logging.info("This is a log message that takes a parameter = {}".format(a_parameter_value))

logging.debug, logging.warning, logging.error and logging.critical are used to publish log messages with different levels of severity. These are all convenience methods which remove the need to explicitly give the level as found in the logging.log function:

logging.log(logging.INFO, "This is a log message")

If we are writing a module, or other code that we anticipate others importing and running, then we should create a logger using logging.getLogger(__name__) but leave configuring it to the caller. In this instance we use the name of the logger we have created instead of the module-level “logging”. So to publish a message we would do:

logger = logging.getLogger(__name__)
logger.info("Hello")

In the module importing this library you would do something like:

import logging
import some_library

logging.basicConfig(level=logging.INFO)
# if you wanted to tweak the level of another logger
logger = logging.getLogger("some other logger")
logger.setLevel(logging.DEBUG)

basicConfig() configures the root logger, which is where all messages end up in the absence of any other handler. The behaviour of logging.basicConfig() is downright obstructive at times. The core of the problem is that it can only be invoked once in a session; any subsequent invocations are ignored. Worse than this, it can be invoked implicitly. So if, for example, you do:

import logging
logging.warning("Hello")

You’ll see a message because, behind the scenes, logging has effectively run logging.basicConfig(level=logging.WARNING) for you (or something similar). This means that if you were then to naively go ahead and run basicConfig yourself:

logging.basicConfig(level=logging.INFO)

You would see no message when you subsequently ran logging.info("Hello"), because the “second” invocation of logging.basicConfig is ignored.

We can explicitly set the properties of the root logger by doing:

root_logger = logging.getLogger()
root_logger.setLevel(logging.INFO)

You can debug issues like this by checking the handlers attached to a logger. If you do:

import logging
lgr = logging.getLogger()
lgr.handlers

You get the empty list []. Issue a logging.warning() message and you will see that a handler has been added to the root logger: lgr.handlers now returns something like [<logging.StreamHandler at 0x44327f0>].

If you want to see a list of all the loggers in the hierarchy then do:

logging.Logger.manager.loggerDict

So there you go, the logging module is great – you should use it instead of print. But beware of the odd behaviour of logging.basicConfig(), which I’ve spent most of this post griping about. This post is mainly so that I have all my knowledge of logging in one place, rather than trying to remember which piece of code I pulled a particular trick from.

I used the logging documentation (here), blog posts by Fang (here) and Praveen Gollakota (here), and tab completion in the IPython REPL in the preparation of this post.

Book review: The Invention of Science by David Wootton

Back to the history of science with The Invention of Science by David Wootton, which covers the period of the Scientific Revolution.

Wootton’s central theme is how language tracked the arrival of what we see as modern science in the period from about 1500 to 1700, and how this modern science was an important thing that has persisted to the present day. I believe he is a little controversial in denying the ubiquity of the Kuhnian paradigm shift, and in his dismissal of what he refers to as the postmodern, “word-games” approach to the history of science, which sees scientific statements as entirely equivalent to statements of belief. This approach is exemplified by Leviathan and the Air-Pump by Steven Shapin and Simon Schaffer, which gets several mentions.

Wootton argues, contrary to Kuhn, that sometimes “paradigm shifts” happen almost silently. He also points out that Kuhn’s science is post-Scientific Revolution. One of the silent revolutions he cites is the model of the world. “Flat-earth” in no way describes the pre-Columbus model of the world, which originated in classical Greek scholarship. In this theoretical context the sphere is revered and the universe is built from the four elements: earth, air, fire and water. The model for the earth is therefore a variety of uncomfortable attempts to superimpose spheres of water and earth. The Ancients got away with this because in Classical times the known world did not cover enough of the earth’s sphere to reveal embarrassing discrepancies between theory and actuality. With Columbus’s “discovery” of America, and other expeditions crossing the equator and reaching the Far East over land, these elemental sphere models were no longer viable. The new model of the earth, which we hold to today, entered quietly over the period 1475 to 1550.

Columbus’s “discovery” also marks one of the key themes of the book: the development of new language to describe the fruits of scientific investigation. Prior to Columbus the idea of an original discovery was poorly expressed in Western European languages; writers had to specifically emphasise that they were the first to find something out, rather than having a word to hand that expressed this. Prior to this time, Western European scholarship was very much focused on the “re-discovery” and re-interpretation of the lost wisdom of the Ancients. Words like “fact”, “laws” (of nature), “theories”, “hypotheses”, “experiment” and “evidence” also evolved over this period. This happened because the world was changing: the printing press had arrived (which changed communication and collaboration entirely), machines and instruments were being invented, and the application of maths was widening from early forms of banking to surveying and perspective drawing. These words morphed to their modern meanings across the European languages in a loosely coupled manner.

Experimentation is about more than just the crude mechanics of doing the experiment; it is also about reporting that work to others so that they can replicate and extend it. The invention of printing is important in this reporting process. This is why alchemy dies out sometime around the end of the 17th century: although alchemy has experiments, clearly communicating your experiments to others is not part of the game. Alchemy is not a science, it is mysticism with scientific trappings.

As a sometime practising scientist, all of these elements of discovery, facts, evidence, laws, hypotheses and theories are things whose definitions I take for granted. They are very clear to me now, and I know they are shared with other working scientists. What The Invention of Science highlights is that there was a time when this was not so.

The central section of the book finishes with some thoughts on whether the Industrial Revolution required the Scientific Revolution on which to build. The answer is ultimately “yes”, although the time it took is considerable: it flows from the work of Denis Papin on the steam digester in the late 17th century to Newcomen’s invention of the steam engine in the early 18th century, and steam engines did not become ubiquitous until much later in the 18th century. The point here is that Papin’s work is very much in the spirit of an “academic” scientist (he had worked with Robert Boyle), whereas Newcomen sat in the world of industrial engineering and commerce.

I’ve not seen such an analysis of language in the study of the Scientific Revolution before; the author notes that much of this study is made possible by the internet.

The editor clearly had a permissive view of footnotes, since almost every page has a footnote and more than a few pages are half footnote. The book also has endnotes, and some “afterthoughts”. Initially I found this a bit irritating, but some of the footnotes are quite interesting. For example, the Matses tribe in the Amazon include provenance in their verb forms; using the incorrect verb form is seen as a lie. In my day-to-day work with data this “provenance required” approach is very appealing.

The Invention of Science is very rich and thought-provoking, and presents a thesis I had not seen presented before, although the “facts” of the Scientific Revolution are well known. I’m off to read Leviathan and the Air-Pump, partly on the recommendation of the author of this book.

Book review: Beautiful JavaScript edited by Anton Kovalyov

I have approached JavaScript in a crabwise fashion. A few years ago I managed to put together some visualisations by striking randomly at the keyboard. I then read Douglas Crockford’s JavaScript: The Good Parts, and bought JavaScript Bible by Danny Goodman, Michael Morrison, Paul Novitski and Tia Gustaff Rayl, which I used as a monitor stand for a couple of years.

Working at ScraperWiki (now The Sensible Code Company), I wrote some more rational code whilst pair-programming with a colleague. More recently I have been building demonstration and analytical web applications in JavaScript which access databases and display layered maps; some of the effects I achieve are even intentional! The importance of JavaScript for me is that nowadays, when I come to make a GUI for my analysis (usually in Python), the natural thing to do is to build a web interface using JavaScript/CSS/HTML, because the “native” GUI toolkits for Python are looking dated and unloved. As my colleague pointed out, nowadays every decent web browser comes with a pretty complete IDE for JavaScript, which allows you to run and inspect your code, profile network activity, add breakpoints and emulate a range of devices in both display and network bandwidth capabilities. Furthermore there are a large number of libraries to help with almost any task. I’ve used d3 for visualisations, jQuery for just about everything, OpenLayers for maps, and three.js for high-performance 3D rendering.

This brings me to Beautiful JavaScript: Leading Programmers Explain How They Think, edited by Anton Kovalyov. The book is an edited volume featuring chapters from 15 experienced JavaScript programmers. The style varies dramatically, as you might expect, but the chapters are well-edited and readable; overall the book is only 150 pages. My experience is that learning a programming language is about much more than the brute detail of the language syntax; reading this book is my way of finding out what I should do, rather than what it is possible to do.

It’s striking that several of the authors write about introducing class inheritance into JavaScript. To me this highlights the flexibility of programming languages, and possibly the inflexibility of programmers. Despite many years of abstract learning about object-oriented programming I persistently fail to do it, even when the features are available in the language I am using. I blame some of this on a long association with FORTRAN, and then Matlab, which only introduced object-oriented features later in their lives. “Proper” developers, it seems, are taught to use class inheritance, and when the language they select does not offer it natively they improvise to re-introduce it. Since Beautiful JavaScript was published JavaScript has gained a class keyword, but this simply provides a prettier way of accessing JavaScript’s prototype-based inheritance mechanism.

Other chapters in Beautiful JavaScript are about coding style. For teams, a consistent and unflashy style is more important than using a language to its limits. Some chapters demonstrate just what those limits can be; for example, Graeme Roberts’ chapter “JavaScript is Cutieful” introduces us to some very obscure code. Other chapters offer practical implementations of a maths parser and a domain-specific language parser, and some notes on proper error handling.

JavaScript is an odd sort of language: at first it seemed almost like a toy language designed to do minor tasks on web pages, but twenty years after its birth it is everywhere, and multiple billion-dollar businesses are built on top of it. If you like, you can now code in JavaScript on your server as well as in the client web browser, using node.js. You can write in CoffeeScript, which compiles to JavaScript (I’ve never seen the point of this). Chapters by Jonathan Barronville on node.js and Rebecca Murphey on Backbone highlight this growing maturity.

Anton Kovalyov writes on how JavaScript can be used as a functional language. It’s illuminating to see this discussion alongside those looking at class inheritance-like behaviour; it highlights the risks of treating JavaScript as a language with class inheritance, or as a “true” functional language. The risk is that although JavaScript might look like these things, ultimately it isn’t, and this may cause problems. For example, functional languages rely on data structures being immutable, which they aren’t in JavaScript; so although you might decide, in your functional programming mode, not to modify the input arguments to a function, JavaScript will not stop you from doing so.

The authors are listed with brief biographies in the dead zone beyond the index, which is a pity because the biographies could very usefully have been presented at the beginning of each chapter. They are: Anton Kovalyov, Jonathan Barronville, Sara Chipps, Angus Croll, Marijn Haverbeke, Ariya Hidayat, Daryl Koopersmith, Rebecca Murphey, Daniel Pupius, Graeme Roberts, Jenn Schiffer, Jacob Thornton, Ben Vinegar, Rick Waldron and Nicholas Zakas. They have backgrounds with Twitter, Medium, Yahoo and diverse other places.

Beautiful JavaScript is a short, readable book which gives the relatively new JavaScript programmer something to think about.