
Book review: Working effectively with legacy code by Michael C. Feathers

Working effectively with legacy code by Michael C. Feathers is one of the programmer’s classic texts. I’d seen it lying around the office at ScraperWiki but hadn’t picked it up since I didn’t think I was working with legacy code. I returned to read it having found it at the top of the list of recommended programming books from Stack Overflow at dev-books. Reading the description I learnt that it’s more a book about testing than about legacy code. Feathers defines legacy code simply as code without tests; he is of the Agile school of software development, for whom tests are central.

With this in mind I thought it would be a useful read to help me improve my own code through better tests, and perhaps incidentally to pick up some object-oriented style, which I currently lack.

Following the theme of my previous blog post on women authors, I note that there are two women among the authors of the 30 books on the dev-books list. It’s interesting that a number of books in the style of Working Effectively explicitly reference women as project managers or testers in the text, i.e. as part of the team – I take this as a recognition that there is a problem which needs to be addressed, and this is pretty much the least you can do. However, beyond the family, friends and publishing team, the acknowledgements mention one woman in a lengthy list.

The book starts with a general overview of the techniques it will introduce, and of the tools used to apply them. These come down to testing frameworks and the refactoring tools found in many IDEs. The examples in the book are typically written in C++ or Java. I particularly liked the introduction of the ideas of the “seam” – a place where behaviour can be changed without editing the code – and the “enabling point” – the place where a change can be made at that seam. A seam may be a class that can be replaced by another one, or a value that can be altered. In desperate cases (in C) the preprocessor can be used to invoke test-time changes in the executed code.
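To make the idea concrete, here is a minimal sketch of a seam in Python (my language, rather than the book’s C++ or Java; all the class names are invented for illustration). The collaborator is the seam, and the constructor argument is the enabling point:

class MailSender:
    def send(self, address, message):
        ...  # talks to a real SMTP server in production

class InvoiceProcessor:
    # The constructor argument is the enabling point: production code
    # passes nothing and gets the real MailSender, a test passes a fake.
    def __init__(self, sender=None):
        self.sender = sender or MailSender()

    def process(self, invoice):
        self.sender.send(invoice["address"], "Your invoice is ready")

class FakeSender:
    def __init__(self):
        self.sent = []

    def send(self, address, message):
        self.sent.append((address, message))  # record rather than email

processor = InvoiceProcessor(sender=FakeSender())
processor.process({"address": "test@example.com"})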

There are then a set of chapters that answer questions that a legacy code-ridden developer might have such as:

  • I can’t get this class into a test harness
  • How do I know that I’m not breaking anything?
  • I need to make a change. What methods should I test?

This makes the book easy to navigate, if a little inelegant. It seems to me that the book addresses two problems in getting suitably sized pieces of code into a test harness. One is breaking the code into suitably sized pieces by, for example, extracting methods (see the sketch below). The second is making those pieces independent, so that they can be tested without building up a huge supporting infrastructure.
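As a sketch of the first of these, in Python rather than the book’s C++ or Java (the function and the pricing rule are invented for illustration), extracting a method turns a fragment of a long function into a small piece that can be tested on its own:

def total_price(items, discount_code):
    subtotal = sum(item["price"] * item["quantity"] for item in items)
    return apply_discount(subtotal, discount_code)

def apply_discount(subtotal, discount_code):
    # Extracted method: a pure calculation, testable without the rest
    # of the order-processing machinery around it.
    if discount_code == "SAVE10":
        return subtotal * 0.9
    return subtotal

assert apply_discount(100, "SAVE10") == 90.0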

Although I’ve not done any serious programming in Java or C++ I felt I generally understood the examples presented. My favoured language is Python, and the problems I tackle tend to be more amenable to a functional style of programming. Despite this I think many of the methods described are highly relevant – particularly those describing how to break down monster functions. The book is highly pragmatic; it accepts that the world is not full of applications in which beautiful structure diagrams are mirrored by beautiful code.

There are differences between these compiled object-oriented languages and Python, though. C#, Java, and C++ all have a collection of keywords (like public, private, protected, static and final) which control which methods on a class are visible and whether they can be overridden or replaced. These features present challenges for bringing legacy code under test. Python, on the other hand, has a “gentleman’s agreement” that method names starting with an underscore are private, but that’s it – there are no mechanisms to prevent you using these “private” functions! Similarly, pretty much any method in Python can be overridden by monkey-patching. That’s to say, if you don’t like a function in an imported library you can simply overwrite it with your own version after you’ve imported the library. This is not necessarily a good thing. A second difference is that Python comes with a unit testing framework and a mocking library, rather than these being third-party additions – although, to be fair, the mocking library in Python was originally third party.
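For example, here is a minimal sketch of monkey-patching, using the standard library’s time.sleep as the victim; the tidier, self-undoing route in tests is unittest.mock.patch:

import time
from unittest import mock

def slow_operation():
    time.sleep(60)  # in real code, perhaps a network call
    return "done"

# Monkey-patching: overwrite the library function with a do-nothing version
time.sleep = lambda seconds: None
print(slow_operation())  # returns "done" immediately

# The tidier alternative: the patch is undone when the block exits
with mock.patch("time.sleep"):
    print(slow_operation())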

I’ve often felt I should program in a more object-oriented style, but this book has made me reconsider. It’s quite clear that spaghetti code can be written in an object-oriented language as well as in any other. And I suspect the data processing for which I normally write code fits very well with a functional style. The ideas of single-responsibility functions and of testing still fit well with more functional programming styles.

Working effectively is readable and pragmatic. I suspect the developer’s dirty secret is that we actually wrote the legacy code that we’re now trying to fix.

Book review: Weapons of Math Destruction by Cathy O’Neil

Obviously for any UK anglophone the title of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil is going to be a bit grating. The book is an account of how algorithms can ruin people’s lives. To a degree the “Big Data” in the subtitle is incidental.

Cathy O’Neil started her career as a mathematician, then worked as a quant for the Shaw hedge fund before moving to Intent Media as a data scientist. It’s nice to know that I’m not the only person to have become a data scientist largely by writing “data scientist” on their CV! Nowadays she is an activist in the Occupy movement.

The book is the result of O’Neil’s revelation that algorithms were often used destructively and are responsible for gross injustices. Algorithms in this case are models that determine how companies, and sometimes governments, deal with their employees, customers and citizens: whether they are offered loans, shown adverts of a particular sort, given employment or terminated, or handed a lengthy prison sentence.

The book starts with her experience at Shaw, where she saw the subprime mortgage crisis from quite close up. In a nutshell: the subprime mortgage crisis happened because it was in the interests of most of the players in the industry for the stated risk of these mortgages to be minimised. The ratings agencies were paid by the aggregators of these mortgages to rate their risk, and the purchasers of those risk ratings had an interest in the ratings being low – the ratings agencies duly obliged.

The book goes on to cover a number of other “Weapons of Math Destruction”, including models for recruitment, insurance, credit rating, work scheduling, politics and policing. So, for example, there are predictive policing algorithms which direct the police to particular parts of town in an effort to reduce serious crime. Once there, the police record more anti-social behaviour, which leads the algorithm to send them there again – it turns out that serious crime is quite rare but anti-social behaviour isn’t, so there is more data to draw on. Meanwhile, the police in a number of countries follow the “zero-tolerance” model, which holds that if you address minor misdemeanours then more serious crimes are fixed automatically. The problem with this approach in the US is that the police are sent to black neighbourhoods repeatedly (rather than, say, college campuses) and the model is self-reinforcing.

O’Neil identifies several systematic problems which are typical of Weapons of Math Destruction: the use of proxies rather than “real” outcomes, the lack of feedback from outcomes to the model, the scale on which the model impacts people, the lack of fairness built into the model, the opacity of the models, and the damage the models can do. The damage is extensive; these WMDs can lead to you being arrested, incarcerated for lengthy periods, denied a job, denied medical insurance, or offered loans at the most extortionate rates to complete courses at rather lowly rated universities.

The book is focused almost entirely on the US; in fact the only mention of a place outside the US is of policing in the “city of Kent”. However, O’Neil does seem to rate the data and privacy legislation in Europe, where consumers should be told of the purposes to which their data will be put when they supply it. Even in the States the law provides some limits on certain types of model (such as credit scoring), but these laws have not kept pace with new developments, nor are they necessarily easy to use. For example, if your credit record is wrong then fixing it, although legally mandated, is neither quick nor easy.

Perhaps her most telling comment is that computers don’t understand fairness, and certainly don’t exhibit fairness if they are not asked to optimise for it. Which does lead to the question “How do you implement fairness?”. In some cases it is obvious: you shouldn’t make use of algorithms which explicitly take into account gender, race or disability. But it’s easy to bring these parameters in inadvertently: postcode, for example, is correlated with race, and part-time working with gender or disability.

As a middle aged, middle class white man with a reasonably well-paid job, living in a nice part of town I am least likely to find myself on the wrong end of an algorithm and ironically the most likely to be writing such algorithms.

I found the book very thought-provoking; it will certainly lead me to ask whether the algorithms and data that I generate are fair, and what the cost of any unfairness is.

Book review: Elasticsearch–The Definitive Guide by Clinton Gormley & Zachary Tong

Back to technology with this blog post and a review of Elasticsearch – The Definitive Guide by Clinton Gormley and Zachary Tong. The book is available for free online, probably in a more up to date form (here); that said, Elasticsearch seems to be quite stable now. I have a dead tree copy because I’m old-fashioned.

Elasticsearch is a full-text search engine based on the Apache Lucene project. I was first made aware of it when I was working at ScraperWiki, where we used it in a proof of concept system for analysing legislation from many countries (I wasn’t involved hands-on with this work). Recently, I used it to make a little auto-completion web form for company names using the Companies House dataset. From download to implementing a solution which was 1000 times faster than a naive SQL querying system took less than a day – the default configuration and system is that good!

You can treat Elasticsearch like a SQL database to a fair degree: what it refers to as indexes are what would be separate databases on a SQL server. Elasticsearch has document types instead of tables, and what would be rows in a SQL database are called “documents”. There are no joins as such in Elasticsearch, but there are a number of workarounds such as parent-child relationships, nested objects or plain old denormalisation. I suspect one needs to be a bit cautious of treating Elasticsearch as a funny-looking SQL database.

The preferred way to interact with Elasticsearch is via the HTTP API; this means that once it is installed you can prod away at your Elasticsearch database using curl from the command line, or the Sense plugin for Google Chrome. The book is liberally scattered with examples written as HTTP requests, and online these can be launched from the browser (given a bit of configuration). To my mind the only downside of this is that queries are written in JSON, which introduces a lot of extraneous brackets and quoting. For my experiments I moved quickly to using the Python interface, which seems well-supported and complete (as do other language bindings).
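As a sketch of what this looks like from Python – assuming the elasticsearch package’s older search(index=..., body=...) signature (newer releases have changed it) and invented index and field names:

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# The sort of prefix query I used for company name auto-completion
query = {"query": {"match_phrase_prefix": {"company_name": "acme eng"}}}
results = es.search(index="companies", body=query)

for hit in results["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["company_name"])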

Elasticsearch: The Definitive Guide is divided into 7 sections: Getting started, Search in Depth, Dealing with Human Language, Aggregations, Geolocation, Modelling your data, and finishes with Administration, Monitoring and Deployment.

The Getting Started section of the book covers everything you need to get going, but no single topic in any depth; the subsequent sections are largely about filling in that detail. The query language is completely different to SQL, and queries come back with results ranked by a relevance score. I suspect this is where I’ll find myself working a lot in future; currently my queries give me a set of results which I then filter in Python. I suspect I could write better queries which returned relevance scores matched to my application (and that I could trust). As it stands my queries always return *something*, which may or may not be what I want.
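The sort of thing I have in mind is a bool query, where must clauses filter and should clauses feed the relevance score, with min_score discarding weak matches instead of my Python post-filtering. A hedged sketch with invented field names, passed to es.search as in the earlier snippet:

query = {
    "query": {
        "bool": {
            "must": [{"match": {"company_name": "acme"}}],
            "should": [{"match": {"status": "active"}}],
        }
    },
    "min_score": 1.0,  # drop weakly matching hits rather than filtering later
}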

I found the material regarding analyzers (which are applied to searchable fields and, symmetrically, search terms) very interesting and applicable to wider search problems where Elasticsearch is not necessarily the technology to be used. There is an overlap here with natural language processing in the sense that analyzers can include tokenizers, stemmers, and synonym lookups which are all part of the NLP domain. This is expanded on further in the “Dealing with human language” section.
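By way of illustration, an analyzer is assembled from a tokenizer and a chain of token filters in the index settings. A sketch with invented names and an abbreviated synonym list:

settings = {
    "settings": {
        "analysis": {
            "filter": {
                "company_synonyms": {
                    "type": "synonym",
                    "synonyms": ["ltd, limited", "plc, public limited company"],
                }
            },
            "analyzer": {
                "company_name_analyzer": {
                    "type": "custom",
                    "tokenizer": "standard",  # split the text into words
                    # then lowercase, stem, and expand synonyms, in that order
                    "filter": ["lowercase", "porter_stem", "company_synonyms"],
                }
            },
        }
    }
}
es.indices.create(index="companies", body=settings)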

The section on aggregations explains Elasticsearch’s “group by”-like functionality, and that on geolocation touches on spatial extension-like behaviour. Elasticsearch handles geohashes which are a relatively recent innovation in encoding spatial coordinates.
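For instance, a terms aggregation is roughly SQL’s GROUP BY with a COUNT(*). A minimal sketch, again with an invented field name:

query = {
    "size": 0,  # return only the aggregation buckets, not the documents
    "aggs": {"by_category": {"terms": {"field": "category"}}},
}
results = es.search(index="companies", body=query)
for bucket in results["aggregations"]["by_category"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])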

The book mentions very briefly the ELK stack – Elasticsearch, Logstash and Kibana (all available from the Elastic website). This is used to analyse log files: Logstash funnels the log data into Elasticsearch, where it is visualised using Kibana. I tried out Kibana briefly; it’s an easy to use visualisation frontend.

Elasticsearch was designed as a Big Data technology from the start, which means it supports sharding, replication and distribution over nodes out of the box, but it runs fine on a simple single node such as my laptop.

Elasticsearch is a pretty big book, but the individual chapters are short and to the point. As I’d expect from O’Reilly, Elasticsearch is well-edited and readable. I found it great for working out what all the parts of Elasticsearch are, and I now know what exists when it comes to solving live problems. The book is pretty good at telling you which things you can do, and which things you should do.

The Logging module in Python

In the spirit of improving my software engineering practices I have been trying to make more use of the Python logging module. In common with many programmers my first instinct when debugging a programming problem is to use print statements (or their local equivalent) to provide an insight into what my program is up to. Obviously, I should be making use of any debugger provided but there is something reassuring about the immediacy and simplicity of print.

A useful evolution of the print statement in Python is the logging module, which can be used like a simple print function but can do so much more: you can configure loggers for different packages and modules whose behaviour can be controlled centrally, and you can vary the verbosity of your logging messages. If you decide to switch to logging to a file rather than the terminal this can be achieved too, and you can even post your log messages to a website using HTTPHandler. Obviously logging is about much more than debugging.
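For example, sending everything to a file is a one-argument change to the basic configuration (the filename here is my choice):

import logging

# Log to a file instead of the terminal
logging.basicConfig(filename="myapp.log", level=logging.INFO)
logging.info("This message ends up in myapp.log")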

I am writing this blog post because, as most of us have discovered, using logging is not quite as straightforward as we were led to believe. In particular you might find yourself in the situation where you feel you have set up your logging yet when you run your code nothing appears in your terminal window. Print doesn’t do this to you!

Loggers are arranged in a hierarchy, and loggers have handlers, which are the things that cause a log message to generate output to a device. If no logger is specified then a default logger, called the root logger, is used. A logger has a name, and the hierarchy is defined by the dots in the name, all the way “up” to the root logger. Any logger can have a handler attached to it; if no handler is attached then log messages are passed up to the parent logger.
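For example (with an invented logger name), a logger called “myapp.db” is a child of “myapp”, which is a child of the root logger. Having no handler of its own, its messages propagate up to the root logger’s handler (basicConfig, of which more below, puts a handler on the root logger):

import logging

logging.basicConfig(level=logging.INFO)  # attaches a handler to the root logger

logger = logging.getLogger("myapp.db")   # child of "myapp", which is a child of root
logger.info("connection opened")         # no handler here, so the message propagates
                                         # up and is output by the root logger's handler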

A log record has a message (the thing you would have printed) and a “level” which indicates the severity of the message; levels are specified by integers for which the logging module provides convenient labels. The levels, in order of severity, are logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR and logging.CRITICAL. A log handler will output a message if the level of the message is equal to or greater than the level the handler has been set to. So a handler set to WARNING will show messages at the WARNING, ERROR and CRITICAL levels but not at the INFO and DEBUG levels.

The simplest way to use the logging module is to import the library:

import logging

Then carry out some minimal configuration,

logging.basicConfig(level=logging.INFO)

and then put logging.info statements in our code, just as we would have done with print statements:

logging.info("This is a log message that takes a parameter = {}".format(a_parameter_value))

logging.debug, logging.warning, logging.error and logging.critical are used to publish log messages with different levels of severity. These are all convenience methods which remove the need to explicitly give the level as found in the logging.log function:

logging.log(logging.INFO, "This is a log message")

If we are writing a module, or other code that we anticipate others importing and running then we should create a logger using logging.getLogger(__name__) but leave configuring it to the caller. In this instance we use the name of the logger we have created instead of the module level “logging”. So to publish a message we would do:

logger = logging.getLogger(__name__)
logger.info("Hello")

In the module importing this library you would do something like:

import logging
import some_library

logging.basicConfig(level=logging.INFO)

# if you wanted to tweak the level of another logger
logger = logging.getLogger("some other logger")
logger.setLevel(logging.DEBUG)

basicConfig() configures the root logger, which is where all messages end up in the absence of any other handler. The behaviour of logging.basicConfig() is downright obstructive at times. The core of the problem is that it can only be invoked once in a session; any subsequent invocations are ignored. Worse than this, it can be invoked implicitly. So if, for example, you do:

import logging
logging.warning("Hello")

You’ll see a message because secretly logging has effectively run logging.basicConfig(level=logging.WARNING) for you (or something similar). This means that if you were to then naively go ahead and run basicConfig yourself:

logging.basicConfig(level=logging.INFO)

You would see no message when you subsequently ran logging.info("Hello") because the “second” invocation of logging.basicConfig is ignored.

We can explicitly set the properties of the root logger by doing:

root_logger = logging.getLogger()
root_logger.setLevel(logging.INFO)

You can debug issues like this by checking the handlers attached to a logger. If you do:

import logging
lgr = logging.getLogger()
lgr.handlers

You get the empty list []. Issue a logging.warning() message and you will see that a handler has been added to the root logger: lgr.handlers now returns something like [<logging.StreamHandler at 0x44327f0>].

If you want to see a list of all the loggers in the hierarchy then do:

logging.Logger.manager.loggerDict

So there you go: the logging module is great – you should use it instead of print. But beware of the odd behaviour of logging.basicConfig(), which I’ve spent most of this post griping about. I wrote this post mainly so that I have all my knowledge of logging in one place, rather than having to remember in which piece of code I pulled off a particular trick.

I used the logging documentation here, blog posts by Fang (here) and Praveen Gollakota (here), and tab completion in the IPython REPL in the preparation of this post.

Book review: Beautiful JavaScript edited by Anton Kovalyov

I have approached JavaScript in a crabwise fashion. A few years ago I managed to put together some visualisations by striking randomly at the keyboard. I then read Douglas Crockford’s JavaScript: The Good Parts, and bought JavaScript Bible by Danny Goodman, Michael Morrison, Paul Novitski and Tia Gustaff Rayl, which I used as a monitor stand for a couple of years.

Working at ScraperWiki (now The Sensible Code Company), I wrote some more rational code whilst pair-programming with a colleague. More recently I have been building demonstration and analytical web applications in JavaScript which access databases and display layered maps; some of the effects I achieve are even intentional! The importance of JavaScript for me is that nowadays, when I come to make a GUI for my analysis (usually written in Python), the natural thing to do is to build a web interface using JavaScript/CSS/HTML, because the “native” GUI toolkits for Python are looking dated and unloved. As my colleague pointed out, nowadays every decent web browser comes with a pretty complete IDE for JavaScript which allows you to run and inspect your code, profile network activity, add breakpoints and emulate a range of devices in both display and network bandwidth capabilities. Furthermore, there are a large number of libraries to help with almost any task. I’ve used d3 for visualisations, jQuery for just about everything, OpenLayers for maps, and three.js for high performance 3D rendering.

This brings me to Beautiful JavaScript: Leading Programmers Explain How They Think edited by Anton Kovalyov. The book is an edited volume featuring chapters from 15 experienced JavaScript programmers. The style varies dramatically, as you might expect, but chapters are well-edited and readable. Overall the book is only 150 pages. My experience is that learning a programming language is much more than the brute detail of the language syntax, reading this book is my way of finding out what I should do, rather than what it is possible to do.

It’s striking that several of the authors write about introducing class inheritance into JavaScript. To me this highlights the flexibility of programming languages, and possibly the inflexibility of programmers. Despite many years of abstract learning about object-oriented programming I persistently fail to use it, even when the features are available in the language I am using. I blame some of this on a long association with FORTRAN and then Matlab, which only introduced object-oriented features later in their lives. “Proper” developers, it seems, are taught to use class inheritance, and when the language they select does not offer it natively they improvise to re-introduce it. Since Beautiful JavaScript was published JavaScript has gained a class keyword, but this simply provides a prettier way of accessing JavaScript’s prototype inheritance mechanism.

Other chapters in Beautiful JavaScript are about coding style. For teams, a consistent and unflashy style is more important than using a language to its limits. Some chapters demonstrate just what those limits can be; for example, Graeme Roberts’ chapter “JavaScript is Cutieful” introduces us to some very obscure code. Other chapters offer practical implementations of a maths parser and a domain-specific language parser, and some notes on proper error handling.

JavaScript is an odd sort of language; at first it seemed almost like a toy language designed to do minor tasks on web pages. Twenty years after its birth it is everywhere, and multiple billion-dollar businesses are built on top of it. If you like, you can now code in JavaScript on your server as well as in the client web browser, using node.js. You can write in CoffeeScript, which compiles to JavaScript (I’ve never seen the point of this). Chapters by Jonathan Barronville on node.js and Rebecca Murphey on Backbone highlight this growing maturity.

Anton Kovalyov writes on how JavaScript can be used as a functional language. It’s illuminating to see this discussion alongside those looking at class inheritance-like behaviour. It highlights the risks of treating JavaScript as a language with class inheritance, or as a “true” functional language: although JavaScript might look like these things, ultimately it isn’t, and this may cause problems. For example, functional languages rely on data structures being immutable; they aren’t in JavaScript, so although you might decide in your functional programming mode that you will not modify the input arguments to a function, JavaScript will not stop you from doing so.

The authors are listed with brief biographies in the dead zone beyond the index, which is a pity because the biographies could very usefully have been presented at the beginning of each chapter. They are: Anton Kovalyov, Jonathan Barronville, Sara Chipps, Angus Croll, Marijn Haverbeke, Ariya Hidayat, Daryl Koopersmith, Rebecca Murphey, Daniel Pupius, Graeme Roberts, Jenn Schiffer, Jacob Thornton, Ben Vinegar, Rick Waldron and Nicholas Zakas. They have backgrounds with Twitter, Medium, Yahoo and diverse other places.

Beautiful JavaScript is a short, readable book which gives the relatively new JavaScript programmer something to think about.