Nov 18 2016
The Logging module in Python
In the spirit of improving my software engineering practices I have been trying to make more use of the Python logging module. In common with many programmers, my first instinct when debugging a programming problem is to use print statements (or their local equivalent) to provide an insight into what my program is up to. Obviously I should be making use of a proper debugger, but there is something reassuring about the immediacy and simplicity of print.
A useful evolution of the print statement in Python is the logging module, which can be used as a simple replacement for print but can do so much more: you can configure loggers for different packages and modules whose behaviour can be controlled centrally, and you can vary the verbosity of your logging messages. If you decide to switch to logging to a file rather than the terminal this can be achieved too, and you can even post your log messages to a website using HTTPHandler. Obviously logging is about much more than debugging.
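As a minimal sketch of that flexibility (the file name, host and path here are made up for illustration), redirecting your messages takes only a few lines:

import logging
from logging.handlers import HTTPHandler

# send messages to a file rather than the terminal
logging.basicConfig(filename="myapp.log", level=logging.INFO)

# and also POST them to a (hypothetical) website
http_handler = HTTPHandler("www.example.com", "/log", method="POST")
logging.getLogger().addHandler(http_handler)

logging.info("This goes to myapp.log and to www.example.com/log")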
I am writing this blog post because, as most of us have discovered, using logging is not quite as straightforward as we were led to believe. In particular you might find yourself in the situation where you feel you have set up your logging yet when you run your code nothing appears in your terminal window. Print doesn’t do this to you!
Loggers are arranged in a hierarchy. Loggers have handlers, which are the things that cause a logger to generate output to a device. If no logger is specified then a default logger, called the root logger, is used. A logger has a name and the hierarchy is defined by the dots in the name, all the way “up” to the root logger. Any logger can have a handler attached to it; if no handler is attached then any log message is passed to the parent logger.
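A minimal sketch of this propagation, using logger names I have made up:

import logging

logging.basicConfig(level=logging.INFO)  # attaches a handler to the root logger

# "myapp.db" sits below "myapp" in the hierarchy, which sits below root
child = logging.getLogger("myapp.db")

# neither "myapp.db" nor "myapp" has a handler attached, so this record
# is passed up the hierarchy and output by the root logger's handler
child.info("Connecting to the database")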
A log record has a message (the thing you would have printed) and a “level” which indicates the severity of the message. Levels are specified by integers, for which the logging module provides convenient labels. The levels, in order of increasing severity, are logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR and logging.CRITICAL. A log handler will output a message if the level of the message is equal to or greater than the level the handler has been set to. So a handler set to WARNING will show messages at the WARNING, ERROR and CRITICAL levels but not those at the INFO and DEBUG levels.
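You can verify in a REPL that the labels really are just integers, and watch a handler filter by level (the logger name here is made up):

import logging

print(logging.DEBUG, logging.INFO, logging.WARNING,
      logging.ERROR, logging.CRITICAL)  # prints: 10 20 30 40 50

logger = logging.getLogger("demo")
logger.setLevel(logging.DEBUG)  # the logger itself lets everything through

handler = logging.StreamHandler()
handler.setLevel(logging.WARNING)  # but the handler drops anything below WARNING
logger.addHandler(handler)

logger.info("suppressed")  # below WARNING, so the handler drops it
logger.warning("shown")    # WARNING or above, so it is output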
The simplest way to use the logging module is to import the library:
import logging
Then carry out some minimal configuration,
logging.basicConfig(level=logging.INFO)
and then put logging.info statements in our code, just as we would have done with print statements:
logging.info("This is a log message that takes a parameter = {}".format(a_parameter_value))
logging.debug, logging.warning, logging.error and logging.critical are used to publish log messages with different levels of severity. These are all convenience methods which remove the need to explicitly give the level as found in the logging.log function:
logging.log(logging.INFO, "This is a log message")
If we are writing a module, or other code that we anticipate others importing and running, then we should create a logger using logging.getLogger(__name__) but leave configuring it to the caller. In this instance we use the name of the logger we have created instead of the module-level “logging”. So to publish a message we would do:
logger = logging.getLogger(__name__)
logger.info("Hello")
In the module importing this library you would do something like:
import logging
import some_library

logging.basicConfig(level=logging.INFO)

# if you wanted to tweak the levels of another logger
logger = logging.getLogger("some other logger")
logger.setLevel(logging.DEBUG)
basicConfig() configures the root logger, which is where all messages end up in the absence of any other handler. The behaviour of logging.basicConfig() is downright obstructive at times. The core of the problem is that it can only be invoked once in a session; any subsequent invocations are ignored. Worse than this, it can be invoked implicitly. So if, for example, you do:
import logging

logging.warning("Hello")
You’ll see a message because secretly logging has effectively run logging.basicConfig(level=logging.WARNING) for you (or something similar). This means that if you were to then naively go ahead and run basicConfig yourself:
logging.basicConfig(level=logging.INFO)
You would see no message when you subsequently ran logging.info("Hello") because the “second” invocation of logging.basicConfig is ignored.
We can explicitly set the properties of the root logger by doing:
root_logger = logging.getLogger()
root_logger.setLevel(logging.INFO)
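Note that setting the level only helps if the root logger also has a handler attached. A minimal sketch of configuring it explicitly, side-stepping basicConfig entirely:

import logging

root_logger = logging.getLogger()
root_logger.setLevel(logging.INFO)

# attach a handler and formatter ourselves rather than relying on basicConfig
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(asctime)s %(name)s %(levelname)s: %(message)s"))
root_logger.addHandler(handler)

root_logger.info("Hello")  # appears regardless of any basicConfig history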
You can debug issues like this by checking the handlers attached to a logger. If you do:
import logging

lgr = logging.getLogger()
lgr.handlers
You get the empty list []. Issue a logging.warning() message and you will see that a handler has been added to the root logger: lgr.handlers now returns something like [<logging.StreamHandler at 0x44327f0>].
If you want to see a list of all the loggers in the hierarchy then do:
logging.Logger.manager.loggerDict
So there you go, the logging module is great – you should use it instead of print. But beware of the odd behaviour of logging.basicConfig() which I’ve spent most of this post griping about. This post is mainly so that I have all my knowledge of logging in one place, rather than trying to remember in which piece of code I pulled off a particular trick.
I used the logging documentation here, blog posts by Fang (here) and Praveen Gollakota (here) and tab completion in the ipython REPL in the preparation of this post.
Nov 17 2016
Book review: The Invention of Science by David Wootton
Back to the history of science with The Invention of Science by David Wootton which covers the period of the Scientific Revolution.
Wootton’s central theme is how language tracked the arrival of what we see as modern science in a period from about 1500 to 1700, and how this modern science was an important thing that has persisted to the present day. I believe he is a little controversial in denying the ubiquity of the Kuhnian paradigm shift and in his dismissal of what he refers to as the postmodern, “word-games” approach to the history of science, which sees scientific statements as entirely equivalent to statements of beliefs. This approach is exemplified by Leviathan and the Air-Pump by Steven Shapin and Simon Schaffer, which gets several mentions.
Wootton argues, contrary to Kuhn, that sometimes “paradigm shifts” happen almost silently. He also points out that Kuhn’s science is post-Scientific Revolution. One of the silent revolutions he cites is the model of the world. “Flat-earth” in no way describes the pre-Columbus model of the world, which originated in classical Greek scholarship. In this theoretical context the sphere is revered and the universe is built from the four elements: earth, air, fire and water. The model for the “earth” is therefore a variety of uncomfortable attempts to superimpose spheres of water and earth. The Ancients got away with this because in Classical times the known world did not cover enough of the earth’s sphere to reveal embarrassing discrepancies between theory and actuality. With Columbus’s “discovery” of America and other expeditions crossing the equator and reaching the Far East over land, these elemental sphere models were no longer viable. The new model of the earth which we hold to today entered quietly over the period 1475 to 1550.
Columbus’s “discovery” also marks one of the key themes of the book: the development of new language to describe the fruits of scientific investigation. Prior to Columbus the idea of an original discovery was poorly expressed in Western European languages; writers had to specifically emphasise that they were the first to find something or somewhere, rather than having a word to hand that expressed this. Prior to this time, Western European scholarship was very much focused on the “re-discovery” and re-interpretation of the lost wisdom of the Ancients. Words like “fact”, “laws” (of nature), “theories”, “hypotheses”, “experiment” and “evidence” also evolved over this period. This happened because the world was changing: the printing press had arrived (which changed communication and collaboration entirely), machines and instruments were being invented, and the application of maths was widening from early forms of banking to surveying and perspective drawing. These words morphed to their modern meanings across the European languages in a loosely coupled manner.
Experimentation is about more than just the crude mechanics of doing the experiment, it is about reporting that work to others so that they can replicate and extend the work. The invention of printing is important in this reporting process. This is why alchemy dies out sometime around the end of the 17th century. Although alchemy has experiments, clearly communicating your experiments to others is not part of the game. Alchemy is not a science, it is mysticism with scientific trappings.
As a sometime practising scientist, all of these elements of discovery – facts, evidence, laws, hypotheses and theories – are things whose definitions I take for granted. They are very clear to me now, and I know they are shared with other working scientists. What The Invention of Science highlights is that there was a time when this was not so.
The central section of the book finishes with some thoughts on whether the Industrial Revolution required the Scientific Revolution on which to build. The answer is ultimately “yes”, although the time it takes is considerable. It flows from the work of Denis Papin on a steam digester in the late 17th century to Newcomen’s invention of the steam engine in the early 18th century. Steam engines don’t become ubiquitous until much later in the 18th century. The point here is that Papin’s work is very much in the spirit of an “academic” scientist (he had worked with Robert Boyle), whereas Newcomen sits in the world of industrial engineering and commerce.
I’ve not seen such an analysis of language in the study of the Scientific Revolution before; the author notes that much of this study is made possible by the internet.
The editor clearly had a permissive view of footnotes, since almost every page has a footnote and more than a few pages are half footnote. The book also has endnotes, and some “afterthoughts”. Initially I found this a bit irritating but some of the footnotes are quite interesting. For example, the Matses tribe in the Amazon include provenance in their verb forms, using the incorrect verb form is seen as a lie. In my day to day work with data this “provenance required” approach is very appealing.
The Invention of Science is very rich and thought-provoking, and presents a thesis I had not seen presented before, although the “facts” of the Scientific Revolution are well known. I’m off to read Leviathan and the Air-Pump, partly on the recommendation of the author of this book.
Sep 20 2016
Book review: Beautiful JavaScript edited by Anton Kovalyov
I have approached JavaScript in a crabwise fashion. A few years ago I managed to put together some visualisations by striking randomly at the keyboard. I then read Douglas Crockford’s JavaScript: The Good Parts, and bought JavaScript Bible by Danny Goodman, Michael Morrison, Paul Novitski and Tia Gustaff Rayl, which I used as a monitor stand for a couple of years.
Working at ScraperWiki (now The Sensible Code Company), I wrote some more rational code whilst pair-programming with a colleague. More recently I have been building demonstration and analytical web applications using JavaScript which access databases and display layered maps, some of the effects I achieve are even intentional! The importance of JavaScript for me is that nowadays when I come to make a GUI for my analysis (usually in Python) then the natural thing to do is build a web interface using JavaScript/CSS/HTML because the “native” GUI toolkits for Python are looking dated and unloved. As my colleague pointed out, nowadays every decent web browser comes with a pretty complete IDE for JavaScript which allows you to run and inspect your code, profile network activity, add breakpoints and emulate a range of devices both in display and network bandwidth capabilities. Furthermore there are a large number of libraries to help with almost any task. I’ve used d3 for visualisations, jQuery for just about everything, OpenLayers for maps, and three.js for high performance 3D rendering.
This brings me to Beautiful JavaScript: Leading Programmers Explain How They Think edited by Anton Kovalyov. The book is an edited volume featuring chapters from 15 experienced JavaScript programmers. The style varies dramatically, as you might expect, but chapters are well-edited and readable. Overall the book is only 150 pages. My experience is that learning a programming language is much more than the brute detail of the language syntax, reading this book is my way of finding out what I should do, rather than what it is possible to do.
It’s striking that several of the authors write about introducing class inheritance into JavaScript. To me this highlights the flexibility of programming languages, and possibly the inflexibility of programmers. Despite many years of abstract learning about object-oriented programming I persistently fail to do it, even if the features are available in the language I am using. I blame some of this on a long association with FORTRAN and then Matlab, which only introduced object-oriented features later in their lives. “Proper” developers, it seems, are taught to use class inheritance and when the language they select does not offer it natively they improvise to re-introduce it. Since Beautiful JavaScript was published, JavaScript has gained a class keyword, but this simply provides a prettier way of accessing the prototype inheritance mechanism JavaScript already had.
Other chapters in Beautiful JavaScript are about coding style. For teams, a consistent and unflashy style is more important than using a language to its limits. Some chapters demonstrate just what those limits can be: for example, Graeme Roberts’ chapter “JavaScript is Cutieful” introduces us to some very obscure code. Other chapters offer practical implementations of a maths parser, a domain specific language parser and some notes on proper error handling.
JavaScript is an odd sort of a language; at first it seemed almost like a toy language designed to do minor tasks on web pages. Twenty years after its birth it is everywhere, and multiple billion-dollar businesses are built on top of it. If you like, you can now code in JavaScript on your server as well as in the client web browser, using node.js. You can write in CoffeeScript, which compiles to JavaScript (I’ve never seen the point of this). Chapters by Jonathan Barronville on node.js and Rebecca Murphey on Backbone highlight this growing maturity.
Anton Kovalyov writes on how JavaScript can be used as a functional language. It’s illuminating to see this discussion alongside those looking at class inheritance-like behaviour. It highlights the risks of treating JavaScript as a language with class inheritance, or as a “true” functional language: although JavaScript might look like these things, ultimately it isn’t, and this may cause problems. For example, functional languages rely on data structures being immutable; they aren’t in JavaScript, so although you might decide in your functional programming mode that you will not modify the input arguments to a function, JavaScript will not stop you from doing so.
The authors are listed with brief biographies in the dead zone beyond the index, which is a pity because the biographies could very usefully have been presented at the beginning of each chapter. They are: Anton Kovalyov, Jonathan Barronville, Sara Chipps, Angus Croll, Marijn Haverbeke, Ariya Hidayat, Daryl Koopersmith, Rebecca Murphey, Daniel Pupius, Graeme Roberts, Jenn Schiffer, Jacob Thornton, Ben Vinegar, Rick Waldron and Nicholas Zakas. They have backgrounds with Twitter, Medium, Yahoo and diverse other places.
Beautiful JavaScript is a short, readable book which gives the relatively new JavaScript programmer something to think about.
Sep 09 2016
Book review: The Runner’s Handbook by Bob Glover
I took up running a year or so ago, on May 2nd 2015 – to be precise. I know this because the first thing I did when I started running was buy a fancy GPS runner’s watch. I started because I wasn’t very fit, and the gym was expensive and didn’t really fit around my available time. You can see some statistics on my running in my earlier blog post. Not long after getting the watch I bought some proper running shoes and since then I’ve lost 10kg and gone from “running” 5km in something over 30 minutes to running 10km in under 50 minutes.
But I’ve got a bit jaded and stuck in my running ways, so I thought I’d get The Runner’s Handbook by Bob Glover to help me with my next steps. There are a wide range of guides on the internet, but I’m a committed book buyer. Running is pretty much my only exercise; although I cycle 40 miles to work and back most weeks, this is at a fairly leisurely pace. I’m aware that the lower half of my body, heart and lungs are quite fit but the rest is not – sort of like an inverted Popeye.
There is an early opportunity to test your current fitness with some standard exercises in The Runner’s Handbook. It turns out I’m in the top 25% for running speed over 1.5 miles, which makes me an intermediate runner, but I linger in the lower half for upper body strength and flexibility. To be honest, the push-ups test left me aching for several days after my over-enthusiastic efforts!
The Handbook is pretty comprehensive, 700 pages of comprehensive. It covers fitness and the motivation for getting fit, running programmes for different standards of runner, training for races all the way up to the marathon distance, and training for speed. I read the first 250 or so pages covering these topics and a further 100 or so on running form and supplemental training. I’ve skipped the sections on food and drink, running environment, running lifestyle, special runners and illness and injury. I can always dip into them if the need arises.
The writing is very readable; Glover chatters his way through the book, half in conversation with his co-author Jack Shepherd. Glover is a real enthusiast for his sport. He has trained many amateur runners, and you get a feel for New York state and the New York Marathon through his writing. He introduces some of the history of the modern running movement, dedicating the book to Fred Lebow who took the New York Road Runners Club from 200 members to over 30,000. He also took the New York Marathon out of Central Park and into the city, a pattern subsequently repeated around the world.
Glover also dedicates the book to Nina Kuscsik, who was instrumental in getting women admitted to longer distance races, starting with her own run in the 1972 Boston marathon, which was accompanied by significant male protest. I was bemused to read that the original sports bra was constructed in the late 1970s by stitching two jockstraps together.
As a result of reading the book I have a set of stretches and upper body strengthening exercises to try out. The sections covering these are a bit repetitive and could really have done with some diagrams. Fortunately, having introduced dozens of exercises, Glover does provide a sample workout with a more manageable set of things to do. I must admit one of my problems with running is that I try to go faster with every run I do. Reading The Handbook has backed me off from this futile quest. Now I can go out for a run and not feel I’ve failed if I don’t beat my previous fastest time.
The second thing I’ve learned is that I have to get a heart rate monitor! Okay, so Glover says I could do some crude manual measurement involving feeling my pulse and counting but where’s the fun in that? The idea being that you are in cardiovascular training mode when you reach something like 70-80% of your maximum heart rate. I have my eye on the Garmin Forerunner 235 or maybe the cheaper Garmin Forerunner 35. These both have light-based heart monitors under the wristband of the watch rather than the more usual (and cumbersome) chest monitors. By the way, you’re not going to read about the very latest gadgets in Glover’s book – this edition is from 1996 but it doesn’t feel at all dated.
This was the right sort of book for me, it is comprehensive and authoritative, it reads well and there are some things I can go away and do now to improve my running.
Aug 29 2016
Book review: Essential SQLAlchemy by Jason Myers and Rick Copeland
Essential SQLAlchemy by Jason Myers and Rick Copeland is a short book about the Python library SQLAlchemy which provides a programming interface to databases using the SQL query language. As with any software library there is ample online material on SQLAlchemy but I’m old-fashioned and like to buy a book.
SQL was one of those (many) things I was never taught as a scientific programmer, so I went off and read a book and blogged about it (rather more extensively than usual). It’s been a useful skill as I’ve moved away from the physical sciences to more data-oriented software development. As it stands I have a good theoretical knowledge of SQL and databases, and can write fairly sophisticated single table queries but my methodology for multi-table operations is stochastic.
I’m looking at using SQLAlchemy because it’s something I feel I should use, and people with far more experience than me recommend using it, or a similar system. Django, the web application framework, has its own ORM.
Essential SQLAlchemy is divided into three sections, on SQLAlchemy Core, SQLAlchemy ORM and Alembic. The first two represent the two main ways SQLAlchemy interacts with databases. The Core model is very much a way of writing SQL queries but with Pythonic syntax. I can see this having pros and cons. On the plus side I’ve seen SQLAlchemy used to write, succinctly, rather complex join queries. In addition, SQLAlchemy Core allows you to build queries conditionally, which is possible by using string manipulation on standard queries but requires some careful thought that SQLAlchemy has done for you. SQLAlchemy allows you to abstract away the underlying database so that, in principle, you can switch from SQLite to PostgreSQL seamlessly. In practice this is likely to be a bit fraught since different databases support different functionality. This becomes a problem when it becomes a problem. SQLAlchemy gives your Python programme a context for its queries, which I can see being invaluable in checking queries for correctness and documenting the database the programme accesses. On the con side: I usually know what SQL query I want to write, so I don’t see great benefit in adding a layer of Python to write that query.
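A minimal sketch of the Core style, using the 1.x-era API that the book describes and a single-table schema I have made up:

from sqlalchemy import (MetaData, Table, Column, Integer, String,
                        create_engine, select)

engine = create_engine("sqlite:///:memory:")
metadata = MetaData()

# a made-up single-table schema for illustration
cookies = Table("cookies", metadata,
                Column("id", Integer, primary_key=True),
                Column("name", String(50)),
                Column("quantity", Integer))
metadata.create_all(engine)

conn = engine.connect()
conn.execute(cookies.insert().values(name="chocolate chip", quantity=12))

# queries are built conditionally in Python rather than by string manipulation
query = select([cookies])
min_quantity = 10  # imagine this comes from user input
if min_quantity is not None:
    query = query.where(cookies.c.quantity >= min_quantity)
for row in conn.execute(query):
    print(row.name, row.quantity)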
SQLAlchemy Object Relational Mapper (ORM) is a different way of doing things. Rather than explicitly writing SQL-like statements we are invited to create classes which map to the database via SQLAlchemy. This leaves us to think about what we want our classes to do rather than worry too much about the database. This sounds fine in principle but I suspect the experienced SQL-user will know exactly what database schema they want to create.
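A sketch of the same made-up table in the ORM style, again assuming the 1.x-era API:

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

# the class defines the schema; SQLAlchemy generates the SQL
class Cookie(Base):
    __tablename__ = "cookies"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    quantity = Column(Integer)

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add(Cookie(name="oatmeal", quantity=3))
session.commit()
low_stock = session.query(Cookie).filter(Cookie.quantity < 10).all()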
Both the Core and ORM models allow the use of “reflection” to build the Pythonic structures from a pre-existing database.
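In the Core model that looks something like this sketch (the file name is hypothetical, and again this is the 1.x-era API):

from sqlalchemy import MetaData, Table, create_engine

engine = create_engine("sqlite:///existing.db")  # a hypothetical existing database
metadata = MetaData()

# build the Table object from the schema already in the database
cookies = Table("cookies", metadata, autoload=True, autoload_with=engine)
print(cookies.columns.keys())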
The third part of the book is on Alembic, a migrations manager for SQLAlchemy, which is installed separately. This automates the process of upgrading your database to a new schema (or downgrading it). You’d want to do this to preserve customer data in a transactional database storing orders or something like that. Here I learnt that SQLite does not have full ALTER TABLE functionality.
A useful pattern in both this book and in Test-driven Development is to wrap database calls in their own helper functions. This helps in testing but it also means that if you need to switch database backend or the library you are using for access then the impact is relatively low. I’ve gone some way to doing this in my own coding.
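A sketch of what that might look like, reusing the made-up cookies table from the Core example above:

from sqlalchemy import select

def get_cookie_quantity(conn, name):
    # wrap a single query so tests can stub this function, and a change
    # of database library touches only helpers like this one
    row = conn.execute(
        select([cookies.c.quantity]).where(cookies.c.name == name)
    ).fetchone()
    return row.quantity if row is not None else 0

def add_cookie(conn, name, quantity):
    # all inserts for this table go through one place
    conn.execute(cookies.insert().values(name=name, quantity=quantity))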
The sections on Core and ORM are almost word-for-word repeats, with only small variations to account for the different syntax between the two methods. Although this may have didactic value, it is frustrating in a book already so short.
Reading this book has made me realise that the use I put SQL to is a little unusual. I typically use a database to provide convenient access to a dataset I’m interested in, so I do a one-off upload of the data, apply indexes and then query. Once loaded the data doesn’t change. The datasets tend to be single tables with limited numbers of lookups, which I typically store outside of the database. Updates or transactions are rare, and if I want a new schema then I typically restart from scratch. SQLite is very good for this application. SQLAlchemy, I think, comes into its own in more transactional, multi-table databases where Alembic is used to manage migrations.
Ultimately, I suspect SQLAlchemy does not make for a whole book by itself, hence the briefness of this one despite much repeated material. Perhaps, “SQL for Python Programmers” would work better, covering SQL in general and SQLAlchemy as a special case.