
Book review: Exercises in Programming Style by Cristina Videira Lopes

Recently our CIO has allowed us to claim one technical book per quarter on expenses as part of our continuing professional development. Needless to say, since I was buying these books already, I leapt at the opportunity! The first fruit of this policy is Exercises in Programming Style by Cristina Videira Lopes.

The book is modelled on Raymond Queneau’s Exercises in Style, which tells the same story in 99 different ways.

Exercises in Programming Style takes a simple exercise, counting the frequency of words in a file and reporting the top 25, and writes a program to do this in forty different styles spread across ten sections.

The sections are historical, basic styles, function composition, objects and object interaction, reflection and metaprogramming, adversity, data-centric, concurrency, interactivity, and neural networks. The section on neural networks breaks the pattern, with example programs handling only small elements of the word frequency problem. The sections vary in size; the one on objects and object interaction is the largest.

Lopes talks about styles in terms of constraints: for example, in the “Good old times” historical style there are no named variables and memory is limited, while in the “Letterbox” style objects pass messages to one another to prompt actions.

The shortest implementation of the example is in the “Code Golf” chapter, at just six lines; other examples run to a couple of pages – a hundred lines or so. Lopes is somewhat opinionated as to style but quite balanced, providing reasoning for where unusual styles may be appropriate. This was most striking for me in the section on “Adversity”, which discusses error handling. Lopes suggests that a “Passive Aggressive” style, with error handling all occurring at the top level in a try-except block, is better than my error handling to date, which has been more in the “Constructivist” (trapping errors but proceeding with defaults) or “Tantrum” (catching errors and refusing to proceed) styles.

Sometimes the fit to the style format feels slightly forced, particularly in the chapters relating to neural networks, but in the Data-Centric chapter I learnt how to implement spreadsheet-like functionality in Python, which was interesting.

I’ve been programming for about 40 years, but as a physical scientist analysing data or trying out numerical models rather than as a professional developer. Exercises brings together many bits and pieces of things I’ve learnt, often in the context of different languages. For a while I’ve had the feeling that I didn’t need to learn new languages; I needed to learn how to apply new techniques in my favoured language (Python), and this book does exactly that.

Once again I was bemused to see Python’s “gentleman’s agreement” approach to certain matters. By convention, methods of a class whose names start with an underscore are considered private, but this isn’t enforced, so if you really want to use a “private” method you can just go ahead. Similarly, many object-oriented languages support a “this” keyword by which the members of a class refer to their own object. Python uses “self”, but only by convention: you can call “self” “me” or whatever other name you please. The style format provides a nice way of demonstrating a feature of Python in a non-trivial but minimal functioning manner.
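
In that spirit, here is a minimal sketch of my own (not an example from the book) showing both conventions at once:

class WordCounter:
    def __init__(this, words):
        # "self" is only a convention; any name for the first parameter,
        # here "this", works just as well
        this._counts = {}
        for word in words:
            this._counts[word] = this._counts.get(word, 0) + 1

    def _top(this, n):
        # the leading underscore marks this method as private, by convention only
        return sorted(this._counts.items(), key=lambda kv: kv[1], reverse=True)[:n]

counter = WordCounter(["the", "cat", "sat", "on", "the", "mat"])
print(counter._top(2))  # nothing stops us calling the "private" method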

It is somewhat chastening to discover that many of the styles in this book had their abstract origins in the 1960s, shortly before I was born; entered experimental languages such as Smalltalk in the seventies, when I would have read about them in computer magazines; and became mainstream in the eighties and nineties in languages like C++, Java and Python, not long after the start of my programming career. Essentially, most of the action in this book has taken place during my lifetime! In physics we are used to the figures in our eponymous laws (Newton, Maxwell etc.) being very long dead. In computing the same does not apply.

What I take away from Exercises is that, to a fair degree, modern programming languages can be used to implement a wide range of the ideas generated in computer science over the last 50 or so years, so learning new languages is not the highest priority in improving your skill as a programmer. There is a benefit to learning new techniques in a language with which you are familiar. Clearly some languages are designed heavily to support a certain style, Haskell and functional programming for example, but I found it easier to understand monads explained in the context of Python than in Haskell, where everything was alien.
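
For example, a “Maybe”-style monad, which short-circuits a chain of operations when a value is missing, can be sketched in a few lines of Python (my own illustration, not code from the book):

class Maybe:
    def __init__(self, value):
        self.value = value

    def bind(self, func):
        # apply func only if we hold a real value; otherwise propagate the None
        if self.value is None:
            return self
        return Maybe(func(self.value))

print(Maybe("42").bind(int).bind(lambda x: x * 2).value)  # 84
print(Maybe(None).bind(int).value)  # None, and no exception raised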

Exercises is surprisingly readable: the programs are well documented, and Lopes’ text is short but clear, with references to further reading. It stands alongside Seven Databases in Seven Weeks by Eric Redmond and Jim R. Wilson as a book that I will rave about and recommend to everyone!

Unit testing in Python using the unittest module

The aim of this blog post is to capture some simple “recipes” on testing code in Python that I can return to in the future. I thought it would also be worth sharing some of my thinking around testing more widely. The code in this GitHub gist illustrates the testing features I mention below.

My journey with more formal code testing started about 10 years ago when I was programming in Matlab. It only really picked up a couple of years later, when I moved to work at a software startup, coding in Python. I’ve read a couple of books on testing (BDD in Action by John Ferguson Smart and Test-Driven Development with Python by Harry J.W. Percival) as well as Working Effectively with Legacy Code by Michael C. Feathers, which talks quite a lot about testing. I wrote a blog post a number of years ago about testing in Python, when I had just embarked on the testing journey.

As it stands, I now use unit testing fairly regularly, although the test coverage of my code is not great.

Python has two built-in mechanisms for carrying out tests: the doctest and the unittest modules. Doctests are added to the docstrings of function definitions (see lines 27-51 in the gist above). They are run either by calling doctest.testmod() in a script, or by adding doctest to a Python commandline as shown below.

python -m doctest -v tests.py
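
For illustration, a doctest is a transcript of a REPL session embedded in a docstring; a minimal example for a hypothetical word_count function (not one of the functions in the gist) looks like this:

def word_count(text):
    """Count the words in a string.

    >>> word_count("the cat sat on the mat")
    6
    >>> word_count("")
    0
    """
    return len(text.split())

if __name__ == "__main__":
    import doctest
    doctest.testmod()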

Personally I’ve never used doctest – I don’t like the way the tests are scattered around the code rather than being in one place, and “replicating the REPL” seems a fragile process – but I include them here for completeness.

That leaves us with the unittest module. In Python it is not unusual to use a third-party testing library which runs on top of unittest; popular choices include nosetests and, more recently, pytest. These typically offer syntactic sugar, making tests slightly easier to write and read, as well as additional functionality for writing and running test suites. Unittest is based on the Java testing framework JUnit, and as such it inherits an object-oriented approach that demands tests be methods of a class derived from unittest.TestCase. This is not particularly Pythonic, hence the popularity of third-party libraries.

I’ve used nosetests for a while now, but its use is no longer recommended since it is no longer being developed; pytest is the new favoured third-party library. As a result of writing this blog post, though, I will probably stop using nosetests as a test runner and revert to pure unittest.

The core of unittest is calling the function under test with a set of parameters and checking that it returns the correct response. This is done using one of the assert* methods of the unittest.TestCase class; I nearly always end up using assertEqual. This is shown in minimal form in lines 67-76 above.
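
In minimal form, and using the same hypothetical word_count function as above, a test case looks like this:

import unittest

def word_count(text):
    return len(text.split())

class TestWordCount(unittest.TestCase):
    def test_simple_sentence(self):
        # call the function under test and check the response
        self.assertEqual(word_count("the cat sat on the mat"), 6)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

if __name__ == "__main__":
    unittest.main()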

With data science work we often have a list of quite similar tests to run: calling the same function with a list of arguments and checking the response against the expected value. Writing a function for each test case is a bit laborious, but unittest has a couple of features to help with this, both illustrated in the sketch after the list:

  • subTest puts all the test cases into a single test function and executes them all, reporting only those that fail (see lines 82-90). This is a compact approach, although the reporting is less verbose. Note that nosetests does not run subTest correctly, it being a feature of unittest only introduced in Python 3.4 (2014);
  • alternatively, we can use a functional programming trick to programmatically generate test functions and add them to the unittest.TestCase class we have derived, as shown on lines 105-116.
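
The following sketch, again using the hypothetical word_count function, shows both approaches:

import unittest

def word_count(text):
    return len(text.split())

CASES = [("the cat sat", 3), ("hello", 1), ("", 0)]

class TestWithSubTest(unittest.TestCase):
    def test_word_count(self):
        # one test method covers many cases; only failing cases are reported
        for text, expected in CASES:
            with self.subTest(text=text):
                self.assertEqual(word_count(text), expected)

class TestGenerated(unittest.TestCase):
    pass

def make_test(text, expected):
    def test(self):
        self.assertEqual(word_count(text), expected)
    return test

# programmatically attach one test method per case to the derived TestCase class
for i, (text, expected) in enumerate(CASES):
    setattr(TestGenerated, "test_case_{}".format(i), make_test(text, expected))

if __name__ == "__main__":
    unittest.main()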

Sometimes you write tests that you don’t always want to run, either because they are slow, or because you used them to address a particular problem and now want to keep them for documentation purposes rather than to run. Decorators in unittest are used to skip tests; @unittest.skip() is the simplest of these, an “opt-out”.
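
A minimal sketch of the skip decorators (unittest also provides skipIf and skipUnless for conditional skipping):

import sys
import unittest

class TestSkipping(unittest.TestCase):
    @unittest.skip("slow; kept for documentation purposes")
    def test_full_run(self):
        ...

    @unittest.skipIf(sys.platform.startswith("win"), "POSIX paths only")
    def test_posix_paths(self):
        ...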

Once you’ve written your tests you need to run them. I liked using nosetests for this: run it in a directory and it would trundle off, find any files that looked like they contained tests, and run them, reporting back on the results.

Unittest has some test discovery functionality which I haven’t yet explored; the simplest way of invoking it is simply to run the script file using:

python tests.py -v

The -v flag indicates that output should be “verbose”: the name of each test is shown with a pass/fail status, along with debug output if a test fails. By default unittest shows print messages from the test functions and the code under test on the console, as well as logging messages, all of which can confuse test output. These can be suppressed by running tests with the -b flag at the commandline or by setting the buffer argument to True in the call to unittest.main(). Logging messages can be suppressed by adding a NullHandler, as shown in the gist above on lines 118-119.
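
A sketch of both suppression mechanisms, assuming the code under test logs to a hypothetical logger called "mypackage":

import logging
import unittest

# a NullHandler swallows logging messages that would otherwise reach the console
logging.getLogger("mypackage").addHandler(logging.NullHandler())

class TestQuietly(unittest.TestCase):
    def test_noisy_function(self):
        print("this print output is hidden unless the test fails")
        self.assertTrue(True)

if __name__ == "__main__":
    # buffer=True is the equivalent of the -b commandline flag
    unittest.main(buffer=True)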

The only functionality I’ve used in nosetests that I can’t replicate in pure unittest is re-running only those tests that failed. This limitation could be worked around by using the -k commandline flag together with a naming convention to track those tests still failing.

Not covered in this blog post are the setUp and tearDown methods, which can be run before and after each test method.

I hope you found this blog post useful; I found writing it helpful in clarifying my thoughts, and I now have a single point of reference for the future.

Type annotations in Python, an adventure with Visual Studio Code and Pylance

I’ve been a Python programmer pretty much full time for the last 7 or 8 years, so I keep an eye out for new tools to help me with this. I’ve been using Visual Studio Code for a while now, and I really like it. Microsoft have just announced Pylance, the new language server for Python in Visual Studio Code.

The language server provides language-sensitive help such as spotting syntax errors, providing function definitions and so forth. Pylance is based on the Pyright type-checking engine. Python is a dynamically typed language but has recently started to support type annotations. Dynamic typing means you don’t tell the interpreter what “type” a variable is (int, string and so forth); you just use it as such. This contrasts with statically typed languages, where you define a type for every variable and function as you write your code. Type annotations are a halfway house: they are not used by the Python interpreter, but they can be used by tools like Pylance to check code, making it more likely to run correctly on the first go.

Pylance provides a range of “Intellisense” code improvement features, as well as type annotation based checks (which can be switched off).

I was interested in the type annotation checking functionality, since one of the pleasures of working with statically typed languages is that once you’ve satisfied the compiler that all of the types are right, your program has a better chance of running correctly than one in a dynamically typed language.

I will use the write_dictionary function in my little ihutilities library as an example here; this function is defined in the file io_utils.py. The appropriate type annotation for write_dictionary is:

from typing import Any, Dict, List, Optional

def write_dictionary(filename: str, data: List[Dict[str, Any]],
                     append: Optional[bool] = True, delimiter: Optional[str] = ",") -> None:

Essentially each parameter is followed by a colon and then a type (e.g. str). Certain types are imported from the typing library (Any, List, Optional and Dict in this instance), and we supply the types of the elements of the list or dictionary. The Any type allows any type. Optional[X] marks a value that may also be None; here it is used on the optional parameters which have default values. The return type is put at the end, after the ->. In a *.pyi file, described below, the function body is replaced with an ellipsis (…).

Actually the filename type hint shouldn’t be str, but I can’t get the approved type of Union[str, bytes, os.PathLike] to work with Pylance at the moment.

As an aside, Pylance spotted that two imports in the io_utils.py library were unused. Once I’d applied the type annotation to the function definition, it inferred the types of variables in the code and highlighted where there might be issues. A recurring theme was that I often returned a string or None from a function; Pylance indicated this would cause a problem if I tried to measure the length of None.
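
A constructed example of the kind of code Pylance flagged (find_title is a hypothetical function, not one from ihutilities):

from typing import Optional

def find_title(page: str) -> Optional[str]:
    # returns None when the page has no title line
    if page.startswith("#"):
        return page[1:].strip()
    return None

title = find_title("# A heading")
print(len(title))  # Pylance warns here: find_title may return None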

There are a number of different ways of providing typing information, depending on your preference and whether you are looking at your own code or at a third-party library:

  1. Types provided at definition in the source module – this is the simplest method; you just replace the function def line in the source module file with the type-annotated one;
  2. Types provided in the source module by use of *.pyi files – you can also put the type-annotated function definition in a *.pyi file alongside the original file in the source module, in the manner of a C header file (a sketch appears after this list). The *.pyi file needs to sit in the same directory as its *.py sibling, and its definitions take precedence over those in the *.py file. The reason for using this route is that it does not bring incompatible syntax into the *.py files – non-compliant interpreters simply ignore *.pyi files – but it does clutter up your filespace, and there is a risk of the *.py and *.pyi files becoming inconsistent;
  3. Stub files added to the destination project – if you import write_dictionary into a project, Pylance will highlight that it cannot find a stub file for ihutilities and will offer to create one. This creates a `typings` subdirectory alongside the file on which this fix was executed, containing a subdirectory called `ihutilities` in which there are files mirroring those in the ihutilities package but with the *.pyi extension, i.e. __init__.pyi, io_utils.pyi, etc., which you can modify appropriately;
  4. Types provided by stub-only packages – PEP-0561 describes a fourth route, which is to load the type annotations from a separate, stub-only, package;
  5. Types provided by Typeshed – Pyright uses Typeshed for annotations for built-in and standard libraries, as well as some popular third-party libraries.
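
As a sketch of route 2, the io_utils.pyi companion file would contain little more than the annotated signature of write_dictionary, with the function body replaced by an ellipsis:

from typing import Any, Dict, List, Optional

def write_dictionary(filename: str, data: List[Dict[str, Any]],
                     append: Optional[bool] = True, delimiter: Optional[str] = ",") -> None: ...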

Type annotations were introduced in Python 3.5, in 2015, so they are a relatively new language feature. Pyright is a little over a year old, and Pylance is a few days old. Unsurprisingly, documentation in this area is relatively undeveloped. I found myself looking at the PEPs (Python Enhancement Proposals) as often as not to understand what was going on. If you want to see a list of relevant PEPs, there is one in the Pyright README.md; I even added one myself.

Pylance is a definite improvement on the old Python language server, which was itself more than adequate. I am currently undecided about type annotations: the combination of Pylance and type annotations caught some problems in my code which would only have come to light in certain runtime circumstances, but annotations are a bit of an overhead, which I suspect I would only accept for frequently used library routines and for core code which gets run a lot and whose failures are noticeable to others. I might start by adding some *.pyi files to my active projects.