Author's posts
Jul 21 2022
Book review: Data Mesh by Zhamak Dehghani
This book, Data Mesh: Delivering Data-Driven Value at Scale by Zhamak Dehghani, essentially covers what I have been working on for the last six months or so. It is therefore highly relevant, but I perhaps have to be slightly cautious in what I write because of commercial confidentiality.
The data mesh is a new design for handling data within an organisation. It has been developed over the last three or four years, with Dehghani at the Thoughtworks consultancy at its core. Given its recency there are no data mesh products on the market, so you are left to build your own from the components available.
To a large degree the data mesh is a conceptual and organisational shift rather than a technical one: all the technical component parts of a data mesh are already available, bar the programmatic glue to hold the whole thing together.
Data Mesh the book is divided into five parts: the first describes what a data mesh is in fairly abstract terms, the second explains why one might need a data mesh, and the third and fourth cover how to design the architecture of the mesh itself and of the data products that make it up. The final part, “How to get started”, is about making it happen in your organisation.
Dehghani talks in terms of companies having established systems for operational data (data required to serve customers and keep the business running, such as billing information and the state of bank accounts); the data mesh is directed at analytical data, which is derived from the operational data. She uses a fictional company, Daff, Inc., which sounds an awful lot like Spotify, to illustrate these points. Analytical data is used to drive machine learning recommender systems, for example, and a better understanding of business, customers and operations.
The legacy data systems Data Mesh describes are data warehouses and data lakes where data is managed by a central team. The core issue this system brings is one of scalability: as the number of data sets grows the size of the central team grows, and the responsiveness of the system drops.
The data mesh is a distributed solution to this centralised system. Dehghani defines the data mesh in terms of four principles, listed in order of importance:
- Domain Ownership – analytical data is owned by the domains that generate it rather than by a centralised data team;
- Data as a product – analytical data is treated as a product, with the associated management, discoverability, quality standards and so forth around it. Data products are self-contained entities in their own right – in theory you can stand up the infrastructure to deliver a single data product all by itself;
- Self-serve data platform – a self-serve data platform is introduced which makes the process of domain ownership of data products easier, delivering the self-contained infrastructure and services that the data product defines;
- Federated computational governance – policies such as access control, data retention and encryption requirements, and actions such as the “right to be forgotten”, are determined centrally by a governance board but are stored, and executed, in machine-readable form by data products.
For me the core idea is that of a swarm of self-contained data products which are all independent but, by virtue of simple behaviours and some mesh-spanning services (such as a data catalogue), make a whole that is greater than the sum of its parts. A parallel is drawn here with domain-driven design and microservices, on which the data mesh is modelled.
I found the parts on designing the data mesh platform and data products most interesting since this is the point I am at in my work. Dehghani breaks the data mesh down into three “planes”: the infrastructure utility plane, the data product experience plane, and the mesh experience plane (this is where the data catalogue lives).
We spent some time worrying over whether it was appropriate to include data processing functionality in our data mesh – Dehghani makes it clear that this functionality is in scope, arguing that the benefit of the data product orientation is that only a small number of data pipelines are managed together, rather than hundreds or possibly thousands in a centralised scheme.
I have been spending my time writing code for what Dehghani describes as the “sidecar”, common code that sits inside the data product to provide standard functionality. In terms of useful new ideas, I have been worrying about versioning of data schemas and attributes – Dehghani proposes that “bitemporality” is what is required here (see Martin Fowler’s blog post here for an explanation). Essentially, bitemporality means recording the time at which schemas and attributes were changed, as well as the time at which data was provided and the time at which it was processed. This way one can always recreate a processing step simply by checking which set of metadata and data were in play at the time (bar data having been deleted by a data retention policy).
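As a concrete illustration, here is a minimal sketch in Python – my own, not code from the book – of what bitemporal attribute records and an “as of” query might look like:

```python
# A sketch of bitemporal records: each fact carries both the domain time it
# applies to and the time it was recorded, so a past processing step can be
# replayed against the data and metadata that were in play at the time.
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional


@dataclass(frozen=True)
class BitemporalAttribute:
    name: str
    value: str
    actual_time: datetime  # when the value became true in the domain
    record_time: datetime  # when our system recorded it


def as_of(records: List[BitemporalAttribute], name: str,
          actual: datetime, recorded: datetime) -> Optional[BitemporalAttribute]:
    """Return the latest value of `name` visible to a process running at
    `recorded`, asking about domain time `actual`."""
    candidates = [
        r for r in records
        if r.name == name and r.actual_time <= actual and r.record_time <= recorded
    ]
    if not candidates:
        return None
    return max(candidates, key=lambda r: (r.actual_time, r.record_time))
```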
Data Mesh also encouraged me to decouple my data catalogue from my data processing, so that a data product can act in a self-contained way without depending on the data catalogue which serves the whole mesh and allows data to be discovered and understood.
Overall, Data Mesh was a good read for me, in large part because of its relevance to my current work, but it is also well written and presented. The lack of mention of specific technologies is rather refreshing and means the book will not go out of date within the next year or so. The first companies are still only a short distance into their data mesh journeys, so no doubt a book written in five years’ time will be a different one – but I am trying to solve a problem now!
Jul 10 2022
Book review: The Art of More by Michael Brooks
The Art of More by Michael Brooks is a history of mathematics written by someone whose mathematical ability is quite close to mine – that’s to say we did pretty well with maths at school but when we went to university we reached a level where we stopped understanding what we were doing and started just manipulating symbols according to a recipe.
The book proceeds chronologically, starting with the origins of counting some 20,000 years ago and finishing with information theory in the mid-20th century, with chapters covering arithmetic, geometry, algebra, calculus, logarithms, imaginary numbers, statistics and information theory.
It is probably chastening to modern mathematicians and scientists that much of the early work in maths on developing the number system, including zero and negative numbers, was driven by accounting and banking. Furthermore, much of the early innovation came from China, India and the Middle East with Western Europe only picking up the ideas of zero and negative numbers in around the 13th century.
Alongside the development of the number system, the ancient Greeks and others were developing geometry. The Greeks seemed to go off numbers when they discovered irrational numbers – those which cannot be expressed exactly as a ratio of integers! Geometry is essential for construction, surveying, navigation and mapmaking – sailors have often, through necessity, been competent mathematicians. Geometry also plays a part in the introduction of accurate perspective in drawings and paintings.
Complementing geometry is algebra, developed in the Arabic world. Our modern algebraic notation did not come into being until the 16th century with the introduction of the equals sign and what we would understand as equations. Prior to this problems were expressed either geometrically or rather verbosely.
Leading on from algebra was calculus – the maths of change. It started sometime around the beginning of the 17th century with Kepler calculating the volumes of wine barrels whilst he was preparing for his wedding. There was further work on infinitesimals through the century before Newton and Leibniz, who are seen as the inventors of calculus. I was struck here by how the key characters in the development of calculus – Newton, Leibniz, Fermat, Descartes and the Bernoullis – all sounded like deeply unpleasant men. Is this the result of the distance of history and the activities of various proponents for and against in the intervening centuries? Or were they really just deeply unpleasant men?
Doing a lot of calculation started to become a regular occurrence for sailors, as well as for people such as Kepler and Newton working on the orbits of various celestial bodies. John Napier’s invention of logarithms, and his tables of logarithms published in 1614, greatly simplified calculations: they converted multiplication and division into addition and subtraction of values looked up in the tables. The effort to create the tables was massive – it took Napier 20 years to prepare his first set, containing millions of values. Following Napier’s publication in 1614, logarithms reached their modern form (including natural logarithms) by 1630, and mechanical calculating devices like the slide rule were quickly invented. I grew up in a house with slide rules, although by the time I was old enough to appreciate them electronic calculators had taken over. Napier was also an early promoter of the modern decimal system. Logarithms also link to exponential growth, highly relevant as we still wait for the COVID pandemic to subside.
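To make that concrete, a small worked example of my own (not one from Napier’s tables):

```latex
\log_{10}(8 \times 16) = \log_{10} 8 + \log_{10} 16 \approx 0.9031 + 1.2041 = 2.1072,
\qquad 10^{2.1072} \approx 128
```

Two table look-ups and an addition replace the multiplication.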
Historically, the next area of maths is the invention of imaginary numbers – if you don’t know what these are then I’m not going to be able to explain them in the space of a paragraph! There is a link here with natural logarithms through Euler’s identity, which somewhat ridiculously manages to link e, pi and i in one really short equation. I was not previously familiar with Charles Steinmetz, who introduced complex numbers into the analysis of electrical circuits responding to alternating currents – it is a very elegant way of handling the problem and a method I used a lot at university. Largely when we talk about complex numbers we are discussing the addition of i, the square root of -1, to our calculations. But there are additionally quaternions, invented by William Hamilton, which add three imaginary units – i, j and k – to the real numbers; the limit is octonions, a system of seven imaginary units and the real numbers. I am curious as to why we cannot have more than seven flavours of imaginary unit.
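For the record, the really short equation in question is Euler’s identity:

```latex
e^{i\pi} + 1 = 0
```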
Statistics is my area of mathematics – I’m a member of the Royal Statistical Society. I think the thing I learned from this chapter was that the word "statistics" has its origins in German, meaning "facts about the state". I quite liked Brooks’ description of p-values, which seemed particularly clear to me. Brooks highlights some of the sordid eugenicist history of statistics, as well as the more enlightening work of Florence Nightingale and others.
The book finishes with a chapter on information theory, largely based on the work of Claude Shannon but with roots in the work of Leibniz and George Boole. George Boole invented his Boolean logic in an attempt to understand the mind in the mid-19th century but his work on "binary" logic was neglected for 70 or so years until it was revived by Shannon and other pioneers of early computing.
This is a fairly informal history of mathematics, I found it very readable but it includes a number of equations which might put off the completely non-mathematical.
Jun 12 2022
Book review: Play it Loud by Brad Tolinski and Alan Di Perna
I took up the guitar a few years ago, and play in the manner described by Kurt Vonnegut, that’s to say with little skill but expanded horizons. I read The Birth of Loud by Ian S. Port a while back, and Play it Loud by Brad Tolinski and Alan Di Perna is in a similar vein: a book about the electric guitar and the music that came from it. Whilst The Birth of Loud focused on Leo Fender and Les Paul and a period from the early fifties to the mid-sixties, Play it Loud starts earlier, extends later and is broader in scope.
Play it Loud is divided into chapters which typically cover one or two people and one or two guitars, each illustrating a technical innovation or change in musical style. Broadly each chapter follows on from the previous one in time, taking us from the 1920s and thirties in the first chapter through to around 2015 by the end. It finishes with a timeline, which I liked.
The book starts with George Beauchamp in the 1920s and the first guitar pickups designed to pick up the vibration of the strings rather than the vibration of the guitar body. This followed the invention earlier in the century of the electronic valve amplifier and the paper cone speaker – both prerequisites for useful electric guitars. Guitars had been around for some time, and in the twenties guitar-based Hawaiian music was popular in the US; Hawaiian stringed music had its roots in instruments brought by Portuguese sailors in the 18th century. Beauchamp, with Rickenbacker, produced the first electric guitar based on this technology, the A-32 ‘Frying Pan’ of 1932 – a cast-aluminium lap-steel style guitar.
The next development was the Gibson ES-150 in 1936, with a bar pickup that sat under the strings rather than over them as in the Beauchamp pickup; ES stands for Electric Spanish – it was the first of its kind. The guitar was made popular by the endorsement of Charlie Christian, a jazz guitarist who at the time was considered better than Django Reinhardt and Les Paul, and who was to die of tuberculosis at the age of 25. This type of endorsement is a recurring theme: celebrated musicians’ endorsements are massively valuable to guitar companies.
By the early fifties a number of people had realised that the guitar body was largely a place to hang strings and pickups and no longer needed to be hollow – in an acoustic guitar the hollow chamber is the amplifier. Thus were born the Fender Telecaster, then the Stratocaster and, at Gibson, the "Les Paul". This is the period covered in The Birth of Loud. It is worth noting that Les Paul was one of a breed of musician-technicians, recurring later with Eddie Van Halen and Steve Vai in the late seventies and early eighties, who pushed forward the development of the guitar. I hadn’t realised that the very futuristic-looking Gibson Flying V and Explorer models were born in this period of the late fifties – they were unpopular then but saw a resurgence in the early eighties.
The new solid-body electric guitar, Fender’s Precision Bass and new amplifiers meant that by the early sixties an electric four-piece band could fill a hall with sound (previously this required a big band or an orchestra), and by the late sixties Jimi Hendrix could make rather more noise than that. At this point Tolinski and Di Perna highlight how the electric guitar fitted in with protest and the counter-culture – also citing Bob Dylan and his infamous switch to the electric guitar. His electric set at the Newport Folk Festival was so short because it had only been put together a few days earlier.
By the late sixties the quality of Fender and Gibson’s offerings was dropping, and players like Eric Clapton started looking for the discontinued Les Paul models. The drought in good-quality guitars was to extend for a while: from the mid-sixties, while Fender and Gibson were dropping in quality, Japan was producing a large number of cheap, low-quality guitars. In this environment an after-market parts market grew, with names we recognise today like Seymour Duncan, Jackson Charvel and Larry DiMarzio. Japan was later to produce high-quality guitars – Steve Vai chose Ibanez to make his signature model.
The book finishes with a chapter centred on Jack White of The White Stripes and his enthusiasm for very retro, and not highly regarded, guitars and amplifiers. This represents a thread running through the book: guitars are more than their technical components – the choice of guitar says something about what a player’s intentions are. So Eric Clapton took up the discontinued Les Paul to ape the earlier blues players. The punk and garage bands were trying to get away from those blues roots, and cheap, plastic guitars fitted that vibe. They were also trying to get away from the comfortable middle-class hobby guitarists (like me) who would happily spend a couple of thousand dollars on a signature or classic guitar because they could.
In common with my reading of The Birth of Loud I found myself googling for the guitars mentioned and thinking I should get one!
May 25 2022
A way of working: data science
I am about to take on a couple of data science students from Lancaster University for summer projects. From past experience, I always spend some time at the beginning of such projects explaining how I work, with the expectation that they will at least take some notice, if not repeat my methodology exactly. This methodology evolves slowly over time as I learn new things and my favoured technologies change.
Typically I develop on a Windows laptop, but I use the git-bash prompt as my shell for typing in commands – this is a Linux-like terminal which I adopted after working with developers who mainly used Linux, and also because I was familiar with the Unix-style commandline from before the time of Linux. You can do a lot from the commandline in data science – Data Science at the Command Line by Jeroen Janssens is an excellent introduction.
I use Docker containers a bit to spin up local versions of services which are difficult to run on Windows (things like Airflow and LinkedIn DataHub); some people develop entirely inside Docker containers to reduce dependency issues and make deployment of code easier.
I work pretty much entirely in Python for data processing and analysis although I generate CSV files which I load to Tableau for visualisation. I tend not to try complex processing in Tableau since I find the GUI inconvenient and confusing for such work. I use the Anaconda distribution of Python, originally because I liked that it came packaged with a load of useful libraries for data science and it handled virtual environments and installation of more tricky packages better than plain Python. It may be worth revisiting this decision. I have recently shifted my code to Python 3.9.
For a piece of work I will usually set up a Python project which can be “installed”. This blog post explains a standard structure for Python projects. I aim to use Python virtual environments on a per project basis but sometimes I fail. Typically, I will write Python modules that provide functions but also have a simple command line interface which takes two or three positional parameters. You can see this in action in the git repo here which I share as a template for myself and others!
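As a sketch of the sort of layout I mean (illustrative, not the exact contents of the template repo):

```
myproject/
├── setup.py             # or pyproject.toml – makes the package installable
├── myproject/
│   ├── __init__.py
│   ├── processing.py    # functions, plus a simple commandline interface
│   └── utilities.py     # shared helpers (logging, database access)
└── tests/
    └── test_processing.py
```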
To date I have picked up commandline arguments using sys.argv. I should probably use one of the libraries that make these commandline interfaces better – there is a blog post here which compares the built-in argparse library with click and docopt. I think I might check out click for future projects.
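For illustration, here is the difference in miniature – a hypothetical script of mine, first picking up positional arguments with sys.argv and then the click equivalent (assuming click is installed):

```python
# process.py – minimal sketch using sys.argv directly
import sys


def main(input_file: str, output_file: str) -> None:
    print(f"Processing {input_file} -> {output_file}")


if __name__ == "__main__":
    # Two positional parameters, picked up straight from sys.argv
    main(sys.argv[1], sys.argv[2])
```

```python
# process_click.py – the same interface using click
import click


@click.command()
@click.argument("input_file")
@click.argument("output_file")
def main(input_file: str, output_file: str) -> None:
    """Process INPUT_FILE and write the results to OUTPUT_FILE."""
    click.echo(f"Processing {input_file} -> {output_file}")


if __name__ == "__main__":
    main()
```

click generates help text and argument checking for free, which is most of what raw sys.argv handling lacks.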
As well as running commandline scripts, I use tests to develop analysis; besides being good software development practice, test runners make a convenient way to run arbitrary functions in a code base. I prefer to use the built-in unittest library but I’ve started using pytest for a recent project. I wrote a blog post about writing tests; since I wrote it I have learned about test mocks and pytest’s fixture functionality.
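A minimal sketch, of my own devising, of a fixture and a mock working together under pytest:

```python
# test_example.py – run with `pytest test_example.py`
from unittest.mock import MagicMock

import pytest


def total_value(rows):
    """Toy function under test: sum the 'value' field of each row."""
    return sum(row["value"] for row in rows)


def load_and_total(loader):
    """Function that would normally call, say, a database loader."""
    return total_value(loader())


@pytest.fixture
def sample_rows():
    """Small, fixed input so the tests are fast and deterministic."""
    return [{"id": 1, "value": 10.0}, {"id": 2, "value": 5.0}]


def test_total_value(sample_rows):
    assert total_value(sample_rows) == 15.0


def test_load_and_total(sample_rows):
    # The mock stands in for an expensive loader (e.g. a database query)
    loader = MagicMock(return_value=sample_rows)
    assert load_and_total(loader) == 15.0
    loader.assert_called_once()
```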
I have a library of general utilities for interacting with databases, setting up logging and writing dictionaries, which I wrote because I found I was doing these things repeatedly, and making my own library allowed me to forget some of the boilerplate code required to do them. The key utilities are included with the repo attached to this blog.
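As an example of the sort of boilerplate-hiding helper I mean (an illustrative sketch – the names are mine, not those of my actual library):

```python
# utilities.py – a helper that hides the logging setup boilerplate
import logging
import sys


def setup_logging(name: str, level: int = logging.INFO) -> logging.Logger:
    """Return a logger writing timestamped messages to stderr."""
    logger = logging.getLogger(name)
    logger.setLevel(level)
    if not logger.handlers:  # avoid adding duplicate handlers on repeat calls
        handler = logging.StreamHandler(sys.stderr)
        handler.setFormatter(
            logging.Formatter("%(asctime)s %(name)s %(levelname)s: %(message)s")
        )
        logger.addHandler(handler)
    return logger
```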
I’ve been using Visual Code as my editor for some time now, I prefer not to use full blown IDEs because I find they present more functionality than I can cope with. I think this is as a result of coding in Java using Eclipse and C# .net in Visual Studio. In any case Visual Code starts as a nice enough code editor but has been sneaking in more IDE functionality via extensions.
The extensions I use heavily in Visual Studio Code are Python and Pylance – the Pylance language server provides type-hinting support. I wrote about type-hinting in Python here. I also use Rainbow CSV for when I am editing or viewing CSV files.
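A small illustration (mine, not from the linked post) of the kind of type hints Pylance checks as you write:

```python
from typing import Optional


def mean(values: list[float]) -> Optional[float]:
    """Return the arithmetic mean, or None for an empty list."""
    if not values:
        return None
    return sum(values) / len(values)


mean([1.0, 2.0, 3.0])  # fine
# mean("not a list")   # Pylance flags this as a type error before runtime
```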
I could use Visual Studio Code for accessing git, my preferred source control system, but instead I use GitKraken, which has a very nice GUI. Since I am usually working by myself my git usage is very simple: I typically have one branch onto which I make many small commits. I have recently started working with a team where I am using feature branches which get merged by pull requests – this was a bit of a culture shock.
As a result of working with other people on a new project I have started using some technologies which I will just mention here. I run the black formatter, as well as pylint and flake8. Black just reformats my code files when I save them and can largely be ignored. Flake8 is fairly easy to satisfy, although I spent a lot of time addressing line-length issues. Pylint generates quite a few warnings, which I attend to but sometimes ignore.
I have also started using Makefiles and Azure DevOps pipelines for running common tasks on my code (tests, cleanup, setting up infrastructure, linting).
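A sketch of the kind of Makefile I mean (my own illustration – note that recipe lines must be indented with tabs):

```make
.PHONY: test lint format clean

# Run the test suite
test:
	pytest tests/

# Run the linters
lint:
	flake8 src/ tests/
	pylint src/

# Reformat code in place
format:
	black src/ tests/

# Remove build artefacts and caches
clean:
	rm -rf build/ dist/ *.egg-info .pytest_cache
```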
Outside technology, I have a very long-established method of working using a monthly Word document as a notebook, which I describe here. I tend to prefix file names with ISO 8601 format dates (2022-05-22); this means that if I create a Tableau workbook or an Excel worksheet I can link it easily to what I was writing in my notebook, and to the status of the appropriate git repo at that point in time.
I’ve incorporated all the code related elements mentioned above in this ways-of-working-data-science git repository.
May 21 2022
Book review: Pale Rider – The Spanish Flu of 1918 by Laura Spinney
Pale Rider: The Spanish Flu and How it Changed the World by Laura Spinney is obviously very topical at the moment. It was published in June 2017, which makes its relevance all the more striking than if it had been published in the last two years.
The book starts with an overall chronology of the 1918 flu pandemic before returning to specific themes, generally through the medium of personal accounts or individual incidents. It is worth highlighting that the "Spanish" label is highly misleading: the 1918 flu pandemic arose either in the American Midwest, in Northern France on the battlefields of the First World War or, a remote possibility, in China. Spinney discusses the link with viruses found in wildlife and livestock.
Initial estimates of the death toll of the 1918 flu pandemic were around 25 million, but these have recently been revised upwards to as many as 100 million. Furthermore, the 1918 flu pandemic largely took place over September to December 1918, with smaller waves in the spring of 1918 and in the following spring, and with some variation by geography as to exactly when the worst effects were felt. So the 1918 flu pandemic was a shorter, more devastating pandemic than the 2020 covid pandemic (which has killed around 3 million of a much larger population). This was against the backdrop of the First World War, which killed more people in Europe than the pandemic did, although Europe was the exception: on every other continent the pandemic killed more than the war.
The context for the 1918 flu pandemic was different too: the 19th century had been one of epidemics driven by industrialisation and the associated urbanisation. Amongst those were flu pandemics in 1830 and 1890. The 1890 "Russian" flu pandemic was the first to be measured as a pandemic. The 1918 pandemic came at a time when the germ theory of disease was being developed and the value of hygiene was understood. However, viral diseases were not well understood: it was not until the 1930s that the mechanism of transmission for flu was discovered, with the first flu vaccines coming in 1936, and not until the 1950s that it was confirmed as a viral disease. The symptoms of this flu pandemic were quite different from those of the covid pandemic, with a mahogany colouration forming on the cheekbones that spread progressively until death, teeth and hair falling out, and delirium (leading to suicide).
The health measures taken to address the 1918 pandemic were not that different from those used recently, with sanitary cordons and quarantine used extensively. In Spain religious ceremonies were exempt from restrictions, leading to more cases. Closing schools was argued over, with those in favour of keeping them open seeing schools as better for monitoring outbreaks and communicating health information, and as offering better sanitary conditions, and food, to children. Starvation was a problem, with supply chains affected from start to finish.
It is interesting to see the varying responses of Australia and New Zealand between the 1918 pandemic and the covid pandemic: Australia isolated itself in 1918, as it did in the covid pandemic; New Zealand isolated during the covid pandemic but in 1918 it did not. The disproportionate impacts of the 1918 pandemic were also in evidence, with recent Italian immigrants to the US, India, and remote Native American communities in Alaska very badly affected, with mortality rates of up to 40%.
The pandemic had arguable impacts on world affairs: Woodrow Wilson had a serious stroke, probably as a result of a bout of flu, and was not present to limit the war reparations against Germany, and the independence movement in India grew. The flu hit people in their twenties and thirties quite heavily, leaving behind a generation of orphans – their treatment was handled with new legislation in France and England. There was a post-pandemic (and post-war) fertility boom.
Despite the enormous death toll, even compared to that of the First World War, the 1918 pandemic appears to have had little impact on art and literature, although scholars will look for signs of post-viral fatigue in paintings. Spinney argues this is because insufficient time has passed, noting that there are approaching 80,000 books on the First World War but only 400 on the 1918 pandemic – though this number is growing rapidly. It has made me wonder about the lost siblings in my grandparents’ generation who were never spoken of – and similarly the absence of stories from fighting-age men of the Second World War. Essentially these stories were too painful to handle at a human, personal level, and the culture in the UK at least would not have been to speak about them. So it is left to historians and the passage of time for the stories to come to light.
A second factor, proposed by psychologists, is that pandemics lack a good story line with a clear beginning and end and a selection of heroes – unlike the First World War.
Pale Rider is very readable, though it is difficult to use the word "enjoy" of a book which tells of the deaths of 100 million people. I was struck by how relevant the 1918 flu pandemic is to our current situation, with the disparate impacts depending on country and social conditions, the debates over school closures, the dedication of medical staff, the measures taken to address the pandemic and the debates over compliance with public health measures. The covid pandemic is different – it has played out over a longer period, it has a far lower death toll, our medical knowledge is much improved and our world is much more connected – but nevertheless Pale Rider feels very prescient.