This is a review of “Adapt: Why Success Always Starts With Failure” by Tim Harford, which puts forward the thesis that “trial and error” is the only way for complex endeavours to succeed.
Opening up to trial and error is divided into three tasks:
- Providing scope for variation in what you do;
- Establishing whether or not a variant has been successful;
- Making sure that you have systems in place to cope with failure.
Each of these tasks is illustrated with a wide range of real-life examples. The Iraq War highlights the difficulty of running “trial and error” inside command structures designed to take in information, channel it to the top of the organisation, and send decisions back down from the top to the bottom. Donald Rumsfeld, apparently, would not refer to the “insurgents” as “insurgents”, so hobbling the US’s ability to fight an “insurgency”.
Alongside these major case studies are smaller ones, such as Jamie Oliver’s school dinners: the study showed that feeding children healthy food at primary level led to measurably better outcomes in education and attendance than in comparable groups outside the scheme.
There is also a section on using a carbon tax to address anthropogenic climate change; this fits in as a way of making selection possible by providing a simple measure of “success”. Harford is scathing about the “Merton Rule”, which demands that new buildings above a certain size generate 10% of their electricity onsite by renewable means. As Harford puts it, this rewards installing capacity rather than demonstrating actual renewable generation, which has led to the use of dual-fuel systems (nominally able to burn renewable fuel) that are ultimately run only on non-renewables, so providing no benefit at all.
The Piper Alpha and Three Mile Island accidents are offered as examples of the importance of being able to fail safely: Piper Alpha didn’t, while Three Mile Island arguably just about did. This is linked to failings in the financial system, where large organisations such as Lehman Brothers failed in a matter of hours, with administrators scrabbling frantically to come up with a controlled-landing plan. That is failure at large scale, but there is also coping with failure at the personal scale: Harford uses “Deal or No Deal” as a model system in which contestants can “lose”, which changes their estimation of risk for subsequent play for the worse.
One issue with “trial and error” is that the proponents of any method are often so convinced of its value that they feel it immoral to subject anyone to an “inferior” alternative in order to conduct a trial. This is highlighted with a story about Archie Cochrane, pioneer of the randomised controlled trial in medical studies. He had been running a study on coronary care, comparing home-based care to hospital care. This had met with some opposition, with medics insisting that the home-based arm of the trial was unethical because it was bound to be inferior. When results started to come in, one arm of the trial did indeed turn out to be inferior. Cochrane misled his colleagues into believing it was the home-based arm; they demanded that it be closed down, but fell rather silent when he revealed that it was in fact the hospital-based arm that was inferior!
Harford also discusses funding for research, in particular the difficulty of valuing blue-skies research when the outcomes are so uncertain, highlighting the success of the Howard Hughes Medical Institute, which funds speculative biomedical research in the US. He goes on to say that prizes offer a way out of this impasse. Using the Longitude Prize as an example, his presentation plays up the friction between Harrison and the Board of Longitude. The Académie des Sciences also ran prizes, but until recently the method had been out of favour for approaching 200 years. The recent revival includes the DARPA challenges for self-driving cars, the Ansari X Prize, the Bill and Melinda Gates Foundation prize for vaccines, and the Netflix Prize for film recommendation. These have been successful; however, it is difficult to see them finding more general favour in the academic community, since the funding is uncertain and arrives only after researchers have expended resources, rather than before the work is done.
From a practical point of view, “trial and error” happens in the private sector, if not within companies then between them. In the voluntary sector it has taken some hold; for me, some of the more compelling examples came from the “randomistas” studying the effectiveness of aid programmes. In the public sector “trial and error” is more difficult: there is less scope for feedback on the success of a trial, since you can’t meaningfully count customers through the door or profits made, so proxy measures are needed. Furthermore, the appearance of failure carries a high price in the political sphere. This is not to say it shouldn’t happen, simply that “trial and error” faces particular challenges in this area.
I like the central thesis of the book; it fits with my training as a scientist. My field allows for more direct experimentation than a randomised trial, but the principle is the same. It also has pleasing parallels with biological evolution, which Harford draws explicitly. The book is well referenced; in fact I hit the end unexpectedly as I was reading on a Kindle and couldn’t “see” the length of the endnotes!