
Economics: The physics of money?

Today I’m off visiting the economists. This is a rather different sort of visit, since I haven’t found that many to follow on Twitter; instead I must rely on their writings.

I’ve been reading Tim Harford’s “The Undercover Economist” which is the main topic of this post, in the past I’ve also read “Freakonomics” by Levitt and Dubner. Harford’s book is more about classical economics whilst “Freakonomics” is more about the application of quantitative methods to the analysis of social data. This is happy territory for a physicist such as myself: there are numbers, there are graphs and there are mathematical models.

David Ricardo (1772-1823) pops up a few times; it would seem fair to describe him as the Newton of economics.

I learnt a whole bunch of things from Tim Harford’s book, including what shops are up to: working out how to persuade everyone to pay as much as they are willing to pay, by means such as “Value” and “Finest” ranges whose price differences don’t reflect their cost differences; similar pricing regimes are found in fancy coffee. In a way income tax bypasses this, replacing willingness to pay with ability to pay – I’m sure shops would love to be able to do this! Scarcity power allows a company to charge more for its goods or services, and a company’s profits are an indication that this might be happening.

Another important concept is market “efficiency”: perfect efficiency is achieved when no-one can be made better off without someone else losing out. This is not the same as fairness: in theory a properly operating market should be efficient, but not necessarily fair. Externalities are the things outside the market to which a monetary value needs to be attached in order for them to be included in the efficiency calculation; in the case of traffic this includes things like pollution and congestion. This sounds rather open-ended, since I imagine the costing of externalities can be hotly disputed.

There’s an interesting section on inside, or asymmetric, information, and how it prevents markets from operating properly. The two examples cited are second-hand car sales and health insurance. In the first case the seller knows the quality of the car he is selling whilst the buyer struggles to get this information. Under these circumstances the market struggles to operate efficiently because the buyer doesn’t know whether he is buying a ‘peach’ (a good car) or a ‘lemon’ (a bad car), which reduces the amount he is willing to pay – the seller struggles to find a mechanism to transmit trusted quality information to the buyer. Work on information asymmetry won the 2001 Nobel Prize for Economics for George Akerlof, Michael Spence and Joseph Stiglitz.

In the second case, health insurance, the buyer purportedly knows the risk they present whilst the seller doesn’t. This doesn’t quite ring true to me, though the observed behaviour in the US private healthcare system does seem to match the model. In a private insurance system the people who are well (and are likely to remain well) will not buy insurance, whilst those who believe themselves to be ill, or at serious risk of becoming ill, will be offered expensive insurance because there is not a large population of healthy buyers to support them. Harford recommends the Singapore model for health care, which has compulsory saving for health care costs, price controls and universal insurance for very high payouts. This gives the consumer some interest in making the most efficient use of the money they have available for health care.

You might recall the recent auctions of radio spectrum for mobile phone and other applications; this turns out to be a fraught process for the organiser. In the US and New Zealand the process went poorly, with the government receiving few bids and less cash than expected. In the UK the process went very well for the government, essentially through a well-designed auction system. The theoretical basis for such auctions is in game theory, with John von Neumann and John Nash important players in the field (both recognised as outstanding mathematicians).

Tim Harford did wind me up a bit in this book, repeatedly referring to the market as “the world of truth”, and taxes as “lies”. This is a straightforward bit of framing: that’s to say the language used means anyone arguing against him is automatically in the “arguing against the truth” camp irrespective of the validity of the arguments. The formulation that taxes represent information loss is rather more interesting and he seems to stick with this more often than not. In this instance I feel the “world of truth” is ever so slightly tongue in cheek, but in the real world free-markets are treated very much as a holy “world of truth” by some political factions with little regard to the downsides: such as a complete ignorance of fairness, the problems of inside information and the correct costing of externalities.

A not inconsiderable number of physicists end up doing something in finance or economics. As Tom Lehrer says in the preamble to “In Old Mexico”: “He soon became a specialist, specializing in diseases of the rich”. It turns out you get paid more if the numbers you’re fiddling with represent money, rather than the momentum of an atom. Looking at these descriptions of economic models, I can’t help thinking of toy physics models which assume no friction and are at equilibrium. These things are very useful when building understanding, but for practical applications they are inadequate; presumably more sophisticated economic models take these things into account. From a more physical point of view, it doesn’t seem unreasonable to model economics through concepts such as conservation (of cash) and equilibrium, but physics doesn’t have to concern itself with self-awareness – i.e. physical systems can’t act wilfully once given knowledge of a model of their behaviour. I guess this is where game theory comes in.

The interesting question is whether I should see economics as a science, like physics, which is used by politicians for their own ends, or whether I should see economists as being rather more on the inside. Economics as a whole seems to be tied up with political philosophy. Observing economists in the media, there seems to be a much wider range of what is considered possibly correct than you observe in scientific discussion.

Opinion polls and experimental errors

I thought I might make a short post about opinion polls, since there’s a lot of them about at the moment, but also because they provide an opportunity to explain experimental errors – of interest to most scientists.

I can’t claim great expertise in this area, physicists tend not to do a great deal of statistics unless you count statistical mechanics which is a different kettle of fish to opinion polling. Really you need a biologist or a consumer studies person. Physicists are all very familiar with experimental error, in a statistical sense rather than the “oh bollocks I just plugged my 110 volt device into a 240 volt power supply” or “I’ve dropped the delicate critical component of my experiment onto the unyielding floor of my lab” sense. 
There are two sorts of error in the statistical sense: “random error” and “systematic error”. Let’s imagine I’m measuring the height of a group of people, to make my measurement easier I’ve made them all stand in a small trench, whose depth I believe I know. I take measurements of the height of each person as best I can but some of them have poor posture and some of them have bouffant hair so getting a true measure of their height is a bit difficult: if I were to measure the same person ten times I’d come out with ten slightly different answers. This bit is the random error.

To find out everybody’s true height I also need to add the depth of the trench to each measurement, I may have made an error here though – perhaps a boiled sweet was stuck to the end of my ruler when I measured the depth of the trench. In this case my mistake is added to all of my other results and is called a systematic error. 

This leads to a technical usage of the words “precision” and “accuracy”. Reducing random error leads to better precision, reducing systematic error leads to better accuracy.
This relates to opinion polling: I want to know the result of the election in advance. One way to do this would be to get everyone who is going to vote to tell me their voting intentions beforehand; this would be fairly accurate, but utterly impractical. So I must resort to “sampling”: asking a subset of the total voting population how they are going to vote and then, by a cunning system of extrapolation, working out how everybody is going to vote. The size of the electorate is about 45 million; the size of a typical sampling poll is around 1,000. That’s to say one person in a poll represents 45,000 people in a real election.
To get this to work you need to know about the “demographics” of your sample and of the group you’re trying to measure. Demographics is stuff like age, sex, occupation, newspaper readership and so forth – all things that might influence the voting intentions of a group. Ideally you want the demographics of your sample to be the same as the demographics of the whole voting population; if they’re not the same you apply “weightings” to the results of your poll to adjust for the difference. You will, of course, try to get the right demographics in the sample, but people may not answer the phone, or you might struggle to find the right sort of person in the short time you have available. The problem is you don’t know for certain which demographic variables are important in determining the voting intentions of a person. This is a source of systematic error, and some embarrassment for pollsters.
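A toy example of how weighting works. The demographic groups and all the numbers below are invented for illustration, not taken from any real poll:

```python
# Toy demographic weighting: our imaginary sample over-represents the over-40s,
# so we re-weight each group to match its share of the electorate.

sample = [
    # (age group, share of electorate, share of sample, support for party A)
    ("under 40", 0.45, 0.30, 0.50),  # only 30% of sample, but 45% of electorate
    ("over 40",  0.55, 0.70, 0.35),
]

# Unweighted estimate: just average over the sample as collected.
raw = sum(s_share * support for _, _, s_share, support in sample)

# Weighted estimate: weight each group by its true share of the electorate.
weighted = sum(p_share * support for _, p_share, _, support in sample)

print(f"unweighted estimate: {raw:.1%}")      # biased towards the over-40s
print(f"weighted estimate:   {weighted:.1%}")
```

The two estimates differ by a couple of percentage points here, which is why getting the weightings wrong is such a potent source of systematic error.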
Although the voting intentions of the whole population may be very definite (and even that’s not likely to be the case), my sampling of that population is subject to random error. You can improve your random error by increasing the number of people you sample but the statistics are against you because the improvement in error goes as one over the square root of the sample size. That’s to say a sample which is 100 times bigger only gives you 10 times better precision. The systematic error arises from the weightings, problems with systematic errors are difficult to track down in polling as in science.
So after this lengthy preamble I come to the decoration in my post, a graph: this is a representation of a recent opinion poll result shown in the form of probability density distributions; the area under each curve (or part of each curve) indicates the probability that the voting intention lies in that range. The data shown are from the YouGov poll published on 27th April. The full report on the poll is here; you can find the weightings they applied on the back page of the report. The “margin of error”, of which you very occasionally hear talk, gives you a measure of the width of these distributions (I assumed 3% in this case, since I couldn’t find it in the report); the horizontal location of the middle of each peak tells you the most likely result for that party.

For the Conservatives I have indicated the position of the margin of error: the polling organisation believes that the result lies in the range indicated by the double-headed arrow with 95% probability. However, there is a 5% chance (1 in 20) that it lies outside this range. This poll shows that the Labour and Liberal Democrat votes are effectively too close to call, and the overlap with the Conservative peak indicates some chance that the Conservatives do not truly lead the other two parties. And this is without considering any systematic error. For an example of systematic error causing problems for pollsters, see this Wikipedia article on the Shy Tory Factor.
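For a simple random sample, the margin of error can be estimated from the textbook formula for a proportion (the 1.96 factor corresponds to 95% confidence). This is a sketch of that standard formula, not YouGov’s actual methodology, but it shows why my assumed 3% is about right for a party on a third of the vote in a poll of 1,000:

```python
from math import sqrt

def margin_of_error(p, n, z=1.96):
    """Half-width of a 95% confidence interval for an observed proportion p
    from a simple random sample of n respondents."""
    return z * sqrt(p * (1 - p) / n)

for p in (0.33, 0.28, 0.10):
    print(f"support {p:.0%}: margin of error \u00b1{margin_of_error(p, 1000):.1%}")
```

Note that the margin shrinks for parties with small shares: the formula depends on p as well as n.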

Actually for these data it isn’t quite as simple as I have presented, since a reduction in the percentage polled for one party must appear as an increase in the percentages polled for the other parties.

On top of all this, the first-past-the-post electoral system means that the overall result in terms of seats in parliament is not simply related to the percentage of votes cast.

Occupations of MPs

Ever alert to the possibility of finding some data to play with, I was interested in an article in the Times regarding the number of MPs with scientific backgrounds in parliament. First I found data on occupations in the population as a whole here (Office for National Statistics) and data on MPs here, published by parliament. I thought it would be interesting to compare the two sets of figures; this turns out to be rather difficult because they define occupations very differently, so I had to do a bit of playing about to get them into roughly comparable form.

This is what I came up with in the end:

It’s a “representation factor”; that’s to say I take the fraction of MPs in parliament having a particular occupation and divide it by the fraction of that occupation in the general population. If an occupation is over-represented in parliament then the number is bigger than one, and if it is under-represented then it’s smaller than one. It would seem barristers, journalists and career politicians are massively over-represented. Lecturers, civil servants and teachers are a little over-represented. Business people are about as expected, and doctors are under-represented (along with manual workers and white collar workers).
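The calculation itself is trivial. Here is a minimal sketch using invented placeholder fractions, not the actual ONS and House of Commons figures:

```python
# Representation factor: fraction of MPs with an occupation divided by
# that occupation's fraction of the general working population.
# All fractions below are made up for illustration only.

occupations = {
    # occupation: (fraction of MPs, fraction of working population)
    "barrister": (0.05, 0.0005),
    "teacher":   (0.08, 0.04),
    "doctor":    (0.01, 0.01),
    "manual":    (0.05, 0.30),
}

for name, (mp_frac, pop_frac) in occupations.items():
    factor = mp_frac / pop_frac
    if factor > 1:
        verdict = "over-represented"
    elif factor < 1:
        verdict = "under-represented"
    else:
        verdict = "about as expected"
    print(f"{name:10s} representation factor {factor:7.2f} ({verdict})")
```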
I think from all of this the figure on doctors is the most surprising. It does make you wonder how useful the outside interests of MPs are in guiding their deliberations, since most occupations are grossly under-represented. You shouldn’t really expect to see the House of Commons faithfully representing the overall working population, but I expected the balance amongst professionals to be a bit more uniform.
The House of Commons library document on “Social background of MPs” from which I got the MP occupation data is rather interesting, in particular the age profile (table 1) appears to be shifting upwards despite the greater youth of the party leaders. The educational background (table 7) is quite striking too.
One of the glories of the internet is that data monkeys like me can find tasty fruit to pick and consume.

Book review: Joseph Banks by Patrick O’Brian

Once again I venture into my own idiosyncratic version of the book review: more reading notes than review. This time I’m reading the biography of Joseph Banks by Patrick O’Brian. Joseph Banks has popped up regularly in my recent reading about the Royal Society and the Age of Wonder. He was on Captain Cook’s trip to Tahiti, and then went on to serve as President of the Royal Society for 42 years – the longest term of any President. The Inelegant Gardener has been reading about Kew and various plant hunters, and Sir Joseph crops up there too. Despite his many talents, there are relatively few biographies of Banks, and he is relatively unknown.

Sir Joseph was born into a wealthy family from Lincolnshire; he was educated at Harrow, Eton and then Oxford University. At some point in his school years he became passionately interested in botany, and whilst at Oxford he went to the lengths of recruiting a botany lecturer from Cambridge University to teach him. The lecturer was Daniel Solander, a very talented student of Carl Linnaeus, who would later accompany Banks on his trip around the world with Captain Cook; they would remain close friends until Solander’s death in 1782.

Sir Joseph’s first trip abroad was to Newfoundland and Labrador in 1766. The area had been ceded to Britain by France, but there was an international fleet of fishing boats operating in its waters. Banks made his trip as a guest of Constantine John Phipps on HMS Niger, which was sent to the area to keep an eye on things. It seems to have been fairly common for gentlemen to travel as guests on navy ships at the time: this was broadly the scheme by which Charles Darwin would later join HMS Beagle on his trip around the world.

1768-1771 finds Banks circumnavigating the world on Captain James Cook’s ship, HMS Endeavour, on Cook’s first such expedition. This voyage was funded by George III following an appeal from the Royal Society for a mission to Tahiti in order to observe the transit of Venus; Banks paid for the contingent of naturalists from his own funds. The stay in Tahiti is much written about, largely, I suspect, because they remained there some time. Following their stay in Tahiti, they continued on to New Zealand, which they sailed around rather thoroughly but seemed to land on infrequently as a result of hostile responses from the inhabitants. They then sailed along the east coast of Australia, stopping off on the way at various locations, most particularly Botany Bay. At the time the existence of Australia was somewhat uncertain in European minds. There’s a rather fine map of their course here, and Banks’ journals are available here.

Through the chapters on both these voyages, O’Brian makes heavy use of the diaries of Banks, quoting from them extensively and often between block quotes further quoting Banks’ own words. This may work well for those of a more historical bent, but I felt the need for more interpretation and context. It often feels that O’Brian is more interested in the boats than the botany.

The next episode is somewhat odd: Banks was planning a second trip around the world with Captain Cook but he never went. At almost the last minute he withdrew, on the grounds that the Admiralty would not provide adequate accommodation for him and his team of scientists. The odd thing is that, despite what appears a fractious falling out, Banks remained very good friends with both Cook and Lord Sandwich, First Lord of the Admiralty at the time. I wonder whether Banks, remembering the 50% mortality rate of his previous voyage with Cook, understandably got cold feet. As a consolation he went off to Iceland in 1772 for a little light botanising, where he scaled Hekla.

Despite recording an extensive journal and collecting a considerable number of anthropological, botanical and zoological specimens, as well as a large number of drawings by his naturalist team, Banks never published a full report of his Tahiti voyage. He showed the artefacts at his home in Soho Square and prepared a substantial manuscript, with many fine plates, but seems to have lost interest in publishing close to the end of the exercise. Throughout his life he produced relatively few publications; this may be a reflection of his dilettante nature: he was skilled in many areas but deeply expert in none.

Banks was elected to the Royal Society whilst on his world tour, and later became President for a 42-year term, until his death in 1820. He made some effort to improve the election procedures of the Society; at the time of his election, being in the right social class appeared to be more important than being a scientist. As part of his role as President he was heavily involved in providing advice to government, including a proposal to use Australia as a colony for convicts. He was also heavily involved in arranging the return of scientists and others caught up in the wars following the French Revolution. In addition to his work at the Royal Society, he helped found the Africa Association and the Royal Academy.

Kew Gardens was created a few years before Joseph Banks became its unofficial superintendent (in around 1773) and then director. He had a pivotal role in building the collection, commissioning plant collectors to travel the world, all backed by George III. I must admit that my recent reading has led me to see George III in a new light: as an enthusiastic supporter of scientific enterprises, rather than a mad-man. George III and Banks also collaborated on a programme to introduce merino sheep from Spain, which had potentially huge commercial implications. Banks was seen as a loyal courtier.

Through his life it’s estimated that Banks wrote an average of 50 letters per week, almost entirely in his own hand. Although they were fantastically well organised during his life, on his death his papers were rather poorly treated and dispersed. Warren R. Dawson produced a calendar of the remaining correspondence; I’ve not found this resource online, but a treatment like this Republic of Letters would be fantastic.

I suspect a comprehensive biography of Joseph Banks is exceedingly difficult to write; this one seemed to cover voyaging well but I felt was lacking in botany and his scientific activities at the Royal Society. Perhaps the answer is that a comprehensive biography is impossible, since he had interests and substantial impacts in so many areas. There was simply no end to his talents!

Footnote
In the style of a school project I have made a Google Map with some key locations in Joseph Banks’ life.

Lasers go oooooommmmmmm

In a previous post I mentioned, in passing, surface quasi-elastic light scattering (SQELS). SQELS is a fancy way of measuring the surface tension of a liquid using light. It has some advantages over the alternative method – sticking something into the surface of the liquid and measuring the pull – but it is technically challenging to do.

The basic idea of SQELS is this: a liquid surface, even in the absence of breezes or shakes, is perturbed by tiny waves whose properties tell you about the surface properties of the liquid. These waves have frequencies of around 10 kHz, wavelengths of around 0.1 mm and amplitudes of only a few angstroms. They are driven by the thermal motion that means everything, on a small enough scale, is jiggling away incessantly. To measure these waves, laser light is shone on the surface; most of the light is scattered elastically, that’s to say it stays exactly the same colour. However, some of the light is scattered inelastically (or quasi-elastically, since the effect is small) – it changes colour slightly, and the power spectrum of the surface waves is imprinted onto the laser light in terms of shifts in its colour. So all we need to do is measure the power spectrum of the light reflected from the surface to find out about the surface properties of the liquid.
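This isn’t the real SQELS analysis, but the final step – reading wave properties off a power spectrum – can be sketched with a synthetic signal: a damped 10 kHz oscillation buried in noise, with all the numbers invented for illustration.

```python
import numpy as np

# Recover the frequency of a small damped oscillation from its power spectrum.
# In SQELS the analogous spectrum is imprinted on the scattered light; the
# position and width of the peak carry the surface-tension and viscosity
# information. All parameters here are illustrative.

rate = 1_000_000                 # sampling rate, Hz
t = np.arange(100_000) / rate    # 0.1 s of signal
f_wave = 10_000                  # a 10 kHz capillary wave, as in the text

rng = np.random.default_rng(0)
signal = np.exp(-t * 2_000) * np.cos(2 * np.pi * f_wave * t)  # damped wave
signal += 0.01 * rng.standard_normal(t.size)                  # detector noise

spectrum = np.abs(np.fft.rfft(signal)) ** 2       # power spectrum
freqs = np.fft.rfftfreq(t.size, d=1 / rate)

peak = freqs[np.argmax(spectrum[1:]) + 1]         # skip the DC bin
print(f"peak of power spectrum: {peak:.0f} Hz")
```

Even with the signal amplitude well below the noise floor in the time trace’s tail, the peak stands out clearly in the spectrum, which is rather the point of the technique.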

It turns out I don’t have any photos of the SQELS apparatus in all its glory, so I shall describe it in words. The whole thing is found on an 8 foot by 4 foot by 1 foot thick optical table: a fine, very solid table whose top surface is a sheet of brushed steel, pierced by a grid of threaded holes 25 mm apart. The apparatus is in the form of a large U covering two long sides and one short side of the table. A laser is bolted at the start of the U; light heads from the laser along the table through a set of lenses, polarisers and a diffraction grating, then upwards through a periscope before being directed down onto the liquid in a Langmuir trough. The Langmuir trough is protected by a cardboard box, decorated in the style of a Friesian cow with holes cut roughly in the sides to allow light in and out. Captured after reflection from the surface of the liquid, the laser light is directed back down to the table surface by a second periscope, from where it passes back along the long side of the table into a photomultiplier tube – the detector.

The cardboard box is there to stop air currents disturbing the surface of the liquid, vibration is the enemy for this experiment because the liquid in the Langmuir trough picks up the slightest disturbance and wobbles around. Sitting on an optical table weighing a large fraction of a tonne isn’t enough – it needs to be on the ground floor too because buildings wobble and in this instance the Langmuir trough sat on an active anti-vibration table – a bit like noise cancellation headphones but the size of a small coffee table. You can manage without active anti-vibration if you’re willing to do your experiments in the dead of night.

The cardboard box is emblematic of a piece of research apparatus: much of it is constructed from pre-fabricated components, some of it is custom-made in the departmental workshop but then there are the finishing touches that depend on your ingenuity and black masking tape. I did have plans to get the cardboard box remade in perspex but the box was just the right size and if I wanted more holes in it I could easily cut them with a knife so it was never worth the effort. I seem to remember a bit of drainpipe being involved too. As an experimental scientist you get your eye tuned in to spot things just right to add to your apparatus.

The laser is a single-mode solid-state laser producing light of 532 nm wavelength – a brilliant green colour. Three things are important about lasers: firstly, they are fantastically bright; secondly, they produce light of a very pure colour – a single wavelength; thirdly, lasers go “oooooooommmmmmmmm”, whilst conventional light-sources go “pip-pip—-pip-pip—pip”. Technically this is described as “coherence”. We’re using a laser in part because we want something to compare against, and a conventional light-source isn’t going to work for this. If you’re measuring a small change, it’s very handy to have a “ruler” close at hand, and in this case the elastically scattered light is that ruler.

You’ll notice that I’ve not said anything about the results we obtained using the SQELS; truth be told, despite all the hours spent building the apparatus, doing the experiments and analysing the data the results we obtained told us little more than that which we could get by easier and simpler means that I described in my earlier post. I also had the sneaking suspicion that it would have helped if I knew more about optical engineering.

(I got distracted in the middle of this post, browsing through the Newport optical components catalogue site!)

Reference
Cicuta, P., and I. Hopkinson. “Studies of a weak polyampholyte at the air-buffer interface: The effect of varying pH and ionic strength.” Journal of Chemical Physics 114(19), 2001, 8659-8670. (pdf)