May you live in interesting times…

It turns out that a chunk of the audience for my last blog post were my colleagues at work, who thought it a bit of a gloomy rant. These days I’m a bit more perky: in contrast to every election since 1974, this time the Liberal Democrats (my party) have something to be cheery about following the despair of election night! Usually post-election we are most definitely not in government, returning wearily to our constituencies to prepare for more time in opposition. This time it’s different!

Watching the comments on Twitter as events have unfolded has raised a few questions, and revealed some clear misconceptions, which I thought I might try to address from my point of view as a long-term (21 years) party member.

What are the Liberal Democrats?
The party was formed in 1988 by the merger of the old-style Liberal Party with the Social Democratic Party (SDP), which had itself split off from Labour. I’ve always been a Liberal Democrat, but my political origins are probably closer to the soft right of the Tory party; I’ve never been tribal Labour (or Tory, for that matter). It’s fair to say that the majority of Liberal Democrats are left of centre, but we’re in the party for a reason – we don’t want to be in any other party.

What is coalition government?
The way people talk, you might get the impression that the Liberal Democrats in coalition would simply be there to prop up their coalition partners. Labour seem to view this almost as a right: that the Liberal Democrats are a little turbocharger for those elections where Labour didn’t quite win in their own right. Consequently they believe that a Lib–Con coalition would simply prop up a Tory government with a Tory agenda. This misses the point of coalition entirely – why on earth would we sign up to such a deal? The point of coalition is to get at least some of your agenda implemented; if you’re not in the governing coalition then none of your agenda is implemented.

Proportional Representation
A lot of the discussion at the moment is around proportional representation; personally I think it should be around the economy first, since massive deficits don’t get reduced by themselves. I don’t intend to discuss proportional representation properly here, but simply to highlight three systems:
The pure Alternative Vote (AV) system is the one proposed by those who don’t actually want proportional representation: it does not produce a proportional outcome. The Jenkins Commission, set up by the Labour government following the 1997 election, recommended Alternative Vote plus top-up (AV+), in which constituency elections are supplemented by a top-up from party lists to provide proportionality; the benefit here is that constituencies remain relatively small, and the output should be pretty proportional. The Electoral Reform Society prefers the Single Transferable Vote (STV), which gives broadly proportional output but requires large multi-member constituencies to work.
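For the curious, an Alternative Vote count is easy to sketch in code. This is my own minimal illustration with made-up ballots, not a production-grade count (real counts have rules for ties and spoiled ballots):

```python
from collections import Counter

def alternative_vote(ballots):
    """Instant-runoff count: each ballot ranks candidates in order of
    preference; each round the weakest candidate is eliminated and their
    ballots transfer to the next surviving preference."""
    ballots = [list(b) for b in ballots]
    while True:
        # Count each ballot's top surviving preference.
        tallies = Counter(b[0] for b in ballots if b)
        total = sum(tallies.values())
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > total:  # an outright majority wins
            return leader
        loser = min(tallies, key=tallies.get)  # eliminate the weakest
        ballots = [[c for c in b if c != loser] for b in ballots]

# Made-up ballots: 4 voters rank A>B, 3 rank B>A, 2 rank C>B.
# A leads on first preferences, but C's transfers hand B the win.
print(alternative_vote([["A", "B"]] * 4 + [["B", "A"]] * 3 + [["C", "B"]] * 2))
```

Note how the winner on first preferences need not be the AV winner – which is exactly why AV changes results without making them proportional.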

Labour and Proportional Representation
Labour’s new-found enthusiasm for proportional representation leads to hollow laughter amongst Liberal Democrats. For why? Go and have a look at the evolution of the Labour commitment to a referendum. Basically a referendum was promised at the 1997 election; it never happened, and although it remained in the manifesto for subsequent elections the commitment became ever weaker. You can see why Liberal Democrats don’t trust Labour on proportional representation.

I’d like to present a slightly heretical opinion for a Liberal Democrat: the absence of a commitment to a referendum on proportional representation should not be a deal breaker. My reasoning: I don’t believe either the Tory or Labour party could currently deliver a majority in parliament for such a referendum. It is possible that a referendum would not require a parliamentary vote, but let’s assume it does. A commission on electoral reform means that at least the Tories will have to start thinking about it on their own terms, something they haven’t been doing, even if it is a self-evident kick into the long grass. The next time there’s a hung parliament we will then have fruitless electoral reform documents from both the Labour and Tory parties, but here’s the good thing: it means they can’t really ask for another one. Furthermore there appears to be a groundswell of opinion in favour of electoral reform, and I don’t think it’s party political. Over the coming parliament, and at the next election, I really hope this groundswell is directed into contact with politicians; we shouldn’t be hearing “This isn’t an issue on the doorstep” next time.

Under proportional representation coalition government is likely to become a fact of life, so a successful Lib–Con coalition, even in the absence of a deal on PR, would be worth having. I must admit the green shoots of coalition are promising. Rather than a pointless exercise in taking chunks out of each other, we are starting to see politicians talk about what they agree on.

In a way we have nothing to lose – what’s the worst that can happen? Things fall apart and an election is called in which we lose some percentage share of the vote, leading to a reduction in seats – unpredictably fewer, thanks to the first-past-the-post system. We’d still be an opposition party with little power in parliament, so in a place broadly similar to the one we found ourselves in before this election campaign. What’s different now is that there is a broader movement for electoral reform; that may be the thing we won at this election.

In posting this now (5:30pm on Monday 10th May) I am very aware that I may be overtaken by events!

Footnotes

I was up for Evan Harris

This is a graph showing the number of seats (actual seats) each of the three main parties will get*, and the number of seats (proportional seats) they would get under a purely proportional system. You will notice that for the Labour and Conservative parties the number of seats they actually get is more than the number of seats in proportion to their votes; for the Liberal Democrats the opposite is true, and by a very substantial margin.

When Liberal Democrats went into the polling stations yesterday they were given a single polling card; their Labour and Conservative comrades had three. Look them in the eye and ask them:

What is it about you that makes your vote three times more powerful, three times heavier, three times more important than mine?

What is special about you but not about me?

Explain to me how this is fair.

Explain to me how this is democracy.

To put it another way, every Labour or Conservative seat requires about 33,000 votes to win; a Liberal Democrat seat requires 100,000. We are the Great Ignored.

We have come to accept this inequity; it has happened in every election since the early 1980s. As a country we just accept it as part of the way things are. It’s the defining feeling of being a Liberal Democrat: seeing the overall share of our vote creep up election by election and receiving the same feeble, disproportionate harvest in seats. The sinking feeling in the middle of the night that, no, of course there has been no breakthrough. It’s not because we perform poorly; it’s because we have one polling card each, and the others have three.

In 1997 the defining moment was Michael Portillo losing his seat to Stephen Twigg. My defining moment of this election was seeing Evan Harris lose his Oxford West and Abingdon seat. “I was up for Evan Harris” – I had a tear in my eye.

Footnote
*This is based on the exit poll (see entry at 23:11), which looks consistent with the actual results as of 10:30am on May 7th: Conservative 291, Labour 247, Liberal Democrat 51, with 616 of 650 seats declared. Under pure proportionality UKIP would receive 20 seats and the BNP 12.
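Pure proportionality of this kind is simple to compute. Here is a sketch of a largest-remainder allocation – one of several possible methods, and the vote shares below are illustrative round numbers, not the real 2010 figures:

```python
def proportional_seats(votes, total_seats):
    """Largest-remainder allocation: give each party the integer part of
    its exact proportional entitlement, then hand any remaining seats to
    the parties with the largest fractional remainders."""
    total_votes = sum(votes.values())
    exact = {p: v * total_seats / total_votes for p, v in votes.items()}
    seats = {p: int(e) for p, e in exact.items()}
    leftover = total_seats - sum(seats.values())
    # Distribute leftover seats by descending fractional remainder.
    for p in sorted(exact, key=lambda p: exact[p] - seats[p], reverse=True):
        if leftover == 0:
            break
        seats[p] += 1
        leftover -= 1
    return seats

# Illustrative vote percentages (not the actual 2010 results).
print(proportional_seats({"Con": 36, "Lab": 29, "LD": 23, "Other": 12}, 650))
```

Comparing an allocation like this with the actual seat counts is precisely what the graph above does.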

No sleep ’til Batley!

I’m planning on staying up late tonight watching the results of the general election come in – this is an occasion for a graph. The chaps at Tweetminster have uploaded a list of predicted declaration times here. I’ve rearranged the data a bit to plot it: the height of each bar tells you the number of constituencies declaring during the hour starting at the time indicated at the bottom of the bar. As you can see, things don’t really get going until about 2am. Key times for me are the declaration in my own constituency, City of Chester, at 3am and Dr Evan Harris’ Oxford West and Abingdon at 2:30am. Batley & Spen declares at around 5am, hence the title of this blog post.

This has been the most exciting election campaign, and election night, in quite some time. I spent the 1997 election at a friend’s house in Darlington; I remember stumbling out into the early morning with “Things Can Only Get Better” ringing in my ears. For a few years that seemed to be the case. 1992 was interesting in that we all thought John Major was going to lose, and then he won to the surprise of everyone (including John Major). 2001 and 2005 were rather dull.

As a seasoned Liberal Democrat I’m used to my party getting pretty good percentage poll scores overall and winning pitiably few seats, so to the newcomers out there – welcome to my world! I can only hope that this time things will be different.

Economics: The physics of money?

Today I’m off visiting the economists. This is a bit of a different sort of visit, since I haven’t found that many to follow on Twitter; instead I must rely on their writings.

I’ve been reading Tim Harford’s “The Undercover Economist” which is the main topic of this post, in the past I’ve also read “Freakonomics” by Levitt and Dubner. Harford’s book is more about classical economics whilst “Freakonomics” is more about the application of quantitative methods to the analysis of social data. This is happy territory for a physicist such as myself: there are numbers, there are graphs and there are mathematical models.

David Ricardo (1772–1823) pops up a few times; it seems fair to describe him as the Newton of economics.

I learnt a whole bunch of things from Tim Harford’s book, including what shops are up to: working out how to persuade everyone to pay as much as they are willing to pay, by means such as “Value” and “Finest” ranges whose price differences don’t reflect their cost differences – similar pricing regimes are found in fancy coffee. In a way income tax bypasses this, replacing willingness to pay with ability to pay – I’m sure shops would love to be able to do that! Scarcity power allows a company to charge more for its goods or services, and a company’s profits are an indication that this might be happening.

Another important concept is market “efficiency”: perfect efficiency is achieved when no-one can be made better off without someone else losing out. This is not the same as fairness; in theory a properly operating market should be efficient, but not necessarily fair. Externalities are the things outside the market to which a monetary value needs to be attached in order for them to be included in the efficiency calculation – things like pollution and congestion in the case of traffic. This sounds rather open-ended, since I imagine the costing of externalities can be fiercely disputed.

There’s an interesting section on inside, or asymmetric, information and how it prevents markets from operating properly. The two examples cited are second-hand car sales and health insurance. In the first case the seller knows the quality of the car he is selling whilst the buyer struggles to get this information. Under these circumstances the market struggles to operate efficiently because the buyer doesn’t know whether he is buying a ‘peach’ (a good car) or a ‘lemon’ (a bad car), which reduces the amount he is willing to pay – and the seller struggles to find a mechanism to transmit trusted quality information to the buyer. Work on information asymmetry won the Nobel Prize in Economics for George Akerlof, Michael Spence, and Joseph Stiglitz in 2001.

In the second case, health insurance, the buyer purportedly knows the risk they present whilst the seller doesn’t. This doesn’t quite ring true to me, though the observed behaviour in the US private healthcare system seems to match the model. In a private insurance system the people who are well (and are likely to remain well) will not buy insurance, whilst those who believe themselves to be ill, or at serious risk of becoming ill, will be offered expensive insurance because there is not a large population of healthy buyers to support them. Harford recommends the Singapore model for health care, which has compulsory saving for health care costs, price controls and universal insurance for very high payouts. This gives consumers some interest in making the most efficient use of the money they have available for health care.

You might recall the recent auctions of radio spectrum for mobile phones and other applications. This turns out to be a fraught process for the organiser – in the US and New Zealand it went poorly, with governments receiving few bids and less cash than they expected. In the UK the process went very well for the government, essentially through a well-designed auction system. The theoretical basis for such auctions is in game theory, with John von Neumann and John Nash important players in the field (both recognised as outstanding mathematicians).

Tim Harford did wind me up a bit in this book, repeatedly referring to the market as “the world of truth” and taxes as “lies”. This is a straightforward bit of framing: the language used means anyone arguing against him is automatically in the “arguing against the truth” camp, irrespective of the validity of their arguments. The formulation that taxes represent information loss is rather more interesting, and he seems to stick with it more often than not. In this instance I feel the “world of truth” is ever so slightly tongue in cheek, but in the real world free markets are treated very much as a holy “world of truth” by some political factions, with little regard to the downsides: a complete disregard for fairness, the problems of inside information and the correct costing of externalities.

A not inconsiderable number of physicists end up doing something in finance or economics. As Tom Lehrer says in the preamble to “In Old Mexico”: “He soon became a specialist, specializing in diseases of the rich”. It turns out you get paid more if the numbers you’re fiddling with represent money rather than the momentum of an atom. Looking at these descriptions of economic models, I can’t help thinking of toy physics models which assume no friction and are at equilibrium: very useful when building understanding, but inadequate for practical applications. Presumably more sophisticated economic models take these things into account. From a more physical point of view, it doesn’t seem unreasonable to model economics through concepts such as conservation (of cash) and equilibrium, but physics doesn’t have to concern itself with self-awareness – physical systems can’t act wilfully once given knowledge of a model of their behaviour. I guess this is where game theory comes in.

The interesting question is whether I should see economics as a science, like physics, which is used by politicians for their own ends, or whether I should see economists as being rather more on the inside. Economics as a whole seems to be tied up with political philosophy. Observing economists in the media, there seems to be a much wider range of what is considered possibly correct than you observe in scientific discussion.

Opinion polls and experimental errors

I thought I might make a short post about opinion polls, since there’s a lot of them about at the moment, but also because they provide an opportunity to explain experimental errors – of interest to most scientists.

I can’t claim great expertise in this area, physicists tend not to do a great deal of statistics unless you count statistical mechanics which is a different kettle of fish to opinion polling. Really you need a biologist or a consumer studies person. Physicists are all very familiar with experimental error, in a statistical sense rather than the “oh bollocks I just plugged my 110 volt device into a 240 volt power supply” or “I’ve dropped the delicate critical component of my experiment onto the unyielding floor of my lab” sense. 
There are two sorts of error in the statistical sense: “random error” and “systematic error”. Let’s imagine I’m measuring the height of a group of people, to make my measurement easier I’ve made them all stand in a small trench, whose depth I believe I know. I take measurements of the height of each person as best I can but some of them have poor posture and some of them have bouffant hair so getting a true measure of their height is a bit difficult: if I were to measure the same person ten times I’d come out with ten slightly different answers. This bit is the random error.

To find out everybody’s true height I also need to add the depth of the trench to each measurement, I may have made an error here though – perhaps a boiled sweet was stuck to the end of my ruler when I measured the depth of the trench. In this case my mistake is added to all of my other results and is called a systematic error. 

This leads to a technical usage of the words “precision” and “accuracy”. Reducing random error leads to better precision, reducing systematic error leads to better accuracy.
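The trench example can be made concrete with a small simulation of my own devising: repeated measurements of a known true height, with random scatter from posture and hair plus a constant “boiled sweet” offset.

```python
import random

random.seed(42)  # fixed seed so the sketch is repeatable

TRUE_HEIGHT = 175.0  # cm, the value we are trying to measure
SWEET = 0.8          # cm, the boiled sweet stuck to the ruler: a
                     # systematic error added to every measurement

# Each measurement has random scatter (posture, hair) plus the offset.
measurements = [TRUE_HEIGHT + random.gauss(0, 2.0) + SWEET
                for _ in range(1000)]

mean = sum(measurements) / len(measurements)
# The residual bias sits near SWEET, not zero: averaging has shrunk
# the random error but cannot touch the systematic one.
print(round(mean - TRUE_HEIGHT, 2))
```

Averaging many measurements improves precision but not accuracy – the systematic offset survives however many people you measure.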
This relates to opinion polling as follows: I want to know the result of the election in advance. One way to do this would be to get everyone who was going to vote to tell me their voting intentions beforehand; this would be fairly accurate, but utterly impractical. So I must resort to “sampling”: asking a subset of the total voting population how they are going to vote and then, by a cunning system of extrapolation, working out how everybody is going to vote. The size of the electorate is about 45 million; the size of a typical sampling poll is around 1,000. That’s to say one person in a poll represents 45,000 people in a real election.
To get this to work you need to know about the “demographics” of your sample and of the group you’re trying to measure. Demographics is stuff like age, sex, occupation, newspaper readership and so forth – all things that might influence the voting intentions of a group. Ideally you want the demographics of your sample to be the same as the demographics of the whole voting population; if they’re not the same you apply “weightings” to the results of your poll to adjust for the difference. You will, of course, try to get the right demographics in the sample, but people may not answer the phone, or you might struggle to find the right sort of person in the short time you have available. The problem is you don’t know for certain which demographic variables are important in determining the voting intentions of a person. This is a source of systematic error, and some embarrassment for pollsters.
Although the voting intentions of the whole population may be very definite (and even that’s not likely to be the case), my sampling of that population is subject to random error. You can reduce your random error by increasing the number of people you sample, but the statistics are against you: the error improves as one over the square root of the sample size. That’s to say a sample 100 times bigger only gives you 10 times better precision. The systematic error arises from the weightings; problems with systematic errors are as difficult to track down in polling as they are in science.
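The square-root scaling is easy to check with the textbook margin-of-error formula for a simple random sample (a simplification of my own: real polls use weighted quota samples, which behave a little differently):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for an estimated proportion p
    from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A party polling around 30%: what does a bigger sample buy you?
for n in (1000, 10000, 100000):
    print(n, round(100 * margin_of_error(0.30, n), 2), "points either way")
```

Going from 1,000 to 100,000 respondents (a 100-fold increase in cost and effort) only shrinks the margin of error tenfold, from roughly 2.8 points to roughly 0.3 – which is why pollsters stop at samples of about a thousand.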
So after this lengthy preamble I come to the decoration in my post, a graph. This is a representation of a recent opinion poll result shown in the form of probability density distributions; the area under each curve (or part of each curve) indicates the probability that the voting intention lies in that range. The data shown are from the YouGov poll published on 27th April. The full report on the poll is here; you can find the weightings they applied on the back page of the report. The “margin of error”, of which you very occasionally hear talk, gives you a measure of the width of these distributions (I assumed 3% in this case, since I couldn’t find it in the report); the horizontal location of the middle of each peak tells you the most likely result for that party.

For the Conservatives I have indicated the position of the margin of error: the polling organisation believes that the result lies in the range indicated by the double-headed arrow with 95% probability. However, there is a 5% chance (1 in 20) that it lies outside this range. This poll shows that the Labour and Liberal Democrat votes are effectively too close to call, and the overlap with the Conservative peak indicates some chance that the Conservatives do not truly lead the other two parties. And this is without considering any systematic error. For an example of systematic error causing problems for pollsters, see the Wikipedia article on the Shy Tory Factor.

Actually for these data it isn’t quite as simple as I have presented, since a reduction in the percentage polled for one party must appear as an increase in the percentages polled for the other parties.
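With that caveat noted, you can still get a rough feel for what overlapping peaks mean. The sketch below is my own illustration rather than anything from the YouGov report: it treats two parties’ estimates as independent normal distributions (ignoring the correlation just described) and asks how likely it is that one truly leads the other.

```python
import math

def prob_a_leads_b(a, b, moe, z=1.96):
    """Probability that party A's true share exceeds party B's, treating
    both poll estimates as independent normals sharing the same 95%
    margin of error. (A simplification: shares that must sum to 100%
    are in reality negatively correlated.)"""
    se = moe / z                  # standard error of a single estimate
    se_diff = math.sqrt(2) * se   # standard error of the difference
    # Normal CDF of the lead, expressed via the error function.
    return 0.5 * (1 + math.erf((a - b) / (se_diff * math.sqrt(2))))

# Illustrative shares (33% vs 28%) with a 3-point margin of error:
# a 5-point polled lead is still not quite a sure thing.
print(round(prob_a_leads_b(0.33, 0.28, 0.03), 2))
```

Even a lead larger than the margin of error leaves a small chance of the order being reversed, which is the point the overlapping curves in the graph are making.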

On top of all this, the first-past-the-post electoral system means that the overall result in terms of seats in parliament is not simply related to the percentage of votes cast.