Published 8th June 2017 (day of the UK General Election), by Johnny Phillips.
We go to the polls again today for our fourth national vote in as many years (I live in Scotland, so I had the 2014 Independence Referendum as well as the 2015 General Election and last year’s Brexit vote).
Starting as soon as Theresa May announced the snap election – and then running as a constant theme throughout the campaign – there has been plenty of ridiculing of opinion polls as predictors of elections. These forecasts have still commanded plenty of column inches, but are now treated with a good deal more scepticism/derision in the wake of recent polling misses.
My day job involves using data to model future events, mostly sports matches and competitions – building predictive models that use numbers and logic to figure out how likely things are to happen. The output of my models gets used by high-rolling professional gamblers who exploit the information to make money in betting markets.
So from the perspective of a pro sports gambler/modeller, here are some thoughts on elections, opinion polls, election models – and a pet theory about the ‘invisible’ force that ultimately decides the result of many elections.
Opinion Polls Are Rubbish
The reputation of opinion pollsters and their polls has never been lower. At the 2015 UK General Election they told us it was going to be a hung parliament. They said Britain would say NO to Brexit. And then there was Donald Trump; Hillary was a certainty, they said. Wrong, wrong, wrong. Pollsters are rubbish and opinion polls are worthless.
That’s the popular narrative anyway. The truth, however, is a bit more nuanced. Firstly, you can only think that polls are wrong if you take them to be predictions of what will happen. Which you shouldn’t. They are just bits of data, evidence generated using a flawed methodology – asking a small number of people their voting intention and then trying to extrapolate from that how everyone will vote. No matter what fancy polling techniques you use, polls will never be better than a rough guess and a guide.
Election pollsters face the toughest fundamental issue for any data analyst: the absence of a large and relevant sample of data. So the wonder really is not that opinion polls are so often wrong, it’s that they are so often pretty good. And they HAVE actually done a lot better in recent elections than people give them credit for.
They gave Hillary Clinton about a 3 to 4 percentage point lead on the eve of the 2016 US Presidential Election, and she ended up winning the popular vote by about 2 percentage points (but lost in the Electoral College).
They had Brexit pretty close too. It just happened that the smallish 4% error between the polls and the actual vote crossed the 50% line. If they had predicted 52% Leave and the final result had been 56%, there would have been virtually no fuss. That’s illogical to me, because it’s the exact same size of polling miss.
The key thing is: don’t take opinion polls literally. An opinion poll is just a piece of evidence, something to help you form a clearer picture. As with a psychological profile in a murder case, polling data can be useful, but it’s never definitive.
Like a good detective who uses a psychological profile as one part of the process of conducting a thorough investigation, the people who can talk most expertly about a vote are modellers who sift and weigh all the evidence on an election and create a model to calculate the probabilities of all the possible outcomes. I must get round to doing that for the next election. But in the meantime, listen to Nate Silver. He’s ‘one of us’.
Building A Good Model To Price An Election
Building a model to work out probabilities for a sport like football is way easier than for an election, mainly because we get to play with so much more quality data. Football teams play dozens of matches a year, so we get to watch/analyse/rate them multiple times. Elections are rare. Lots of things happen in between them (‘events, dear boy, events’, as Harold Macmillan wistfully noted), which means the result of the previous election is not nearly as good a guide to how well a party/candidate will perform in the next one as a team’s previous few matches are in sport.
So election modellers have to fill in the gaps with the next best thing they’ve got – opinion polls. But polls aren’t the real thing. So it’s like building a football model using the performances of the teams in their training sessions, rather than matches. It’s not a terrible way to do it – you can glean plenty from watching players and teams practicing. But if that’s the base on which you are building your model then the single most important factor in model-building becomes ever more important; humility.
The trap that ‘experts’, ‘gurus’ and maths ‘geniuses’ fall into time and again is to get carried away with their own brilliance. Most of these guys were great at Maths at school so they are used to getting sums right. They get lulled into thinking that there is a ‘correct’ answer to everything, and that they are better than everyone else at finding it. They get so caught up in the wonder of the complexity of the models they have created, and the processing power of the super-computer they are using, that they lose sight of the realities of the real world. An election, like a football match, is a living, breathing, infinitely complex thing that is influenced by the unfathomable workings of human minds.
You can’t predict it. The best you can do is be smart (and humble), and make the best sense you can of the data and of history, to come up with a best guess at how likely things are to happen. An expression of probabilities, not a prediction. Professional gamblers profit not because they make better predictions than everyone else, but because they don’t make predictions. They acknowledge the inherent randomness of (virtually) everything in our universe, and look for ‘value’ – they only bet when the odds they can get are bigger than they should be.
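That ‘value’ rule can be sketched in a few lines of Python. This is just an illustration with made-up numbers, not anything from a real model: a bet is only worth placing when your own estimated probability beats the probability implied by the odds on offer.

```python
# A minimal sketch of 'value' betting, with made-up numbers.

def implied_probability(decimal_odds: float) -> float:
    """The win probability implied by decimal (European) odds."""
    return 1.0 / decimal_odds

def is_value_bet(my_probability: float, decimal_odds: float) -> bool:
    """A bet has 'value' when your estimated win probability is higher
    than the probability implied by the bookmaker's odds."""
    return my_probability > implied_probability(decimal_odds)

def expected_value(my_probability: float, decimal_odds: float, stake: float = 1.0) -> float:
    """Long-run expected profit for a given stake."""
    win = stake * (decimal_odds - 1.0)
    return my_probability * win - (1.0 - my_probability) * stake

# You rate a team a 50% chance; the market offers odds of 2.20
# (implied probability ~45.5%), so there is value.
print(is_value_bet(0.50, 2.20))              # True
print(f"{expected_value(0.50, 2.20):+.2f}")  # about +0.10 per unit staked
```

The point mirrors the text: the decision hinges not on who you think will win, but on whether the odds on offer are bigger than your estimate says they should be.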
The Nate Silver Paradox
Nate Silver is the founder of the excellent 538.com website. He got famous for correctly predicting the results of all 50 states in the 2012 US Presidential Election. A couple of years ago I wrote about the ‘Nate Silver Paradox’, and in the November 2016 US Presidential Election this paradox jumped up and bit Nate on the ass.
The paradox is that as an empiricist and ‘fox’ thinker, who believes in talking/thinking/modelling probabilistically (just like me and my clients, and all successful professional gamblers, bookmakers and investors) Nate doesn’t ‘make predictions’ – like us he deals in ‘probabilities’.
But he got famous for correctly ‘predicting’ the results of all the state votes in that 2012 election. This is the paradox. The very thing that made him famous is essentially the opposite of what he does.
In 2012 the party which Nate said was more likely to win, did indeed win in each state. But he’d be the first to admit there was plenty of luck involved in getting the full house.
Probabilistic thinkers like Nate generate expressions of probability and then act according to them. In his case he writes articles and talks about how likely various eventualities are to occur. For bookmakers this means looking at something like a horse race and asking not ‘which one do I think will win?’, but ‘what is the % chance of each of them winning?’. A professional gambler does exactly the same thing, but just acts differently with the output of his calculations – rather than offering odds on all the horses, he bets on any horse where the odds he can get are greater than his own estimation of the probability of it winning.
The bitter irony of the 2016 US Election for Nate is that in the way professional gamblers view the world, he did a brilliant job. His model essentially gave Trump a 30% chance of winning on the eve of the election, whereas bookmakers had him at about 15%. And other modellers/gurus/experts had him as low as 1%.
So even though Nate thought Hillary was more likely to win, if he had been operating within a professional betting syndicate he would have taken a large trading position on Trump winning. Because as pro gamblers, all we care about is the difference between the true chance of something happening and the odds we can get. A difference as big as 15% is virtually unheard of in sports betting. That’s maximum bet material.
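Those numbers make for a quick worked example. The probabilities below come straight from the text (Nate’s ~30% versus the market’s ~15%); the conversion to decimal odds and the expected-value arithmetic are standard, but treat this as a back-of-envelope sketch only.

```python
# Back-of-envelope: the 2016 Trump position as a pro gambler would see it.
model_prob = 0.30    # Nate's model, per the text
market_prob = 0.15   # probability implied by bookmaker prices, per the text

offered_odds = 1.0 / market_prob   # decimal odds implied by 15%, about 6.67
edge = model_prob - market_prob    # 15 percentage points of 'value'

# Expected profit per unit staked, if the model probability is right:
ev = model_prob * (offered_odds - 1.0) - (1.0 - model_prob)

print(f"offered odds ~{offered_odds:.2f}, edge {edge:.0%}, EV {ev:+.2f} per unit staked")
```

An expected profit of a full unit per unit staked is exactly the ‘maximum bet material’ described above.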
So in a betting syndicate Nate would have been a hero after Trump’s win. Back in the ‘real’ world, where most people only care about the headline and the ‘prediction’ – he was a chump. The whizz-kid guru got this one wrong. It’s like if I give my mates in the pub a tip for a horse at 33/1, and it ends up getting beat a nose into second. ‘Didn’t win. Rubbish tip!’. Ungrateful bastards. The Nate Silver Paradox sucks.
The Wisdom of Crowds
At village fairs you will often see a stall with a huge jar full of smarties. The challenge is to guess how many smarties are in the jar; closest guess wins them. Over the course of the day loads of people guess. Some individual guesses will be out by hundreds, even thousands. It’s likely nobody will guess the right number exactly. But if you calculate the average of all the guesses then something pretty weird & wonderful happens – you will find that the average represents a pretty damn fine guess of the actual number of sweets.
You’d be better off going with this average of all the guesses than the guess of even the very smartest single person who looks at the jar and makes a prediction. The ‘jar of sweets’ is an example of the ‘wisdom of crowds’, aka ‘the efficiency of markets’, or ‘the power of thinking without thinking’. Not every person in a crowd needs to be wise for the crowd to have wisdom.
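The jar effect is easy to demonstrate with a quick simulation. All the numbers here are invented, and the key assumption (that individual errors are unbiased) is stated in the comments:

```python
import random

# Simulating the 'jar of sweets': lots of wildly noisy guesses,
# but an average that lands close to the truth. All numbers invented.
random.seed(42)

true_count = 1000    # actual number of sweets in the jar
n_guessers = 200

# Each visitor guesses with a large error; crucially, the errors are
# assumed to be unbiased (no systematic over- or under-counting).
guesses = [true_count + random.gauss(0, 300) for _ in range(n_guessers)]

crowd_average = sum(guesses) / len(guesses)
worst_error = max(abs(g - true_count) for g in guesses)

print(f"crowd average: {crowd_average:.0f} (truth: {true_count})")
print(f"worst individual guess was off by {worst_error:.0f}")
```

The unbiased-errors assumption is the catch: when everyone’s errors lean the same way, as with a skewed polling sample, averaging doesn’t rescue you.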
This form of ‘wisdom’ certainly has limits, but its valuable practical applications can perhaps best be seen when interpreting betting markets. Can betting markets be used to predict elections better than opinion polls? Maybe. In fact almost certainly, yes. But you need to know what you are doing.
In my day job, one of the most powerful types of model I run are the ones which use the betting prices from the Asian market on football to capture the ‘market opinion’ on teams and players. On big leagues like the English Premier League the crowd is very wise indeed. At kick off in an English top-flight game the consensus market opinion is the best guide to the true chances of either team winning that exists.
It’s like if the fair offered a £100k prize for the closest guess of the number of smarties in the jar. The prize naturally attracts more interest, including ‘professional smartie-counters’ who visit the stall. And they enter multiple guesses, rather than just one like most ordinary visitors. The professionals also employ analysts who study the height and depth of the jar, and run computer models to guide their guesses. These pros still virtually never get the exact number correct, but they do much better on average than the amateurs who are judging by eye. So the collective influence of these pros has the effect of making the average even better. A truly ‘wise’ crowd.
But betting markets on elections are different. It’s still like guessing sweets in a jar, but there aren’t any professional counters this time. There’s no big cash prize on offer, none of the people who are guessing have much/any experience of sweet-in-jar counting, and they don’t have any jar-quants crunching data for them. Everyone is just looking and guessing, or putting their faith in the crazy old fortune-telling lady in the tent opposite. So an election-style sweet-guessing crowd has some wisdom, but not nearly as much as a Premier League-style crowd partly populated by pros.
So be wary of anyone telling you what ‘the betting market thinks’ will happen in an election. Election markets don’t ‘think’ with a single mind the way football markets now do, and they aren’t nearly as smart anyway.
The Invisible Hand
Can’t we use the Wisdom of Crowds with opinion polls: simply take the average of the results of all the polls to give us a reliable average answer, at least for the ‘horse race’ numbers? Up to a point, the answer is yes – looking at averages of poll results is better than focusing on any one individual poll.
But polls are like the guesses made just by looking at the jar – because it’s seriously tricky to get a representative sample of voters to respond to your poll. Modern technology actually makes this harder, not easier. Fewer people answer their landline phones these days. Some demographic groups are far more likely to respond to an online poll. And certain social groups will be much less likely to agree to be canvassed (and/or give honest answers if they do) which skews results. Polling is hard.
It’s also a largely underestimated factor that plenty of people who end up casting a vote in an election don’t make their minds up who to vote for until late on – so polls have no chance to catch them. In the 2016 US Presidential Election there was an unusually large number of ‘undecideds’ in the final week of campaigning, with two unpopular candidates to choose from. Pollsters tend to gloss over undecideds because they dilute the clarity of the narrative being reported. And modellers often effectively do the same by shrugging their shoulders and allocating them pro-rata among the different candidates, based on the people who did respond. The net effect is pretty similar – late-deciders tend to get ignored.
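The pro-rata shortcut is easy to write down. The poll numbers here are invented purely for illustration:

```python
# The standard modelling shortcut: allocate the undecided share among
# candidates in proportion to decided support. Poll numbers invented.

def allocate_undecided(decided: dict, undecided: float) -> dict:
    """Scale each candidate's decided share up pro-rata, so the
    undecided block is split in the same proportions."""
    total = sum(decided.values())
    return {name: share + undecided * (share / total)
            for name, share in decided.items()}

poll = {"Candidate A": 44.0, "Candidate B": 40.0}   # 84% decided, 16% undecided
final = allocate_undecided(poll, undecided=16.0)

# Shares now sum to 100 and the ratio of support is unchanged; the
# assumption being questioned is exactly this 'same proportions' step.
print({name: round(share, 1) for name, share in final.items()})
```

If the Invisible Hand theory is right, this is where the error creeps in: the undecided block breaks disproportionately one way, rather than in the polled proportions.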
This just seems wrong to me. So I have come up with a theory about the late-deciders and the way they vote. If the theory is correct then it explains a lot about polling errors and surprise results.
It’s called the Invisible Hand Theory.
(Very) roughly speaking, the way General Elections in mature democracies work is that about 70% of the electorate make their minds up early, mostly on grounds of party affiliation. 20% are floating voters who can be convinced by the arguments put forward during the campaign. And 10% are late-deciders.
Late-deciders don’t pay much or any attention to politics generally. They come from the chunk of society who sometimes vote, but sometimes don’t bother. They make their decision about voting/not voting, and then who to vote for if they do go to their polling station, more on ‘gut’ than on some forensic and objective examination of the choices on the ballot. Plenty of them don’t make up their minds where to put their cross until they actually get the ballot paper in their hands. So these people evade even the most rigorous and extensive pre-election opinion polling (exit polls can be much more accurate). Basically, polls can’t know who they are going to vote for, because they don’t know who they will vote for.
Pollsters and election modellers generally deal with the problem of late-deciders by assuming they will ultimately break in the same proportions as the sample of people who did respond when polled. But what if that is wrong? What if a load of the late-deciders are prone to going in the same direction, when push comes to shove? What if there is an Invisible Hand that nudges them in a common direction?
Even though 10% isn’t a big number, a heavily skewed distribution among these guys is certainly enough to make the difference in lots of elections. Enough to give Donald Trump the edge in the Electoral College, enough to give the Tories a 2015 majority, and enough to get Brexit over the line.
The Invisible Hand theory states that a disproportionate number of late-deciding voters will eventually decide to go in the same direction because of reasons that are difficult/impossible for election analysts to get a handle on. The Invisible Hand influences the way that floating voters act too, but its effect on them is less potent because floaters will also react to more tangible election influencers, such as leaders’ debate performance or manifesto policy commitments. And the polls at least have a chance of picking up on the shifting intention of the floaters as they make up their minds in the final weeks of the campaign.
So what is the Invisible Hand? What is it that will make up the minds of people who don’t have any real interest in the detail of policy or political personalities?
The Invisible Hand is ultimately a feeling that we should ‘twist or stick’. ‘It’s time for a change’ versus ‘give the guy some more time’. The devil you know vs the devil you don’t.
And the three factors which decide which way the Hand will push in any given election are the Economy, Incumbency and Tribalism.
If the Economy is perceived to be healthy/booming then there’s a huge incentive for folk who otherwise aren’t too fussed either way to say ‘let’s not try to fix something that ain’t broken’. So they feel like they should probably stick with what they’ve got. On the other hand, if the economy is struggling there’s a natural nudge towards ‘might as well give the other guys a go’.
Perhaps the clearest example of this effect came at the end of the 2nd World War. Like most UK educated school-kids I grew up with the story of how Winston Churchill ‘won’ WW2 for Britain. He was the hero of the nation, hoisted high on the shoulders of an adoring public, coming out of the war with an 83% approval rating. So I still remember being flabbergasted when I learned that Churchill lost (in a landslide!) the UK General Election of 1945. How could this be?
The Invisible Hand theory would explain it by saying that although Churchill was personally popular, and the nation was collectively grateful to him – the reality of the post-war UK was of a nation paying a heavy economic price for the effort of waging war. Food shortages meant rationing, and jobs were scarce as the country struggled to get back on its feet. In this economic climate the Invisible Hand logic of ‘let’s give the other guys a go’ prevailed.
The influencing factor of Incumbency works to create a largely cyclical pattern of winners in democratic elections. The incumbents get the credit or blame for things that happen, regardless of whether they were really within their control. But some bad things are always happening, so the opposition gets to blame the incumbents for those, and to make promises about how they can change/fix them. Before long this message resonates, so prolonged streaks of the same party winning elections in mature democracies are incredibly rare. The Invisible Hand will soon enough guide in the direction of ‘time to give the other guys a shot’.
Tribalism is a more complex and ‘darker’ factor, as it works on prejudices and biases that we possess, often without understanding or acknowledging them. But there can’t be any denying the general rule that human beings are tribal creatures. Whether it’s political parties, sports teams, religions, nations or social classes – we have a tendency to clump ourselves together in ‘tribes’. Our tribal feelings are subject to flux, and so are open to be manipulated and exploited. In last year’s US General Election Donald Trump successfully hit on some tribal raw nerves with his demagoguery; ‘build a wall’ and ‘drain the swamp’.
Post-election analysis from US ’16 matches the Invisible Hand theory neatly. The ‘tribe’ that got Trump over the line in the Electoral College was the less well-educated (‘non college degree educated white voters’) of the American ‘rust-belt’. They were still suffering more than most with the economic hangover from the 2008 financial crisis – certainly more than the swamp-dwellers of Wall St who had money aplenty to pay Hillary to come and give them a speech (despite being the ones who caused the crisis in the first place). Factories were closing, old industries declining or gone, jobs taken by immigrants… Time for a change. Give the other guys a go.
The Democrats hadn’t cured the hangover in their preceding eight years in the White House. And the whole East Coast belt-way elite of politicians, Wall St bankers, lobbyists and special interests was out of touch with ‘real Americans’. The establishment tribe which Hillary Clinton represented was ripe for a single-fingered gesture by the Invisible Hand, and even a candidate as widely disliked and gaffe-prone as Donald Trump was able to benefit. Enough late-deciding Americans decided it was time to ‘twist’.
The Invisible Hand theory is a difficult one to prove (and of course could be total nonsense) because it is largely about how voters ‘feel’ about an election, more than what they ‘think’. And that’s a difficult thing to tease out of people.
The Invisible Hand is a decent explanation as to why polling in the UK has traditionally been much less accurate than polls in the USA. Elections in the States are naturally simpler because they (mostly) consist of a binary choice between two parties, whereas UK General Elections have multiple choices. So UK General Election modellers have the added complication of tactical voting to contend with. A staunch Labour voter could tell a pollster they will vote Red, but come the day vote Liberal Democrat in order to try to get the Tory beat.
But there could also be something to the idea that UK voters are just naturally more susceptible to the Invisible Hand as well. Engagement with politics is generally low and the vast majority of voters would have no idea about the manifesto positions of the parties. It’s a bit different in the States, where there are only two parties competing to get their message across, and the US system affords far greater scope, time, money and TV advertising to appeal to voters. So the number of UK voters who end up voting ‘on feel’ is greater, and the Invisible Hand has a greater influence. The average polling error is roughly double in the UK versus the States (roughly 4% +/- in the UK v 2% +/- in the US).
The 2017 UK General Election
So much for the theory. Today is election day (I’m writing this as the polls open on the morning of June 8th) – what does the Invisible Hand theory say will happen? For a guy who values data and hard facts, this is a little tricky because it requires a little bit of ‘feel’.
This one feels like a ‘stick’ election to me.
Incumbency and the Economy don’t seem like big factors this time around. Instead the Invisible Hand influence will come down to the issue of Tribalism. We currently have two tribes we’re gearing up to fight: the European bureaucrats we face in the Brexit negotiations, and the Islamic terrorists who are attacking our citizens. The nudge from the Invisible Hand in this election will come down to who the ‘gut’ voters feel is the best person/party to represent ‘us’ against ‘them’.
Normally it would be a no-brainer – right-wing Tories usually win that one hands-down here against a socialist option. Theresa May does have six years of cuts to policing budgets as Home Secretary against her name, plus a record of flip-flopping on Brexit. But even so, the bottom line is likely to be that – come the day – the Invisible Hand electorate will lean Tory, and give them a comfortable/landslide win.
The average of recent polls gives the Tories a 6.5% lead on Labour. My guess is that the Invisible Hand will top that up to nearer 10%, giving them around 375-400 (of the 650 total) seats in Parliament.
The narrative that will come out of the election is that Theresa May got lucky. Despite running one of the worst campaigns in living memory, she was saved because the alternative on offer to a Tory government – with a mandate to negotiate Brexit and to deal with terrorism – just wasn’t acceptable to huge swathes of the country. Like with Trump last year, she’s the less terrible of two bad choices.
Jeremy Corbyn has come out of the campaign with his reputation enhanced. Despite lacking charisma, and being a less than thrilling orator, he has at least appeared genuine and principled enough to stick to his beliefs, even the unpopular ones. In contrast Theresa May has come across as a political opportunist, willing to flip-flop on Brexit and u-turn on manifesto pledges for reasons of political expediency.
But ultimately, these things only count for so much. The reality is that no matter how genuine a candidate like Corbyn may be, and no matter how vociferously the voluble young voters who like his left-wing agenda support him – an elderly, bearded, left-wing pacifist just isn’t electable in modern day Britain.