The Polls In Britain: What Went Wrong?

On election morning in the United Kingdom, the highly regarded UK Polling Report had this to say about the preelection polls: “The polls are essentially showing a neck and neck race – they’ll either all be about right, or all be wrong.” On election night, when the GfK/Ipsos MORI/network exit poll results were broadcast at 10 pm BST, the numbers seemed almost unbelievable. Former Liberal Democrat leader Paddy Ashdown told the BBC, “I’ll bet you my hat, eaten on your program, that [the exit poll] is wrong.” Well, the exit pollsters were almost spot on, although they underestimated the Conservative Party’s seat share.

Mounting an exit poll in the United Kingdom or the United States is an incredibly difficult exercise. It takes years of continuous preparation, in which pollsters and academics digest reams of historical data as well as current polling results, to figure out where to place the people who hand you a sample ballot at carefully selected polling places. Those results must then be fed back to headquarters, where they are tallied, studied by experts, and reported. Although there is some dispute about exit poll parentage, the technique was invented nearly 50 years ago in the United States, when the late CBS polling expert Warren Mitofsky conducted an exit poll in a gubernatorial race. Exit polls have a pretty solid track record, but there have been a few big misses.

The problems with the preelection polls are a big story not only in the U.K. but elsewhere as well. Most recently, in Israel, the polls failed to predict Benjamin Netanyahu’s victory. In the Scottish independence referendum, the pollsters predicted a close “yes/no” vote, yet in the end Scots voted no by 55 percent to 45 percent.

Further back in time, in the 1992 British general election, the pollsters predicted a hung parliament, but John Major won a clear victory. Although it is too early to say exactly what happened this year, here are some possible explanations.

Today the polling industry everywhere faces unprecedented challenges. Response rates for the best-designed surveys in the United States are now below 10 percent. For a variety of reasons, people are more reluctant to take part in polls. Most people in the business think that response rates can’t go much lower than that and still yield a sample that looks like America. Far fewer households have landlines, and surveying cellphone respondents is difficult and expensive. In its description of its methodology, the Pew Research Center reports that it samples landlines and cellphones to get a combined sample with approximately 35 percent landline interviews and 65 percent cellphone interviews.

The good news is that the pollsters are experimenting with other methodologies. The bad news is that there are problems creating reliable samples from some of the newer ones. Internet polls, for example, are self-selected, not random samples.

Another explanation is the “shy Tory” phenomenon. The idea that ordinary Tory voters might not tell the pollsters their true voting intention was one of the explanations offered for the polls’ poor performance in 1992, when John Major’s victory came as a great surprise. The phenomenon is similar to the idea of a silent conservative majority in the U.S., and there has been evidence from recent national election polls here that pollsters overestimate Democratic strength.

Are some people hiding their preference or lying to the pollsters? It’s possible. It’s also possible that people simply changed their minds as they walked or drove to their polling places and decided to stick with the Tories, although, as some commentators suggested, their hearts may have been with Labour. Today there are so many polls, and media coverage grows more saturated as election day approaches. It’s possible that people are just tired of it all and try to trip the pollsters up in any way they can.

Another possible explanation, related to polls and the media, is one I call the “feedback loop.” People in the U.K. and the United States hear endlessly about how bad things are. This probably informs their thinking and creates a desire to take incumbents down a peg when they respond to early polls. But when it comes to actual voting, people may simply decide to stick with the tried and true, aware that there are pluses and minuses to both the incumbent and the challenger.

Eight days after the great U.S. polling debacle of 1948, the Social Science Research Council convened an expert panel to figure out what went wrong. The British did the same thing in 1992. If there is to be a future for polling, let us hope they do so once again.