What’s on your mind this weekend, Bleeding Heartland readers? This is an open thread.
I’ve been reading about opinion polls, and specifically the polling industry’s growing difficulty in sampling a group that looks like the electorate. Almost every day, a new poll appears on Iowa’s U.S. Senate race. Since last weekend’s Selzer poll for the Des Moines Register and Bloomberg News, which showed Ernst up by 1 point, three other polls have shown small leads for Ernst, while one showed Braley slightly ahead. How Iowa’s no-party voters are leaning is anyone’s guess; some polls have shown Ernst leading among independents, others have indicated that Braley is ahead.
All of these surveys are reporting results among “likely Iowa voters,” but which, if any, have correctly identified a representative sample? The statistical margin of error means little if the pollster is systematically oversampling certain voters while not reaching other groups. As Nate Silver discusses here, data since 1998 show that polls of U.S. Senate or gubernatorial races are less accurate than presidential polls.
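To put a number on that point: the margin of error reported with a poll covers only random sampling error, under the assumption of a simple random sample. A quick sketch of the standard formula (the sample size of 800 here is a hypothetical, typical of statewide polls):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95 percent margin of error for a proportion p estimated
    from a simple random sample of n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical statewide poll of 800 likely voters:
print(round(100 * margin_of_error(800), 1))  # about +/- 3.5 points
```

Nothing in that formula accounts for systematic error: if the sample itself doesn’t look like the electorate, the true uncertainty is larger than the published number.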
Media orthodoxy says reporters and pollsters can never admit their own organization’s poll might have been an “outlier.” Rather, readers are told that every trend apparent in an organization’s latest poll reflects a real shift in public opinion. So we get the Des Moines Register’s Jennifer Jacobs saying Braley “has begun to overcome some of the vulnerabilities detected in the Iowa Poll two weeks ago,” going from a double-digit deficit among independents to a slight lead, and from 25 points down among male respondents to 16 points down. Is it really likely that Braley surged so much in two weeks? Or did the previous Des Moines Register/Selzer poll overstate Ernst’s advantage, both overall and among certain groups?
Similarly, Quinnipiac’s latest Iowa poll shows “independent voters backing Rep. Braley 48 – 43 percent, a shift from Ernst’s 50 – 43 percent lead among these key voters last month.” Did no-party voters really change their minds in large numbers over a few weeks, or did Quinnipiac’s findings change because of statistical noise?
After the jump I’ve posted excerpts from several articles about polling and some revealing comments by Ann Selzer, a guest on a recent edition of Iowa Public Television’s “Iowa Press” program.
From the October 10 edition of “Iowa Press”:
[Des Moines Register’s Kathie] Obradovich: So, Ann, you are always calling people, finding out if they’re going to vote or not. Do you get any sense of whether enthusiasm for this 2014 midterm is higher, lower, about the same as what you usually see in a midterm?
Selzer: You know, what has changed in our understanding of what the polls look like and what turnout looks like is that there’s a little less of a relationship between enthusiasm and turnout. And that is because so much money is now being pumped into campaigns and a lot of that money goes to identifying voters and getting them to vote, especially if you can get them to vote early, then you have already locked up that vote.
[Iowa Press host Dean] Borg: Are you saying that voter, a person may vote whether or not they’re enthusiastic because they get pushed to the poll?
Selzer: More and more. It used to be you just kind of let the electorate do what they were going to do if you were a campaign. We’d find out on Election Day. Well, now they have found out if we can get all of these early ballots out there, if we can get people to a poll early, we can get, we can recruit those people who don’t really care but they’re going to vote for our candidate and those votes are now there. So, enthusiasm you can certainly see, you can measure it, but it has less to do with the outcome of the election than it used to.
[Drake University politics professor Art] Sanders: The other thing that this does is it makes it a lot harder to use likely voters models to predict what’s going to happen in an election because the dynamic that Ann suggested says that candidates on both sides are out there working effectively to find unlikely voters and get them to vote.
Different pollsters use different “likely voter” screens, but most rely at least somewhat on asking the respondent, “How certain are you that you will vote in this year’s election?” People who are uncertain about voting in September or October may yet be turned out by aggressive get-out-the-vote (GOTV) operations, and their votes count just the same as those of enthusiastic political junkies.
Nate Silver shows here, using data going back to 1998, that polls of U.S. Senate races are less accurate than presidential polls. Excerpt:
Response rates to political polls are dismal. Even polls that make every effort to contact a representative sample of voters now get no more than 10 percent to complete their surveys – down from about 35 percent in the 1990s.
And there are fewer high-quality polls than there used to be. The cost to commission one can run well into five figures, and it has increased as response rates have declined. Under budgetary pressure, many news organizations have understandably preferred to trim their polling budgets rather than lay off newsroom staff.
Cheaper polling alternatives exist, but they come with plenty of problems. “Robopolls,” which use automated scripts rather than live interviewers, often get response rates in the low to mid-single digits. Most are also prohibited by law from calling cell phones, which means huge numbers of people are excluded from their surveys.
How can a poll come close to the outcome when so few people respond to it? One way is through extremely heavy demographic weighting. Some of these polls are more like polling-flavored statistical models than true surveys of public opinion. But when the assumptions in the model are wrong, the results can turn bad in a hurry. […]
Since 2000, the average Senate poll has missed the final margin in the race by about 5 percentage points. However, the average error was considerably larger in 1998 – 6.8 percentage points – with most of those errors underestimating the performance of the Democratic candidate.
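Silver’s point about “extremely heavy demographic weighting” is easy to illustrate with a toy example. If one group is over-represented in the raw sample, each respondent gets a weight equal to the group’s assumed share of the electorate divided by its share of the sample. All of the numbers below are invented for illustration:

```python
# Hypothetical raw sample: 70 percent of respondents are over 50,
# but the pollster assumes the electorate will be only 50 percent over 50.
sample_shares = {"over_50": 0.70, "under_50": 0.30}
target_shares = {"over_50": 0.50, "under_50": 0.50}

# Weight for each group: assumed electorate share / raw sample share.
weights = {g: target_shares[g] / sample_shares[g] for g in sample_shares}

# Hypothetical candidate support within each group:
support = {"over_50": 0.55, "under_50": 0.40}

raw = sum(sample_shares[g] * support[g] for g in support)
weighted = sum(sample_shares[g] * weights[g] * support[g] for g in support)

print(round(raw, 3))       # 0.505 -- unweighted estimate
print(round(weighted, 3))  # 0.475 -- after reweighting to the assumed electorate
```

The weighted estimate is only as good as the turnout assumptions baked into the target shares, which is exactly why Silver calls these polls “polling-flavored statistical models”: if the model of the electorate is wrong, the headline number is wrong with it.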
In this post, Silver considers evidence that this year’s Senate polls could be systematically skewed toward either Republicans or Democrats.
In a number of elections, including 2012’s, Senate polls had a systematic bias toward one party. But the direction of the bias has been inconsistent, favoring Democrats in some years and Republicans in others. The chart here depicts the average partisan bias in Senate polls of likely voters conducted in the final three weeks of campaigns since 1990. (For raw data from 1998 onward, see here; for 1990 through 1996, see here). A year indicated as having a Republican bias means the GOP underperformed its polls. A year shown as having a Democratic bias means the Democrats underperformed theirs instead.
In 2012, Senate polls had a Republican bias of about 3.5 percentage points. That means in a state where the polling average showed the Republican ahead by a point, the Democrat would be expected to prevail by 2.5 points instead. If there’s the same bias in the polls this year, Democrats would be very likely to keep the Senate.
But as I mentioned, this bias has flipped back and forth. There was also a Republican bias in 1998 and 2006. But there was a Democratic bias in 1994 and 2002. On average since 1990, the average bias has been just 0.4 percentage points (in the direction of Republicans), and the median bias has been exactly zero.
Democrats might argue that a Republican bias has been evident in recent years – even if it hasn’t been there over the longer term. But the trend is nowhere close to statistically significant. Nor in the past has the direction of the bias in the previous election cycle been a good way to predict what it will look like in the next cycle. For several consecutive midterms, the bias ping-ponged between the parties: There was a big Democratic bias in 1994, then a big GOP bias in 1998, then a Democratic bias again in 2002, then a Republican one in 2006.
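The arithmetic behind Silver’s 2012 example is a simple subtraction. A sketch, using the sign convention that a positive margin favors the Republican and a positive bias means the polls overstated the Republican:

```python
def expected_margin(poll_margin_r, rep_bias):
    """Expected actual margin (Republican minus Democrat) given a
    polling-average margin and a historical Republican bias,
    both in percentage points."""
    return poll_margin_r - rep_bias

# Silver's 2012 example: polls show the Republican up 1 point, but Senate
# polls ran about 3.5 points too Republican that year, so the Democrat
# would be expected to prevail by 2.5 points.
print(expected_margin(1.0, 3.5))  # -2.5, i.e. D+2.5
```

Of course, as Silver stresses, the hard part is that nobody knows the size or even the direction of this year’s bias in advance.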
Finally, Sam Wang points out here that it’s highly likely a few candidates leading in polling averages will lose on election day.
Josh Katz at The New York Times’ The Upshot has analyzed the performance of Senate polls since 2004. He found that the predictive accuracy of polls depends on how soon the election is and the size of the front-runner’s lead. For instance, if the election is three weeks away and the front-runner leads by 3 percent or less, that candidate will still lose 38 percent of the time, nearly two times out of five. […]
If every front-runner today were to win, the Senate outcome would be 52 Republicans and 48 Democrats and independents. But history tells us to expect two or three of the current leaders to lose. […]
The above table, calculated for state-level presidential and Senate contests, shows the difference between Election Eve polls and actual election results, using the median across all races decided by less than a 10-percent margin.
Overall, these numbers set a range for how wrong we would expect a poll-based view to be. Pollsters as a group underestimate Democratic performance by an average of 1.2 percent. This bias is asymmetric: When Republicans outperformed, they did so by 1.2 percentage points or less. But in four out of eight cases, Democrats surpassed polls by 2.4 to 3.7 percentage points. […]
What if this year’s polls are off by 2 percentage points in one direction or the other? A 2-point advantage for Democrats would make the most likely outcome a split of 50 Democrats/independents to 50 Republicans. And a 2-point advantage for Republicans would propel them to a 53-47 majority.
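Wang’s two scenarios amount to applying a uniform swing to every close race’s polling margin and recounting seats. A sketch of that calculation, with entirely invented race margins (positive means the Republican leads the polling average) chosen only so the toy totals line up with the 50-50, 52-48, and 53-47 scenarios he describes:

```python
# Hypothetical polling-average margins (R minus D) in six close Senate races.
# These numbers are invented for illustration, not actual 2014 polling.
close_races = [2.5, 3.1, 1.0, 0.4, -1.2, -3.0]
safe_r_seats = 48  # seats assumed safe Republican in this toy example
safe_d_seats = 46  # seats assumed safe Democratic/independent

def gop_seats(swing):
    """GOP seat count after shifting every margin by `swing` points toward the GOP."""
    return safe_r_seats + sum(1 for m in close_races if m + swing > 0)

for swing in (-2, 0, 2):
    print(swing, gop_seats(swing))
# -2 -> 50 seats (a 50-50 chamber), 0 -> 52 (every front-runner wins), 2 -> 53
```

The point of the exercise is how little it takes: a uniform 2-point error, well within the historical range Wang and Silver describe, swings control of the chamber.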