Yes, the title is meant to be sarcastic.
But only somewhat. Whatever you think of pollsters’ motives—are they trying to get it right, or are they skewing results to influence public opinion?—they do have quite the challenge in predicting voter turnout by party.
If they’re presenting rigged polls, how much Democratic turnout is just right? How much is too much? And at the end of the day, don’t they care about their own reputations for accuracy, and how they will suffer if they present biased polls that fail to predict the result? Or is it okay because they’re trying to create a certain result (hardly a foolproof proposition, that)?
But for those pollsters who are trying to get it right, what model should they use for their turnout predictions? 2008 or 2010? That is the question. And furthermore, even if pollsters ask respondents how likely they are to vote, and then see what relative percentages of Democrats and Republicans say they are likely and then weight the poll results accordingly, does this represent reality when they’re getting only a 10% response rate? Isn’t it quite possible that the responders already represent a skewed sample that’s likely to differ from the population as a whole on the question of motivation to vote?
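To make that weighting step concrete, here's a minimal sketch, with entirely made-up numbers, of how a pollster might reweight respondents so the party mix matches a turnout model (the party shares, candidates "A" and "B", and support figures are all hypothetical):

```python
# Hypothetical likely-voter weighting. Suppose respondents split 55% D / 45% R,
# but the turnout model predicts a 50/50 electorate. Each party's responses are
# reweighted by (predicted share / respondent share). All numbers are made up.

sample_share = {"D": 0.55, "R": 0.45}   # party mix among poll respondents
target_share = {"D": 0.50, "R": 0.50}   # turnout model's predicted electorate

weights = {p: target_share[p] / sample_share[p] for p in sample_share}

# Candidate support among respondents, by party (hypothetical candidates A, B):
support = {"D": {"A": 0.90, "B": 0.10},
           "R": {"A": 0.08, "B": 0.92}}

# Weighted topline: each party's support, scaled to its predicted turnout share.
weighted = {c: sum(sample_share[p] * weights[p] * support[p][c]
                   for p in sample_share)
            for c in ("A", "B")}
# weighted["A"] comes to about 0.49, weighted["B"] about 0.51
```

The arithmetic is trivial; the whole problem is the inputs. If the turnout model is wrong, or if the respondents within each party aren't representative of that party's voters, the weighted topline is garbage in, garbage out.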
Here are some sobering figures on response rates:
Note that response rates are significantly worse than they were even a few years ago. What does this mean? It’s not just the result of the inclusion of cell phone users, either:
The most recent decline results partly from the inclusion of cellphone numbers in its samples in order to reach the rapidly growing number of American adults who have a mobile phone but lack landline telephone service. But the Pew Research landline response rates have also fallen (from 25 percent in 2007 to 10 percent this year) and are now only slightly higher than the response rates currently achieved with cellphones (7 percent).
For what it’s worth, my guess would be that cell phone users would tend to be younger and therefore more likely to be Obama voters, and so their slight under-representation could mean a slight underestimate of the Obama vote (or the Ron Paul vote). But far more important is the more general question of whether the responders represent a random group, or whether they differ from non-responders in significant ways, and if so what those ways might be.
I challenge anyone to come up with an answer, because you can’t ask the non-responders. They are, by definition, not talking.
The technical term for the problem is non-response bias. Here’s a discussion of the phenomenon as it relates to surveys of all kinds, and the following describes some ways to try to control for it:
Suppose again that additional demographic or database variables are available for all members of the targeted sample group. These variables are used to create sub-groups containing respondents and non-respondents. Weights are then calculated based on the proportions in each sub-group and applied to the respondents to reflect the total sample population. Comparisons on key variables are then observed between the unadjusted and weighting-class adjusted respondents. If clear differences are detected, then non-response bias is assumed to be at fault and the weighting-class adjustments are used as they provide results with less bias. Poststratification is another technique similar to weighting-class adjustment, except that the procedure uses population counts instead of the total sample counts. The downside to these techniques is that they assume that the differences between respondents and non-respondents are captured in the subgroups, and that there is no rule of thumb for comparing adjustments to determine which to use.
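The weighting-class adjustment described in that excerpt can be sketched in a few lines. The data here is made up (the age groups, counts, and answers are all hypothetical), but the mechanics match the description: we know a demographic variable for everyone in the targeted sample, respondents included, and each respondent is weighted by how under- or over-represented their sub-group is among the respondents:

```python
# A toy weighting-class adjustment. We know an age group for everyone we
# *tried* to reach (the full sample), but only respondents gave answers.
# Each respondent's weight is:
#   (sub-group share of full sample) / (sub-group share of respondents)
# so sub-groups that respond less count for more. All data is invented.
from collections import Counter

# (age_group, responded, answer) -- answer is None for non-respondents
sample = [
    ("18-34", True,  "yes"), ("18-34", False, None), ("18-34", False, None),
    ("18-34", False, None),
    ("65+",   True,  "no"),  ("65+",   True,  "yes"), ("65+",   True, "no"),
    ("65+",   False, None),
]

total = Counter(g for g, _, _ in sample)        # sub-group sizes, full sample
resp  = Counter(g for g, r, _ in sample if r)   # sub-group sizes, respondents
n_total = len(sample)
n_resp  = sum(resp.values())

weight = {g: (total[g] / n_total) / (resp[g] / n_resp) for g in resp}

# Weighted estimate of the "yes" share among respondents:
num = sum(weight[g] for g, r, a in sample if r and a == "yes")
den = sum(weight[g] for g, r, a in sample if r)
yes_share = num / den
```

Here the young barely respond, so the lone young respondent gets a weight of 2 and the unweighted “yes” share of 50% gets adjusted up to about 67%. And note the caveat the excerpt ends on: this only helps if the differences between respondents and non-respondents are actually captured by the sub-grouping variable, which is exactly what we can’t verify.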
As I said, pity the poor pollsters.
Once again let me say that I’m not assuming that all, or even most, pollsters are trying for that sort of accuracy. My point is that the ones that are attempting to get it right face a formidable task. Of course, they knew that when they went into the field. But back then, response rates were so much better that non-response bias wasn’t considered such a big deal. Now that the problem has grown, there’s a much greater chance that it matters a great deal.
Many people in the blogosphere focus on the dilemma of party proportion as vitally important. And there’s no question that it is. But that’s the easier problem to see; after all, one can view (for the most part) the party-affiliation breakdown each pollster uses, even if it’s nearly impossible to tell whether those proportions will be correct for this voting year. (I might also mention the problem of people who call themselves “independents” but who really are disaffected former members of a party and still vote pretty reliably with that party.) Non-response bias is far more hidden and unknown, and probably more difficult to control.
I don’t know if I’m especially typical of anything, so the following represents a sample of one. But I’m an older person who has only a cellphone, my motivation to vote is at the highest level (would crawl over broken glass etc. etc.), and I probably would not answer a pollster.
Not that I’ve ever been asked.
[ADDENDUM: Ed Morrissey weighs in on polls and party affiliation.]