During three decades of covering Texas and national politics, I got a lot of front-page play writing horse-race political polls. They were interesting to analyze and exciting to write, especially when Texas was a closely contested two-party state.
But over the years, I developed a growing unease with survey research and its effect on elections. And whatever doubts I had about political polls have hardened into alarm as I’ve watched news organizations such as CBS/New York Times and the Texas Tribune start using Internet panels to conduct their surveys.
First, even the old methods could have a negative impact on elections.
Democrat Jim Mattox complained to me in 1989 that a survey my newspaper had done showing Ann Richards far ahead of him shortly after she announced for governor had killed his fundraising. He called the poll a self-fulfilling prophecy because it was done at a time when she’d had a burst of publicity, and, while he admitted she was ahead, without money he’d never be able to run an adequate campaign to catch up.
Republican Kent Hance made a similar comment to me about polling that showed millionaire Clayton Williams far ahead going into the 1990 GOP primary. Claytie had gone up early on television, which gave him a boost in the polls, and the surveys made it hard for the rest of the inadequately funded field to raise money and catch up. Let’s face it, the money folks like to bet on winners.
And when we started calling back the people we had surveyed, I often found people who had no idea how they had answered. They would ask me what they had told the pollster, and sometimes it was obvious they knew nothing about the candidates. But maybe that was just a reflection of an uninformed electorate.
Second, the public — and a lot of news people — have no idea how to actually read polls.
The public often comes away from news media polls believing they are like a pot of gold telling you exactly how the race is lined up. In truth, they are at best an approximation. For one, they are just a snapshot in time that can be affected by events inside or outside the window in which the survey was conducted.
For example, in Gov. Ann Richards’ re-election campaign, the survey research done for my newspaper was conducted over three days. Day one found George W. Bush winning. But on day two, Richards received the endorsement of Ross Perot, and it was the news of the day. In that night’s interviews, Richards closed the gap to where Bush was just barely ahead. Day three once again showed Bush with a substantial, winning lead. The final result was a poll that showed Bush ahead, but not substantially so. For those of us looking at the numbers, though, the poll was really saying Bush was going to win. So the stories came out as a mishmash of “They’re neck and neck, but the election is trending in Bush’s direction.” What does that tell the reader?
Also, most reporters and readers pay little attention to the margin of error. Polls are, after all, just a statistical extrapolation from the sample of people surveyed. The margin of error quantifies the uncertainty that comes from questioning a sample rather than the whole electorate. If a politician gets 42 percent support in a poll with a margin of error of plus or minus 3.5 percentage points, that really means their support could be as high as 45.5 percent or as low as 38.5 percent. If you really want to get an idea of what has gone on in this year’s governor’s race, look at the Pollster aggregation of polls that has a bubble for the margin of error.
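The arithmetic here is simple enough to sketch. In this minimal example, the 42 percent figure and the plus-or-minus 3.5 points are from the text above; the sample size of 800 is purely an assumption for illustration, since the column does not give one:

```python
import math

def support_interval(estimate, margin):
    """Range of plausible support: the estimate plus or minus the margin of error."""
    return (estimate - margin, estimate + margin)

# The example from the text: 42 percent support, +/- 3.5 points.
low, high = support_interval(42.0, 3.5)
print(low, high)  # 38.5 45.5

# Where a margin like that comes from: the standard 95 percent
# formula for a simple random sample (n = 800 is an assumed size).
p, n = 0.42, 800
moe = 1.96 * math.sqrt(p * (1 - p) / n) * 100
print(round(moe, 1))  # 3.4
```

The second calculation shows why the margin of error shrinks only slowly: quadrupling the sample size merely halves it.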
Third, polling has taken a beating in recent years. People refuse to answer. Cell phones with one area code can be located anywhere in the United States. For the sake of cutting costs, a lot of telephone surveys have become robocalls with the respondent punching numbers on the phone, so you don’t really know who is being surveyed.
But the most bothersome trend involves Internet survey panels as put together by YouGov.
The panels are essentially built from groups of people on the Internet who agree to be surveyed about consumer products. So you start with a panel of people who have computers and enough disposable income to want to be surveyed about consumer products.
At the start of this year, The New York Times had a policy against even quoting a poll done by this method.
“Self-selected or ‘opt-in’ samples – including Internet, email, fax, call-in street intercept, and non-probability mail-in samples – do not meet The Times standards, regardless of the number of people who participate.” http://www.nytimes.com/packages/pdf/politics/pollingstandards.pdf
But in June, the Times announced it would be joining CBS in using YouGov panels to do surveys in this year’s election. The claimed difference is that the panel would be treated like a probability sample rather than a raw opt-in sample. What that means, if I understand it correctly, is that the pollsters choose the respondents they believe will represent the people who actually vote. Then they weight each respondent to match the historical demographic pattern of voting.
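In general terms, that kind of demographic weighting works like the toy example below. The age groups, target shares, and respondents are all invented for illustration; YouGov’s actual matching-and-weighting method is proprietary and far more elaborate than this:

```python
from collections import Counter

# Toy opt-in sample: (age_group, supports_candidate). Invented data
# that deliberately over-represents older respondents.
sample = [
    ("18-34", True), ("18-34", True), ("18-34", False),
    ("35-64", True), ("35-64", False),
    ("65+", True), ("65+", False), ("65+", False), ("65+", False), ("65+", False),
]

# Assumed population shares the pollster wants the sample to match.
targets = {"18-34": 0.30, "35-64": 0.45, "65+": 0.25}

counts = Counter(group for group, _ in sample)
n = len(sample)

# Each respondent's weight: target share divided by observed share.
weights = {g: targets[g] / (counts[g] / n) for g in counts}

unweighted = 100 * sum(s for _, s in sample) / n
weighted = (100 * sum(weights[g] for g, s in sample if s)
            / sum(weights[g] for g, _ in sample))
print(round(unweighted, 1), round(weighted, 1))  # 40.0 47.5
```

The point of the sketch is that weighting can move the headline number by several points, which is exactly why who ends up in the panel in the first place matters so much.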
This brought the Times a lot of bad publicity.
For a relatively unbiased look at the survey methodology I’d point you to this story from the Pew Research Center. But the Times took it with both barrels from the nation’s professional pollsters. Take a look at Politico and The Washington Post. The American Association for Public Opinion Research derided the polling as untested:
Until this week, the Times maintained and published a set of rigorous standards to guide the determination about when polling data could (and could not) be used in a story. Those detailed standards were summarily removed and replaced with a statement indicating that the old standards were undergoing review and that “individual decisions about which poll meets Times standards and specifically how they should be used” would guide decisions in the interim. This means no standards are currently in place.
Which is a long, roundabout way of bringing us to this week’s survey about the Texas governor’s race.
The CBS/NYT/YouGov survey found Republican Greg Abbott leading Democrat Wendy Davis 57 percent to 37 percent. With a margin of error of plus or minus 3 points, the race could actually be as close as 54-40, still a substantial Abbott lead but not the 20 points so widely reported.
By applying the margin of error, I got a result closer to the Texas Tribune/University of Texas survey, which found the race at 54-38. But those pollsters also used a YouGov panel to conduct their poll. So other than having a different set of social scientists massaging the panel, you still have the same closed loop of people taking the survey.
Now, as much as my Democratic friends would like me right here to say the polls are wrong, I’m not going to. This state has a majority Republican voter turnout and likely will for some time.
But, here comes the self-fulfilling prophecy part.
The Democrats this year, through the Davis campaign and Battleground Texas, have spent large sums of money on a voter-turnout effort. I doubt that, even if it were totally successful, it would win any statewide elections this year. It might make them close, and that would change the psychological landscape going into the next election. However, with all the media attention on surveys that show Davis getting clobbered, those potential voters are likely to ask themselves why they should bother when it is already over.
Maybe it is time to give the political polls a rest. It won’t happen, but it should.