Friday, October 28, 2016

Polling

[Prefatory Note: Our Statistics and Polling Department (SAP) has been troubled by the lack of consistency not only between different polls, but even more so by the sometimes wild gyrations of any one polling firm (e.g., an ABC poll showed Clinton ahead by 4 points one day and 12 points a week later, though nothing had really changed to explain such a jump). This piece from the WSJ (Oct. 28) explains well some of the polling problems.]

Each day, millions of calls are placed to phone numbers that have been disconnected or are no longer in service. Everyone misdials—men and women, rich and poor, blacks and whites. Scott Richards wants to replace the familiar “Your call cannot be completed as dialed” message with a different one: “Would you like to take a survey?”
His is among a spate of ideas bubbling up for new ways to sample public opinion. The alternatives are coming because the traditional method—randomly calling phone numbers again and again until someone answers—has grown far more labor intensive and costly in recent years.
One idea is to push surveys on people who mistype the name of a website. Polling companies are assembling large, standing panels to survey online. For certain websites, Google is asking internet users to answer surveys before they can get access. Others monitor Twitter posts to gauge public sentiment. The new survey approaches give some in the industry pause because they make it harder to tell good research from bad. In ways pollsters don’t fully understand, the various methods seem to produce different results. In August, online surveys showed more support for Donald Trump than phone polls did; in September, it was Hillary Clinton for whom online polls produced better numbers.
Political polling is already hard because it is difficult to predict voter turnout. The differences between online and phone results add another layer of uncertainty. “A lot of experimentation is going on,” said Charlie Cook, a political analyst and publisher of the Cook Political Report. “We’re in a period of transition, and we’re having to deal with it.”
Much of the ferment traces to changes that have driven up the cost of the kind of polling still considered the most reliable: well-designed telephone surveys.

For decades, randomly calling a set of phone numbers was a relatively simple and effective way to sample opinion. Then, response rates began to tumble, starting with the advent of call screening.
In 1997, 36% of households sampled agreed to participate in a poll, according to the Pew Research Center. Now it is 9%. This means thousands more calls must be made for a telephone survey to reach a sufficient sample.
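A rough back-of-the-envelope calculation shows what that drop implies. The sketch below uses the Pew response rates cited above; the 1,000-interview target is an assumed figure chosen purely for illustration:

```python
# Back-of-the-envelope: expected dials needed to reach a target number
# of completed interviews at a given response rate.
import math

def dials_needed(target_completes: int, response_rate: float) -> int:
    """Expected number of dials to obtain target_completes interviews."""
    return math.ceil(target_completes / response_rate)

TARGET = 1_000  # assumed sample size, for illustration only

for year, rate in [(1997, 0.36), (2016, 0.09)]:
    print(f"{year}: ~{dials_needed(TARGET, rate):,} dials at {rate:.0%} response")

# 1997: ~2,778 dials at 36% response
# 2016: ~11,112 dials at 9% response  -> roughly 4x the calling effort
```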
Compounding the problem is that roughly half of households now have only cellphones, and a 1991 federal law prohibits calling mobile phones with an auto-dialer. To call these people, pollsters must dial all 10 digits by hand. “This equates to a substantial number of interview hours,” said Courtney Kennedy, director of survey research at Pew, which conducts about 30 U.S. surveys a year.
Pew told its calling houses this year it wanted three-quarters of responses to come from cellphones, versus none a dozen years ago. Its national surveys will cost 75% more this year than in 2004 and require several more days to obtain enough respondents. A single well-designed phone survey now can cost as much as $100,000.
“When your costs go up like that, there’s less good-quality research,” said Cliff Zukin, a Rutgers University professor of public policy and political science.
The law against using auto-dialers to call mobile phones was aimed at abusive telemarketers, and class-action lawsuits brought under the law have resulted in settlements in the tens of millions of dollars. “The public-opinion profession is sort of the dolphins in the tuna net,” said Duane Berlin, general counsel of the Council of American Survey Research Organizations.
The difficulty of predicting voter turnout was a large part of many recent polling blunders, including surveys earlier this year showing Mrs. Clinton ahead in a Michigan primary she badly lost. In the general-election campaign, some researchers say, the high disapproval ratings of both main candidates make it even harder to predict turnout.
Dan Wagner, head of Chicago research firm Civis Analytics, studies what individuals did in past elections to see whether he should deem them likely voters. Still, getting that call right doesn’t help if the people can’t be reached. Late one night in October 2014, Civis senior data scientist David Shor pulled Mr. Wagner aside in their Washington office to review some disturbing data. To record the opinion of one young Hispanic male took them 300 phone calls, while only 10 were needed to reach a woman over 65.
If that continues, “we no longer have a viable political business,” he recalls thinking.
Instead of just phone polling, Civis began also reaching out via large online survey panels, which are pools of people contacted over and over to give their opinions. It is a technique that has grown rapidly over the past decade.
Initially, survey firms assembled such panels using traditional phone-polling methods, then gave members computers to answer questions. Later, some began adding panelists who responded to online offers to make money by answering surveys. Now, drawing from an online pool of survey takers is a common research method, with the pools weighted to resemble broad U.S. demographics.
Mr. Wagner at Civis works to scrub the panels of professional survey takers who answer questions all day just to collect small fees. In search of accuracy, he draws from multiple survey methods and develops complex weighting techniques.
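To give a sense of what such weighting involves, here is a minimal sketch of simple cell-based post-stratification; the age cells, population shares, and sample counts are invented for the example and are not Civis’s actual method or data:

```python
# Minimal post-stratification sketch: weight each respondent so the
# sample's demographic mix matches known population shares.
# All cells and numbers below are illustrative, not from any real survey.

population_share = {  # assumed census-style targets
    "18-34": 0.30, "35-64": 0.50, "65+": 0.20,
}
sample_counts = {  # how many respondents the survey actually reached
    "18-34": 50, "35-64": 250, "65+": 200,
}
n = sum(sample_counts.values())

# Weight = (population share) / (sample share) for each cell.
weights = {
    cell: population_share[cell] / (sample_counts[cell] / n)
    for cell in population_share
}

for cell, w in weights.items():
    print(f"{cell}: weight {w:.2f}")
# 18-34: weight 3.00  (under-represented, weighted up)
# 35-64: weight 1.00
# 65+:   weight 0.50  (over-represented, weighted down)
```

In practice firms use more elaborate techniques (raking across several demographics at once), but the principle is the same: scarce respondents, like the young Hispanic men Civis struggled to reach, end up carrying larger weights.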
Increasingly, he and others are experimenting with another polling shop: Google.
The search giant has teamed up with more than 1,000 websites to require users to answer a survey before accessing the site’s content. Google charges researchers to set up a survey, then gives a cut to the participating websites. Its Google Surveys can reach thousands of poll respondents in a matter of hours, at low cost.
Methods like that, or pitching surveys to people who mistype a website or reach a disconnected phone number, are known as “river sampling”—akin to dipping a net in a stream and collecting whatever swims by.
To induce people to take its surveys, Google limits them to 10 questions. That posed a challenge for research firm Echelon Insights when it used the system to poll viewers of the early Republican primary debates last year, because there were more than 10 candidates. Since it couldn’t ask a separate question about each one, Echelon had to use a photo of the list and ask respondents to write in whom they favored.
Google guesses respondents’ demographic data, such as age and gender, based on the same browsing information it tracks for targeting ads. Responses are weighted to reflect broader demographics.
In 2012, polling guru Nate Silver found Google Surveys the second-most-accurate poll in that year’s presidential election. Pew is less enthusiastic. A separate analysis by Pew the same year found that Google had gotten respondents’ gender right only 75% of the time. Pew also uncovered some quirks, such as that people who answered the surveys tended to be slightly more conservative than the general populace and, counterintuitively, far less likely to say they looked up medical information online. Pew decided not to use the system. A Google spokesman said the technology has improved since Pew’s 2012 analysis.
Pew and research firm RTI International now are analyzing the feasibility of Mr. Richards’ idea. His company, Reconnect Research, invites people to take a survey when they misdial telephone numbers or reach one that is unavailable because of some network glitch. Doing this seems to collect an evenly distributed sample that doesn’t require much weighting, said Karol Krotki, a senior research statistician at RTI.
Jon Krosnick, a Stanford political-science professor, worries that most online surveys have fundamental problems. Unlike traditional phone polls, there is no master list of internet users that researchers can draw from, meaning each member of the population doesn’t have an equal, random chance of being surveyed. “You can get the proportion right, but it’s not a random sample of men and not a random sample of women”; it is only a survey of those who happen to be online, Mr. Krosnick said. The telephone is more effective because pollsters can repeatedly call people until they eventually answer, he said, whereas online surveys sometimes are answered by people who seek them out.
In an experiment this spring, Pew asked the same questions across nine online surveys and compared the results against government benchmarks from census data. Pew found major variations. While the number in online polls who said they had a driver’s license matched benchmarks, the number of respondents who said they smoked diverged widely. Also, in the online samples, respondents skewed toward low-income adults who had no children and were more likely to live alone. Estimates based on blacks and Hispanics were particularly far off from what census data showed.
Stephen Ansolabehere, a Harvard professor who has studied differences between online and phone surveys, suggests sticking with the innovations. “Every new communications technology has changed the way surveys operate,” he said. “One approach is to say all these new technologies are untrusted, they’re unproven, let’s just not trust them. And the other approach is: We’ve got to figure out how to use them, because the emergence of these new technologies is making the old ones obsolete.”
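The benchmark check Pew describes can be sketched simply: compare each survey estimate against an external government figure and look at the size of the gaps. The numbers below are invented placeholders, not Pew’s actual results:

```python
# Compare survey estimates to external benchmarks via absolute error.
# All figures are invented placeholders for illustration.

benchmarks = {"has_drivers_license": 0.86, "smokes": 0.17}
survey_estimates = {"has_drivers_license": 0.85, "smokes": 0.25}

errors = {k: abs(survey_estimates[k] - benchmarks[k]) for k in benchmarks}
mae = sum(errors.values()) / len(errors)

for item, err in errors.items():
    print(f"{item}: off by {err:.1%}")
print(f"mean absolute error: {mae:.1%}")
```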
