On Sept. 11, Nate Silver sent the tweet that rocked the polling world.
Public Policy Polling, a left-leaning private polling firm based in Raleigh, immediately rebutted the criticism. The firm had withheld data from a poll it conducted ahead of a Colorado state senate recall because it thought the results were wrong: voters in the district (one that had voted Democratic by wide margins in past elections) intended to recall the Democratic state senator by a 12-point margin. The senator was indeed recalled by 12 points, and PPP director Tom Jensen told The Daily Tar Heel that he later released the full poll to start a conversation about the gun industry's power, as the senator was recalled for supporting a gun control bill that the public generally supported.
But Silver, an acclaimed statistician who famously predicted the 2008 presidential election, called the decision, in an increasingly heated series of tweets, suppression and bias, accusing PPP of playing "fast and loose with methodology and disclosure standards."
Not only did the Twitter battle entertain onlookers, who promptly dubbed the spat #NerdFight; it also offered some food for thought. Campaigns and candidates use polls all the time to make their case, and for the most part, their audiences assume those polls are accurate. We don't expect pollsters to withhold data or strategically frame questions to prove a point. But do they?
During the Amendment One campaign, the Protect All N.C. Families coalition lacked the resources to regularly commission its own polls, so it relied on outside polls, particularly those from Public Policy Polling (Kreiss and Meadows). At one point, though, PPP veered away from the campaign's chosen message, which focused on the amendment's effects on health care coverage for children of unmarried parents and on domestic violence protections. Instead, PPP asked respondents whether they knew the amendment would take away civil unions and domestic partnerships, then followed up by asking whether they would vote for an amendment that did so. Framed that way, the race moved to a tie, rather than the winnable race the campaign had been seeing when the questions focused only on children and domestic violence.
PPP showed the campaign the results of the poll before it was published, and the campaign staff was upset, since the poll focused on the message they had been trying to avoid. See the following exchange between Celinda Lake, from a prominent national Democratic consulting firm, and campaign manager Jeremy Kennedy (recorded by Kreiss and Meadows):
Lake: “I wish they would just ask the question, why can’t they get on this program, that it would do away with domestic violence protections and children’s healthcare, or legal protections and healthcare for children of unmarried parents. Ask that question, do us a favor here or stop it…. So tell them we don’t want it released, or, no we don’t want it released. If they release it they do it over our objection. Obviously we don’t have any control, but they need to understand we will consider the release of this data an unfriendly act.”
Kennedy: “I mean if they didn’t do it the last two months because they didn’t think the results would help us, I don’t know why they feel the need to do it this month. I think it’s probably because like you said they think that those last two questions help us.”
Public Policy Polling did release the poll over the campaign's objections. But if Kennedy's account is accurate, PPP had withheld data in the past and was deciding what to release based on whether the results seemed likely to help the campaign.
PPP has also been criticized by The New Republic for how it assembles its samples: the firm randomly deletes respondents to make its sample match the distributions of race, age and gender found in Census data and prior exit polls. The article pointed to a case in which PPP's prediction of the racial makeup of Georgia's electorate in the 2014 Senate race (71 percent white and 24 percent black) varied widely from the 2012 general election (61 percent white and 30 percent black). PPP responded that, based on its past polling experience in Georgia, African-American turnout would be lower when no black candidate is running.
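To see what "random deletion" means in practice, here is a minimal sketch of one way such a share-matching rule could work. This is purely illustrative: PPP has not published its procedure in this detail, and the function name, targets, and matching rule here are assumptions for the example.

```python
import random

def match_targets(respondents, targets, key, seed=0):
    """Downsample a raw sample via random deletion so each group's share
    matches a target distribution (e.g., Census race shares).

    Illustrative sketch only; not PPP's actual procedure.
    respondents: list of dicts; targets: {group: desired share};
    key: the dict field holding the group label (e.g., "race").
    """
    rng = random.Random(seed)

    # Bucket respondents by group.
    by_group = {g: [] for g in targets}
    for r in respondents:
        by_group[r[key]].append(r)

    # Largest final sample size at which no group falls short of its
    # target count: the most under-represented group sets the ceiling.
    n = min(int(len(by_group[g]) / share) for g, share in targets.items())

    # Keep a random subset of each group sized to its target share;
    # everyone else is "randomly deleted."
    kept = []
    for g, share in targets.items():
        kept.extend(rng.sample(by_group[g], round(share * n)))
    rng.shuffle(kept)
    return kept
```

For instance, a raw sample of 800 white and 200 black respondents (80/20), trimmed toward 61/39 targets, keeps all 200 black respondents and randomly deletes whites until the shares line up. The rub, as the New Republic piece suggests, is that the targets themselves are a judgment call: choose 71/24 instead of 61/30 for Georgia and the same raw data yields a very different poll.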
But do we want pollsters tweaking samples to make them seem right? Or withholding data when it doesn't seem right? Or deciding what to release based on whether it will help a campaign? It calls to mind the line Mark Twain made famous: "There are lies, damned lies and statistics."
In Public Policy Polling's defense, it is a private firm. But thousands of people read its widely distributed polls, and the public tends to believe its findings. We know poll results can influence how people vote, and whether they vote at all. In the end, though, polls too might just be part of the performance of politics.
Kreiss, Daniel, and Laura Meadows. “Campaigning from the Closet: The Contexts of Messaging During the Campaign to Defeat North Carolina’s Amendment One.”
Cohn, Nate. “There’s Something Wrong With America’s Premier Liberal Pollster: The Problem with PPP’s Methodology.” New Republic.
Jensen, Tom. “Reflecting on the Colorado Recalls.” Publicpolicypolling.com.