Overall, the 2020 polls showed the largest statistical bias toward Democrats in the history of U.S. election polling, underestimating Republican performance by nearly 5 points on average.
Media and Democrat polls got the presidential, Senate, and House elections all badly wrong in staggering ways. The Economist election unit's final presidential forecast, for example, gave Biden 50 more electoral votes than he actually won. An ABC News/Washington Post poll had Biden winning Wisconsin by 17 points with a week to go before election day; the final margin between Trump and Biden in Wisconsin was 0.7 points. FiveThirtyEight's polling average showed Trump barely winning Ohio, by 0.8 points over Biden; Trump actually won Ohio by 8.4 points. The New York Times predicted that even if the polls were as wrong as they were in 2016, Biden would still win Florida by close to 1 point; Biden lost Florida to Trump by 3.3 points. FiveThirtyEight's final U.S. House polling forecast gave Democrats 20 more seats than they actually won. And in the Maine Senate race between Republican Susan Collins and Democrat Sara Gideon, every single poll, all 14 of them, mostly conducted by media and Democrat polling groups ranging from the New York Times to Change Research, got the race wrong. One Quinnipiac poll gave Gideon a 12-point lead over Collins; Collins won the race by 8.6 points.
After the great polling debacle of 2016, one might have expected the polling industry to adjust its methods to more accurately gauge what voters are actually thinking. Instead, the statistical bias that polls displayed in favor of Democrats grew worse in 2020 than in 2016, rising from 3.0 to 4.8 percentage points.
To this day, the polling industry generally has not changed its flawed methodologies and in many cases has refused to correct for unprecedented levels of pro-Democrat bias. According to FiveThirtyEight's Nate Silver, the polls in 2020 were "pretty normal by historical standards." (This is almost as embarrassing as Silver's 2016 election night call, when at 8:13 pm, even after Trump had shown remarkable strength in early Florida and Virginia returns, Silver went on ABC News to dramatically announce to a breathless George Stephanopoulos that he had changed the chances of a Hillary Clinton victory from 72% to 76%, and added that the evening was going pretty much as the Clinton forces had anticipated.)
There is, however, one pollster who has consistently outperformed the others during the Trump era. That is the Trafalgar Group.
In 2016, the Trafalgar Group's polling data did not just show that Trump would win the presidency; it accurately predicted that Trump would get 306 electoral votes and would win Pennsylvania, North Carolina, Michigan, Florida, and Wisconsin, something virtually no one else was forecasting.
In 2018, the Trafalgar Group released a poll showing Ron DeSantis winning the Florida Governor’s race. By contrast, the New York Times poll for that race showed Democrat Andrew Gillum up by 5 points and an NBC News poll showed Gillum winning by 4 points. DeSantis won the race on election day as the Trafalgar poll had predicted.
In 2020, polling from the Trafalgar Group had the lowest average error of virtually any polling group in the nation, beating out polls from the New York Times, ABC News, the Washington Post, and even Rasmussen. Trafalgar Group polling correctly showed Trump winning North Carolina, Ohio, Texas, and Florida, and accurately showed that the Wisconsin race would be decided within a 1-point margin.
AMAC Newsline recently interviewed the CEO of the Trafalgar Group, Robert Cahaly, to discuss why his polls often get it right when the media and even Republican pollsters keep getting it wrong.
Cahaly noted that one of the things that makes the Trafalgar Group “an industry disrupter” is that they “reject most of the polling orthodoxy.”
Among his insights, Cahaly understands that the design of the polls themselves can drastically alter who responds to the sample. "Long questionnaires are just not realistic," he said. "You are not going to get a mom or a dad to answer long questionnaires. You aren't going to get average people. These people that you get answering 30-question polls are more invested in politics than the average person. No normal person will take the time to answer 30-question polls."
Cahaly also thinks that what he calls “social desirability bias” can impact polling results. When asked whether there is such a thing as a shy Trump voter and how pollsters can best get shy conservative voters to answer questions truthfully, Cahaly replied, “People are hesitant to admit that they will vote for someone who is controversial. You have to get that answer.”
Cahaly has developed a variety of techniques to do just that. “What we did a lot of in 2016 is we would ask, ‘Who do you think the neighbors are voting for?’ That’s a way we found over the years to get an answer. Give people a polite way of telling you something uncomfortable. If somebody has a position on a controversial issue, they don’t want to be judged for what they think.”
“In 2016, what we found is people didn’t want to admit they were voting for Trump,” he continued. “Clinton is saying everyone who’s voting for Trump is a deplorable and all this nonsense. People were hiding their feelings. In 2020, it was even worse. Due to this cancel culture stuff, conservatives didn’t even want to participate in a poll. Period.”
So his firm dug even harder to find the hidden Trump vote in 2020. "One of the methods we used was telling people who we were," he said. His pollsters would tell respondents to "just put our name in Google and you'll see we are an actual polling group and not affiliated with a campaign."
Ultimately, Cahaly thinks Trafalgar Group is consistently turning out more accurate polls than its competition because “other polling groups from 2016 to 2020 did not change. They said they sat down and figured out what they did wrong and were adjusting their models. But they never actually did.”
He finds this difficult to fathom. “We had a dress rehearsal for 2020, and it was called 2018,” he said. “If you look at the Governor’s race in Florida, we were the only ones who said DeSantis would win. Every other poll had the Democrat Gillum winning that race. The issue is that they can’t conceive of the fact that they have an old model and people lie.”
“People are just tired of being judged,” he said. Cahaly believes that polling in the Trump era must find ways of measuring voter sentiment that address this obvious social desirability bias.
When asked whether media polling with an overwhelming statistical bias toward Democrats amounts to "suppression" polling, as Trump alleges, Cahaly said: "It's either done on purpose or it's incompetence. So many so-called political pollsters who poll for the Republican Party also continue to get it wrong."
One major example of polling failures in both the 2016 and 2020 elections was in gauging minority support for Republicans. Cahaly notes that Hispanics in particular supported Republicans and President Trump, and not just in Florida and Texas. "It was all across the country, in Massachusetts and Wisconsin and California. When you talk to the polling establishment, they said the exit polls don't indicate that. But you have to ask, how are they doing the exit polling? People are going to be less honest with you in person in exit polls when someone has a clipboard or an iPad."
Cahaly thinks Trump’s true gains with minorities have been underreported. “I will tell you that across the country Trump did better than 35% with Hispanics as an average and he did better than 25% with African-Americans,” he said.