Pollsters indulged in breezy self-congratulation in the aftermath of the 2022 midterm elections. Pre-election polls, they declared, did well overall in signaling outcomes of high-profile U.S. Senate and gubernatorial races.
In an allusion to polling’s stunning misfires of 2016 and 2020, Joshua Dyck, director of the opinion research center at UMass Lowell, asserted as the 2022 results became known: “The death of polling has been greatly exaggerated.”
Yet, a sense of doubt lingered: While they did not repeat their failures in recent national elections, polls in 2022 were more spotty than spectacular in their accuracy, and performance assessments often depended on which poll was consulted. Or perhaps more precisely, on which polling aggregation site was consulted. Aggregators typically compile and analyze results reported by a variety of pollsters. They often adjust the composite data to emphasize findings of recently completed surveys or to minimize effects of unusual or “outlier” polls.
Misses, near and far
As compiled by the widely followed RealClearPolitics site, polls collectively missed the margins of victory by more than 4 percentage points in key 2022 Senate races in Arizona, Colorado, Florida, New Hampshire, Pennsylvania and Washington.
Differences between polling averages and outcomes were especially striking in Colorado, Florida, New Hampshire and Washington, where incumbents won easily. In gubernatorial races, deviations from polling averages of 4 percentage points or more figured in the outcomes in Arizona, Colorado, Florida, Michigan, Pennsylvania and Wisconsin.
Forecasts posted at Nate Silver’s FiveThirtyEight.com diverged from outcomes somewhat less markedly than those of RealClearPolitics — but still anticipated closer Senate races than what transpired in Colorado, New Hampshire and Pennsylvania.
Expectations that Republicans would score sweeping victories no doubt were buoyed by the predictions of RealClearPolitics. It projected that the GOP stood to gain three Senate seats and control the upper house by 53 seats to 47 — an outcome that proved illusory.
While hedged, the final, so-called “Deluxe” forecast posted at Silver’s FiveThirtyEight.com and updated on Election Day did little to dampen expectations of a GOP wave. The forecast said Republicans had a 59% chance of winning control of the Senate.
Elections and polling controversies
To say that polling performance was spotty in 2022 is not to say that election surveys were all off-target.
Far from it.
Even so, as I noted in my book, “Lost in a Gallup: Polling Failure in U.S. Presidential Elections,” “It is a rare election that does not produce polling controversies of some sort.” And that’s not so surprising, given that polls are conducted by a variety of public entities, some of which have partisan orientations.
This time, controversy swirled around Republican-leaning pollsters such as Trafalgar Group and the inclusion of those polls in averages compiled by RealClearPolitics. Incorporating such data, critics claimed, led RealClearPolitics to overstate Republican prospects. The senior elections analyst for RealClearPolitics, Sean Trende, disputed such an interpretation as a “theory that doesn’t work well.”
Trafalgar, which in 2021 had been rated A-minus for accuracy by FiveThirtyEight.com, saw its surveys conspicuously misfire in 2022. In New Hampshire’s U.S. Senate race, for example, Trafalgar’s final pre-election poll indicated that Republican Don Bolduc had taken a narrow lead. Bolduc lost to incumbent Maggie Hassan by 9 percentage points.
That was no small miss, and Trafalgar’s inaccuracies attracted criticism even from friendly sources. “They were not reliable indicators of what was to come,” wrote Scott Johnson at the Republican-oriented “Powerline” blog. Trafalgar did not respond to an email seeking comments about its 2022 polling performance.
Polling misses tended to be bipartisan, though. Data for Progress, a Democratic-leaning pollster graded as a “B” in 2021 by FiveThirtyEight, estimated closer Senate races than what transpired in Colorado and New Hampshire, and signaled the wrong winners in Arizona and Nevada.
Data for Progress nonetheless seemed eager to assert success for its polls, posting online what appeared to be an incomplete draft of a post-election news release that said it “outperformed the polling averages, and was more accurate than any other pollster” in the midterms. The draft contained several placeholders marked “xx,” indicating where data points were to be inserted.
Pollsters not shy about congratulating selves
So, what can be taken away from polls of the 2022 midterms?
The outcomes confirmed anew that election polling is an uneven and high-risk pursuit, especially at a time when some pollsters are experimenting with new methodologies to reach would-be respondents while others are still relying on traditional, telephone-based techniques.
The 2022 outcomes also confirmed that a self-congratulatory impulse is never far from the surface among practitioners in a field that has known much error and disappointment.
Pollsters are not necessarily shy about boasting if their estimates are reasonably close to election results. This tendency has been apparent intermittently for more than 80 years, since George Gallup placed double-page ads in Editor & Publisher magazine in 1940 and 1944 to proclaim the accuracy of his polls in presidential elections those years.
The midterms also confirmed the news media’s insatiable appetite for poll results. Fresh polling data — much of it produced or commissioned by news outlets themselves — seemed inescapable during the closing days of the 2022 campaign. As they usually do in national elections, polls shaped expectations that, in some cases, faded as votes were counted.
Author: W. Joseph Campbell, Professor of Communication Studies, American University School of Communication