It should come as no surprise to readers of “Our America” that I closely follow political polling. While I don’t agree with their particular editorial bias, Real Clear Politics does an excellent job of compiling information from races all over the nation. If you want to see where the polls stand – from Alaska to Wyoming – check them out.
The Polls
I first learned about polling in history class with the famous “Dewey Defeats Truman” Chicago Tribune headline after the 1948 Presidential election. Of course, the paper was being held up by a smiling Harry Truman, the actual winner in 1948. What went wrong? It was a telephone poll, and it went overwhelmingly for the Republican, Dewey. But not everyone had a telephone in 1948, so the unbalanced sample was unintentionally biased toward the Republican. Truman won. The lesson: polls aren’t always right.
My next polling experience was while working for the Carter/Mondale campaign in 1976. My boss, Michael Jackson (he looked a lot more like the Nebraska lineman he was than the singer he wasn’t), wanted an idea of how Carter was doing in the Cincinnati area. So one night we shut down our twenty-line phone bank and, acting as “ABC Polling” (named after his previous employer, the “Anyone But Carter” coalition), made a thousand phone calls. We randomized our sample by calling every fifth name on the voting list and looking up their numbers in the phone book.
I didn’t know the “mathematics” of polling, but Mike knew that 1000 responses would give us rough numbers on where we stood. He figured a 5% margin of error. And unlike in 1948, by 1976 almost everyone had a home phone line, so the phone book actually gave us a reasonably “random” sample of the voters. The results showed we were close, and in fact, Carter did win Ohio by a narrow margin, icing his electoral victory over Ford.
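For readers who do want the mathematics: the standard textbook rule of thumb (not anything we computed that night) puts the 95% margin of error for a simple random sample at roughly one over the square root of the sample size:

$$\text{MOE}_{95\%} \approx \frac{1}{\sqrt{n}} = \frac{1}{\sqrt{1000}} \approx 0.032 \approx \pm 3\%$$

So under that simple-random-sample assumption, Mike’s 5% figure for a thousand responses was, if anything, on the cautious side.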
Modeling
Polling has dramatically changed since those days. First, the days of home phones listed in the phone book are over. Fewer than 40% of Ohioans have “land lines” anymore (NBX). So calling from the “phone book” would produce an irretrievably biased sample of those who still have land lines, with whatever political skew that implies. If nothing else, it would certainly be a much older “sample” than the general public.
So pollsters had to find a different way to reach voters, rather than just taking “mass samples” like we did in 1976. Modern polling depends on developing “models” of what the electorate will look like in the next election. For example, if the model predicts that 15% of the voters in the upcoming election will be between 18 and 25, then the pollster needs a sample that includes that age group. And of course, they have to get enough respondents in that group so that one or two “outlier” answers won’t skew (or screw) the ultimate results. But no matter how many 18-to-25-year-olds they talk to, that group will never be weighted at more than 15% of the poll’s total result. That’s what modeling is all about.
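As a simplified sketch of how that weighting works (my illustration, not any particular pollster’s formula): the poll’s topline is a weighted average of each group’s answers, with the weights fixed by the model rather than by how many of each group happened to answer the phone:

$$\hat{p} = \sum_{g} w_g \, p_g, \qquad \text{e.g. } w_{\text{age 18–25}} = 0.15$$

where $p_g$ is the candidate’s support among respondents in group $g$ and $w_g$ is the model’s predicted share of the electorate for that group. If 18-to-25-year-olds happen to be 30% of the respondents but 15% of the model, their answers simply count at half weight.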
Using that approach, polls in the 1990s were highly accurate in predicting election outcomes. The whole “trick” was getting a good “model” for the next election. But with the 2004 Presidential election, poll modeling got a lot trickier, and the polls themselves became less accurate. So what happened?
Wedge Issues
In 2004 John Kerry was the Democratic candidate for President. Ohio was a crucial state for Kerry to win, and pre-election polling showed Kerry in the lead (though not with a 50% majority). But the Republican Ohio Secretary of State, Ken Blackwell, pushed a state Constitutional amendment onto the ballot banning gay marriage. That amendment drastically altered voter turnout in 2004, making the polling “model” inaccurate. Folks who generally didn’t vote showed up to vote against gay marriage, and it led to a Bush victory.
It’s called “wedge” politics: finding issues so energizing that they bring out folks who haven’t voted in the past. In 2004 it was gay marriage. In 2022 it’s likely to be the Dobbs decision by the US Supreme Court, overruling Roe v. Wade and returning the power to regulate abortion to the individual states.
What Model
There was a lot of criticism of polling in the 2016 and 2020 elections. Some of that criticism is justified: many pollsters used models that didn’t account for the power of the “MAGA” movement, undercounting Donald Trump’s expected vote. In addition, a new kind of poll subject came about: the liar. Folks know that polls affect campaigns, and some take advantage of pollsters by answering questions the opposite of how they will vote. Some do it because they are embarrassed to be “MAGA” supporters. Other “MAGA” supporters say the pollsters are invariably against Trump, so why not skew (or screw) their product.
In both 2016 and 2020, the actual Presidential results were much closer than the polling indicated before the vote. When you look at the polling for 2022, you have to wonder: do the current models account for the surge of women registering to vote, many of them to vote against the consequences of the Dobbs decision (and therefore against Republicans)? Look at the outcome of the August Kansas referendum that would have given the legislature the authority to ban abortion. It not only lost by a huge margin, but that August election drew turnout approaching the November Presidential election of 2012. Here in Ohio, new voter registrations by women are up 7% since before the Dobbs decision (Axios).
The Race
I am a track coach. The 400 meter sprint is the race that takes one complete lap of the track. I’ve had runners who charge to the lead and hold on for the full 400 meters. I’ve had other runners, just as successful, who waited until the last 100 meters to make their charge down the “home stretch,” take the lead, and win. So I am well aware that you can’t necessarily “call” a race when it’s half over at the 200 meter mark. I don’t know whether the front runners can hang on or whether the pursuers are just waiting to surge ahead. The “winner” at the 200 is often NOT the winner of the race.
Polls, even at their most accurate, are merely a “snapshot” of where things stand at that moment. They are most useful to campaigns for planning “the rest” of the race, whether that means holding on or surging. They aren’t designed to predict an outcome.
Home Stretch
Cornell Belcher was the leading pollster for the Democratic National Committee and the Obama campaigns. He made a very important point in an interview last night: the media, and Americans, look at polls as “outcomes” rather than as “snapshots.” Belcher observed that if you’re winning 46 to 43, it means 11% haven’t made up their minds, and there’s no reason to believe they’ll split the same way. If they break 8 to 3 against you, you lose by two percent.
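Belcher’s arithmetic, spelled out:

$$100 - 46 - 43 = 11\% \ \text{undecided}; \qquad 43 + 8 = 51 \ \text{vs.} \ 46 + 3 = 49$$

The three-point “leader” in the snapshot ends up losing, 51 to 49.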
When the poll shows you winning with more than 50%, that’s different. That’s still not a guarantee, but at that “snapshot,” it would be a win.
So polls in September don’t predict wins in November. They simply show where a candidate stands now. And they are wholly dependent on good modeling. So don’t count any candidate out just yet – we haven’t even hit the “home stretch”.
Today’s Snapshot
- Don’t forget the margin of error (±2%) and the undecided; see the note after this list
- Ohio Senate – Ryan (D) 47, Vance (R) 46 (undecided 7)
- Arizona Senate – Kelly (D) 47, Masters (R) 45 (undecided 8)
- Georgia Senate – Walker (R) 47, Warnock (D) 44 (undecided 9)
- Florida Senate – Rubio (R) 47, Demings (D) 44 (undecided 9)
- North Carolina Senate – Budd (R) 47, Beasley (D) 44 (undecided 9)
- Pennsylvania Senate – Fetterman (D) 49, Oz (R) 44 (undecided 7)
- Wisconsin Senate – Barnes (D) 49, Johnson (R) 47 (undecided 4)
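To read those numbers the way Belcher suggests, take the Ohio line as one example: with a ±2% margin of error, each candidate’s true support falls in an overlapping range, and the undecided dwarf the gap:

$$47 \pm 2 \Rightarrow [45, 49], \qquad 46 \pm 2 \Rightarrow [44, 48], \qquad \text{undecided} = 7\%$$

A one-point “lead” inside overlapping intervals, with seven points still undecided, is a snapshot of a toss-up, not a prediction.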