The day when FiveThirtyEight and the pollsters bombed

Trump won, 538 experts lost.

In your face!

This is a story of people looking down on you. More than that, this is a story of fake scientific data and deception. This is about those who tried to control the result of the election. And lost anyway.

I’ve been dreaming of writing this very article for months. All those experts, day after day condescendingly belittling people like me, who wouldn’t be convinced to accept the reality of their data…


Now it’s time for a little revenge. A sweet opportunity to unload all the frustration of the last months. They won’t mind: they still have all the clout and visibility in the world, plus they have all the necessary credentials. I’m no one and I’m not qualified. My berating them wouldn’t change a thing.

Just wait for the next electoral cycle, when everyone will be mesmerized, again, by polls, poll aggregators and outcome estimates.


You see, it’s one thing to lose. It’s a completely different thing to lose while playing dirty. That’s when you deserve to get scolded.

Especially at a crucial turning point in America’s history.

Saying “I didn’t play dirty, I just looked the other way and pretended everything was ok” is not a defense.

We’ll get to the meat of my accusations; but first we need a little context. If you want, you can skip it. At least be sure to check the final chapter, “Polls you can’t trust”.


Brief analysis in the aftermath of the vote.


The final result of the election will be 306-232 in favor of Trump. Really impressive, and yet built on flimsy vote margins in those few key states where the candidates were campaigning, because they knew those states were up for grabs.

Clinton obtained a slight advantage in the total number of votes nationally (a fraction of 1%). There are extremely good reasons for this “popular vote” not to matter. But even if you disagree, remember: if it mattered, candidates would have used a different strategy, people would have voted differently, so you can’t pretend those totals represent a sort of actual vote.


I must admit I was wrong, sort of, to be so confident in my prediction of the result, going against the common wisdom: the signals I saw were real, but not strong and decisive enough to hand Trump a landslide victory (although in a day I’ll give you one extra reason for Trump’s success that goes beyond such considerations).

It’s disheartening to see so many people so easily manipulated into almost giving the USA to Hillary Clinton, regardless of what she did or what she represents, just because the media pushed in that direction.

And yet, the Monster Vote predicted by The Conservative Treehouse didn’t materialize. There was no mass uprising against the Europeanization of America. I criticized the concept some time ago: it’s true that there was exceptional enthusiasm and participation at Trump rallies, and Trump got an exceptional result in terms of Primary votes. But this kind of base support cannot easily expand at national levels, when facing such a strong opposition from all the significant media and political players.

People that are not really into politics simply cannot accept the idea of rushing to support a guy that everyone around them is saying is a monster. The Monster Vote was killed in its infancy by the Monster Narrative.

Remember, though: if you were to test the average voter’s knowledge of politics and current issues, it’s precisely this kind of commitment, mostly by people who shun the traditional media, that would put the average Trump supporter well above their liberal counterparts, despite hysterical newscasters’ protests to the contrary.


But I was wrong too. I expected a more robust result; Trump essentially got the same number of votes Mitt Romney obtained 4 years ago. The thing is, if you consider Michigan, for instance, Trump obtained spectacular results in the industrial regions, convincing blue-collar voters to shift allegiance from D to R. But he lost votes in the most affluent counties. The rich white Republicans, the NeverTrumpers, didn’t support him and neutralized the effect of all those Democrats, Independents and new voters who chose change.

Anyway, this election was not so much won by Trump as lost by Clinton, who bled more than 5 million votes compared to Obama in ’08. But the results are still provisional, because lots of votes are still being counted, which is quite absurd if you ask me.

If you want to understand what this vote really represents, look no further than Washington DC itself. The political class and career bureaucrats living around the capital flipped Virginia in favor of a narrow Clinton victory. This was expected. Even more inevitable was the vote in DC:

  • Clinton 93%
  • Trump 4%

Think about it. It’s so obvious people aren’t even discussing it. And yet, when such a staggering result represents the will of people who got rich thanks to the Central Bureaucracy’s largesse, you can stop wondering what this is all about, and embrace the fact that a nation showed a giant middle finger to its rulers.


There’s one last significant data point that is worth discussing, represented by Nevada, a toss-up state eventually going to Clinton. Here both myself and most pollsters got it wrong, predicting a Trump victory (RealClearPolitics average: Trump +0.8%; Election result: Clinton +2.4%).

There’s a new, very significant phenomenon, involving New Mexico first, then Colorado, Nevada and eventually Arizona, with some caveats for Florida too: traditionally conservative states being flipped to the Democrats thanks to a gradual, massive influx of new Hispanic voters.

I didn’t expect this effect to be already predominant, with the exception of New Mexico. Pollsters didn’t either, probably: despite their favoring Clinton, this remarkable mobilization of Latinos was not entirely factored in (maybe it’s difficult for pollsters to get in touch with such a segment of the population).  That’s why Trump’s performance was worse than predicted, or not much better than predicted, in states where a lot of fresh immigrants were having their say in the results.

This aspect should be considered when evaluating the amount of most pollsters’ bias in favor of Clinton.


It would be also interesting to get to know something about voter fraud. I think in such a contested election it’s something to be expected, especially where the lack of an ID requirement and a huge immigrant population tend to create an incentive for non-citizens to vote.

Not convinced? Here’s an example that may concern you: one estimate suggests that some 3 million votes (!) were cast by illegal immigrants. OK, let’s take this figure with a grain of salt. But it’s really difficult to believe that they weren’t a significant presence; possibly enough to skew the results in some of those states mentioned before.

After all, this election was about choosing to stop a sort of peaceful invasion; many showed up at the polling stations and sent a clear message: “We are already here, and we’ll stop you from stopping us!”


The power of the pollsters


Notwithstanding their likely failure to take into account the extent of Latino support for Hillary, almost all the polling organizations were nonetheless predicting a Clinton victory from day one, and for the entire time.

This near-unanimous insistence on an expected result for many months, followed by an election where the opposite happened, should raise some eyebrows by itself.

There were lots of ups and downs, even suspicious in their volatile nature, but mysteriously the eventual suggested outcome was more or less always the same: Clinton wins.

There’s no way you can just get away with selling a precise narrative and then, after the events, explain away your mistake by pointing out that everything was within the margin of error.

That would mean it’s just a problem of statistical error. But random fluctuations don’t always skew in a single direction.

The gist of it is that there must be some systematic error(s), more or less intentionally put there.
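If the misses were purely random, a streak of polls all erring toward the same candidate would be wildly improbable. Here is a minimal back-of-the-envelope sketch, assuming, for simplicity, that each poll’s error direction is an independent coin flip:

```python
# If each poll's error were random in direction, the chance that
# N independent polls all miss on the SAME side (either side) is
# 2 * (1/2)**N: pick a side (factor 2), then every poll must land on it.
def same_side_probability(n_polls: int) -> float:
    """Probability that n independent, direction-random errors
    all fall on the same side of the true value."""
    return 2 * 0.5 ** n_polls

for n in (5, 10, 20):
    print(f"{n} polls: {same_side_probability(n):.6f}")
```

With 20 polls all leaning the same way, chance alone gives you roughly 2 in a million; that is why a consistently one-sided miss points to a systematic error rather than statistical noise.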

Consider the recent Brexit vote: again the pollsters got it all wrong, insisting all the time that things were going in a certain direction, but then voters begged to differ.

2 for 2.

What those 2 votes have in common: an almost unanimous media landscape, insisting that there was absolutely only one possible correct choice. Civilization itself depended on a majority aligning itself with the ideas that almost all journalists and intellectuals deemed respectable, progressive and beneficial.

They assured you: join us, almost everybody is already on our side, save for a few rubes who drag their feet and ruin the landscape for everyone else.

Opponents of the correct choice (Hillary or staying in the EU) were vilified. Doomsday scenarios were suggested in case of a Neanderthal Racist Victory.



Could you possibly think this alignment between the media propaganda and the polls is just a coincidence?

There are only 2 possible explanations:

  1. intentional tampering by at least some of the pollsters
  2. the “shy vote”, i.e. people choosing not to reveal to others that they are going to vote in a way that is subject to huge social stigma.

Any other explanation wouldn’t account for said constant alignment, I think.


Case 1 is a serious matter, and we shouldn’t be too hasty to accuse pollsters of essentially conspiring against their country.

Case 2 cannot be easily dismissed by saying it’s not the pollsters’ fault. This kind of skew would reflect a huge problem in terms of media bias forcing the hand of respondents, hence it would require polling organizations to admit the intrinsic unreliability of their polls.

Of course, the essence of an election is convincing the few voters in a few swing states who can shift allegiance to do so; hence a 1% shift from one side to the other marks the difference between a resounding and probably deserved victory, and a crushing defeat. You can’t say you didn’t miss by much in your predictions if you were the ones fueling the narrative that Trump didn’t stand a realistic chance.

Face it: polls were useless as a predictor of the outcome.

But here’s the elephant in the room. How effective were they in swaying the vote, instead?

This is a quite important question that nobody seems interested in. It’s very hard to model or measure such an effect, granted. But you may admit it’s quite convenient for experts who dabble in public opinion to avoid shedding light on the subject. This is about their real usefulness, at least to some circles of power.


Here’s an interesting read: How polls manipulate voters no matter the results. It’s not just Trump or Brexit; this is a worldwide phenomenon.

Can you remember the last time the mainstream media reported a poll on social issues showing a conservative idea winning?


In the linked article you get many more examples of this strategy, where proposals for gay marriage or for letting men into women’s bathrooms were crushingly defeated by voters, despite polls conducted before the vote indicating precisely the opposite!

This has nothing to do with coincidences, folks.

This is banking on the bandwagon effect to undermine the will of the people through conditioning.

And it works. Consider that while Trump became President, people were also voting in many states on some self-destructive marijuana legalization proposals (also euthanasia in Colorado). Some passed, some didn’t. People could mount an opposition to a barrage of unanimous media messages only up to a point.

Of course it’s not pollsters who change people’s minds about marijuana. It’s mostly peer pressure and a general attitude within the culture. But those two are mostly influenced by liberals in the media and academia. And the pretend scientific assessment of the public opinion by polling firms serves as a respectable façade to cover this pushing of a partisan worldview; a view that is sold as the new normal.

Experts, including pollsters, are not the principal actor; but they are useful to the media as adjuvants, to reinforce the effect of the pill you must swallow.

It’s like adding salt to a sugary dessert, or glutamate to a salty dish: the two flavors combined are stronger than the sum of their single effects.


How to rig a poll


There are many ways in which a pollster could skew the results in favor of their preferred outcome.

You could, for instance, pick and choose the counties in which to concentrate most phone calls, based on their expected lean. You could be unsatisfied with a result and ask your team to increase the sample size while changing some aspects of the process. You could more or less intentionally skew the weighting of the raw results when trying to make your figures more representative of the general electorate and/or prospective voters. You could insert loaded questions to influence the answers, either by making respondents cave in or by making them angry enough to self-select out of the sample by hanging up.

Most of those tactics could also be arrows in the quiver of a well-meaning polling organization that is just trying not to produce outlier results, so that they may re-adjust their data to be part of a reassuringly homogeneous polling landscape. You don’t have to always assume a nefarious intent.


Let’s create an oversimplified model and say, for the sake of argument, that by systematically overrepresenting a candidate by 5%, we end up obtaining a 3% increase in people voting for him or her. It’s perfectly feasible for a pollster to insist on touting higher figures all the time, but then, with election day approaching, gradually realign the published data to the expected result by walking it down 2%. With a few random fluctuations and news-related ups and downs further complicating the picture, this maneuver could easily go unnoticed; you could say, at most, that a candidate got a modest extra bump. Operating in this fashion you’d get the maximum conditioning effect through constant pressure, yet your performance would eventually be measured only by comparing your last-day prediction to the actual election result: if you are good at your job, thanks to the final realignment you’ll appear to have been very accurate.
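This hypothetical trajectory can be sketched numerically. Every number here is made up purely to illustrate the mechanism (a 5-point initial overstatement decaying to zero by election day); it is not a claim about any real pollster’s series:

```python
import random

random.seed(42)  # reproducible toy run

TRUE_SUPPORT = 45.0   # candidate's actual support (%), assumed
INITIAL_BIAS = 5.0    # hypothetical systematic overstatement (%)
DAYS = 60             # length of the polling series

published = []
for day in range(DAYS):
    # the bias shrinks linearly to zero as election day approaches
    bias = INITIAL_BIAS * (1 - day / (DAYS - 1))
    noise = random.gauss(0, 1.0)  # ordinary sampling noise
    published.append(TRUE_SUPPORT + bias + noise)

# The series starts roughly 5 points high, but the final number sits
# near the true value -- so a last-day scorecard would rate this
# pollster as impressively accurate.
print(f"first poll: {published[0]:.1f}%, last poll: {published[-1]:.1f}%")
```

Buried in day-to-day noise, the 5-point glide is hard to spot in any single release; only the whole series, compared against the final result, reveals the pattern.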


Again, the “shy vote” could explain most of the discrepancy. But it could be intentionally exploited to paint an unrealistic picture anyway. We’ll get back to that.


Enter FiveThirtyEight


Now I’ll get to why I want to specifically pick on the website FiveThirtyEight and its creator Nate Silver.

It’s the latest fad in election coverage: websites that aggregate data from the various polls to estimate the likelihood of a certain outcome, plus an analysis of the various strengths and weaknesses of the candidates.

How’d you like a prediction of the future? Presto, head to the FiveThirtyEight website and get your fix, without having to dabble in tea leaves or animal entrails yourself.

It’s all scientific, based on accurate data analysis. And Nate Silver has a track record of predicting the result correctly in the past!

Or you can choose the RealClearPolitics website instead. Plus others.

Who are you to stand in the way of the Superior Judgment of the Experts?


The point is, they are all clearly biased. Guess what could possibly be their political alignment.



You guessed correctly. Their partisanship is so evident to outsiders, though clearly not to them, that they end up looking like fools, insisting on results that fail to materialize.

Consider the following gem: here Silver pitted math against the Trump campaign’s claim that it would secure the nomination by the middle of May.


And guess what? Trump won the nomination sooner than expected, “math” lost!

Incidents like this should have prompted Silver to be more cautious in the future, but no.

Their prediction of a Trump victory in the General Election remained for the entire time essentially between unlikely and highly unlikely. But with enough caveats that you couldn’t even blame them if something went wrong.


What is maddening to me about this relatable, nerdy wonk, is that he sounds sincere. 100% honest yet 100% biased.

Neither the cheerleader who’s always attacking the other side, nor the shady type that is out to deceive you.

A competent analyst using rigorous methods, he clearly values being recognized as an objective observer. It’s the kind of attitude I imagine Anderson Cooper has: still always on the correct side of issues, but looking at himself in the mirror with confidence, proud of the journalistic integrity that lets people on the wrong side of history get a fair treatment.

FiveThirtyEight doesn’t just collect and wrap together all the polls’ data; Silver also grades the reliability of the various polling firms. More importantly, he and his team provide commentary articles on the side.


In Nate Silver’s post mortem of the election he insists that pollsters can’t really be blamed for the collective mistake, because a 1% shift makes all the difference in the world… and such a small lead in key states is well under the threshold of statistical error. In other words, you couldn’t really expect us to predict the winner!

Then why are you in the business of predicting things you can’t predict? Why give a semblance of scientific credibility to a random guess based on dubious polls that don’t possess the necessary granularity to begin with?

Why serve as the club with which the entire media circus could bludgeon into submission all those ignoramuses that didn’t accept the received wisdom, i.e. that nobody likes Trump and he doesn’t stand a chance?


Eating their own dog food, etc.


Easy: because he’s part of the same culture, he comes from the same fantasy world called academia where some students were so shocked by Trump’s victory they needed to postpone exams to spend some time mourning and coping with the tragedy… It’s an echo chamber.

No wonder in the same article I just cited, among many reasonable observations regarding statistics and demographics, he inserted this line, out of the blue:


America hasn’t put its demons — including racism, anti-Semitism and misogyny — behind it. White people still make up the vast majority of the electorate


You see what he’s getting at? Whites were decisive and voted wrong. This triggers in him thoughts about racism and misogyny (despite Obama, elected by a slightly whiter America, having already proved him wrong).

We are being served this tripe all the time.

I can expect political hacks and partisan pundits going there. But a sincere and seemingly prudent bespectacled nerd, the statistics expert, playing the part is too much. It drives me nuts.

Like when, with Election Day closing in, you are left to wonder how the Wikileaks scandal will impact Hillary, but you read Silver’s column pointing out that Trump is expected to lose ground in subsequent polls due to the sexual harassment accusations… Fine, you can comment on news items that could impact popularity… but when you overestimate some, and ignore others, you are not an analyst anymore. You are part of a propaganda machine.


It’s like when in a sports broadcast you get a commentary from an ex-player of a major team who’s covering them week after week, and he’s clearly on their side (incidentally against your team), while posing as an impartial color commentator. You almost expect a slip of the tongue to happen, where he’d refer to your team as “the other team”.


Speaking of sports, FiveThirtyEight is also into sports odds. And here we go again:



After the Chicago Cubs secured a historic victory, Nate was teased on Twitter by people pointing out that his predictions made in jest were more accurate than his serious ones.


It’s not just about him.

Professor Wang from Princeton University predicted a resounding Clinton victory, with Trump unlikely to break 240 electoral votes. He described the race as “the most stable statistically since Eisenhower beat Stevenson in ’52.” He promised that if his mathematical model were to fail, he’d eat a bug. And after being proven wrong he did, on CNN.


Take also the guy from Real Clear Politics, getting confidently into Election Night with the following bold statement on Twitter, based on their consistently projecting Clinton as the winner:



This kind of attitude isn’t even about a strategy. It’s a reckless, counterproductive move that exposes those “experts” to ridicule and can be explained only with their blind belief in their own narrative.


Leverage in influence


What is the purpose of those aggregators? Remember: most polls are probably biased. Many are sold as a means to pressure people into complying.


Let’s take the final FiveThirtyEight forecast.

Instead of some unimpressive Clinton lead, 48.5% vs. 44.9% for Trump, you can get into number-churning and declare that those percentages translate into a 71.4% chance for Clinton to win!

Now that is impressive!

This kind of figure is much more volatile and unpredictable than the underlying data, pretty much like what happens with the Stock Market, where the Futures and other derivatives are rightfully seen as inherently more risky and unreliable.

It’s a mad world:  a derivative is dangerous and closer to gambling, if you are a money manager dealing with financial assets; and yet a similar object becomes a highly reliable indicator of the upcoming events, when elections are concerned. Never mind your doubts about the gatekeepers.
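For the curious, here is one common back-of-the-envelope way such a “chance of winning” figure can be produced: treat the polled margin as a normally distributed estimate and compute the probability that the true margin is positive. The forecast error below (6.4 points) is an assumption I chose so that this toy model lands near FiveThirtyEight’s 71.4% headline; their actual model is far more elaborate.

```python
from math import erf, sqrt

def win_probability(margin: float, sigma: float) -> float:
    """P(true margin > 0) if the polled margin is Normal(margin, sigma).
    Uses the standard normal CDF built from the error function."""
    return 0.5 * (1 + erf(margin / (sigma * sqrt(2))))

# 48.5% vs. 44.9% -> a 3.6-point polled margin;
# sigma = 6.4 is an assumed forecast error, not a published figure
p = win_probability(48.5 - 44.9, 6.4)
print(f"chance of winning: {p:.1%}")  # roughly 71%
```

Note how sensitive the headline number is: halve the assumed error and the same 3.6-point lead becomes a far more lopsided probability, which is exactly why the derived figure is more volatile than the underlying polls.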


  • False certainty. Most people aren’t well versed in math. This apparently clear message of a very high chance of victory (almost 3 in 4 in the above example) could convince many, happy to leave the technical details to the experts.
  • Experts shielded from criticism. Nate Silver could count on the fact that he was right in 2012, when many Conservatives were pointing out that some polls appeared to be skewed in favor of Democrats, so they insisted Romney was still in the race. See what happened? On one corner you’ve got the expert who uses math and was proved right time and again in the past, against the usual, predictable objections. On the other corner, unqualified people who desperately try to question the data they don’t like. This is a pretty powerful message to shame critics into silence.
  • Dynamic manipulation following the news cycle. It’s not just the absolute value of the lead that convinces people. Significant changes in support send a powerful signal to the public. Let’s say that some media entities want to give prospective voters a good yank in the right direction, banking on a somewhat juicy piece of news, like a scandal or a debate performance. The message to convey is simple: “Look, this changes everything! People are reevaluating their decision and now supporting Clinton! What about you?” A few targeted polls could easily give cover to this narrative, amplifying any swing in the “right” direction. If everybody seems to agree… Well, a poll aggregator can further amplify this effect! Consider what happened when the Republican National Convention ended and was immediately followed by the Democratic Convention (Figure 4 below). On July 30, Trump’s chances according to FiveThirtyEight stood at 49%, after a long surge; 8 days later they had crashed to 12.4%. What happened? Nothing. Just the two parties congregating, a few speeches being tossed around. I can understand supporters being galvanized by a convention. But that’s the kind of loyal voter that was already set on a candidate. Let’s postulate that a few people were swayed by the event. There could be some change. But no way such a dramatic effect! Of course Nate Silver could protest that he clearly stated that the galvanizing effect of conventions is probably temporary, but with this “please read the fine print” prudence combined with the impact of his graphs, he’s both technically unassailable and effective at reinforcing the desired message. Notice also how the necessary realignment to more realistic figures, involving most polls in the last few days, ended with a final spike in favor of Clinton, just to send the message that the momentum was again on her side (and you can always justify such shifts by picking some appropriate fresh news item).
  • Power to the weasels. Under normal circumstances you could hope to focus on a good poll from a reputable institution and run with it. But thanks to aggregators, the combined result of all the extant polls becomes an irresistible temptation: beyond creating a herd effect and discouraging outliers, those averages (and the crazy “Chance of Winning” figures derived from them) tend to become the only meaningful numbers to consider. This means you lose the ability to throw away less reliable polls, for instance polls that have clearly been commissioned by a party for the purpose of swaying public perception. At this point, especially in close races at the state level, where there’s less data available, a single rogue poll could completely change the outlook. By aggregating the data you are maximizing the power of the least professional pollsters. It’s an incentive to cheat. How suspicious is, for instance, a poll that comes out quite close to the election, all of a sudden reversing the trend and giving Clinton a 12% advantage?

Figure 4 – FiveThirtyEight “Chance of winning” virtual indicator through time. Notice the blue line always on top and the wild swings.

I think I’ve made my point quite clearly: this reliance on artificial constructs, dubious extrapolations from unreliable polling data that only serve to reinforce the preferred media narrative, should stop.


This pretend reliance on “math” is also ultimately harmful to the image of actual scientific endeavors. 

Today there are enough popular misconceptions about science without the need of further fueling the fire with ideologically-charged abuses of the role of Expert High Priest of Knowledge.


But what about the polls themselves? In closing, let’s examine a few disturbing examples.


Polls you can’t trust.


Exhibit 1: the Michigan Primary Case.

On March 8, Bernie Sanders defeated Hillary Clinton in the Michigan Democratic Primary, by a narrow 1.5% margin.


As you can see from the screenshot, pollsters predicted on average a Clinton victory by more than 21%! That’s missing your target by a mile, sharpshooters!

But wait, there’s more. Here’s a Podesta Wikileaks email worth your attention. Notice: no one to date has ever disputed the authenticity of any of the Wikileaks documents.




You can choose what you want to believe. I don’t want to get sued. Either

A. John Podesta, chairman of Hillary Clinton’s campaign, was extremely pessimistic, ignoring the scientific polls and just following a hunch, or maybe resorting to baseless anecdotal evidence, and the experts turned out to be spectacularly wrong.


B. Podesta knew that the public polls were meant to influence the public, but internal, unreleased polls were painting a completely different picture…


It seems that, whatever the net effect of the narrative pushed through those public polls, they almost succeeded in reversing the result. Don’t tell me that the public opinion can’t be influenced by such numbers. +21.4%!


In the grand scheme of things this episode may seem insignificant. But it’s an important clue about reality being far more complex than what emerges.

(Sweet justice: Michigan is one of the 3 key states where Trump won the election by targeting blue-collar workers, obtaining the votes of many Sanders supporters.)


Exhibit 2: Arizona and the interviewers shooing shy voters.

Trump eventually won Arizona with a 5% margin.

Here’s an interesting research article about an October 19 Arizona poll conducted by the Morrison Institute for the newspaper Arizona Republic. This poll purported to show that Clinton had surged to a commanding lead, with a 5% margin in her favor. That’s a significant result, if true, because it would show a surprising weakness for Trump, losing ground even in a traditionally Republican state. But, as we now know, it wasn’t true at all!

How could they pull off such a trick? Easy. The sample of people who completed the survey contained 413 self-identified Democrats and only 168 Republicans. In a red state, they came up with a sample like this:

  • 58% D
  • 24% R
  • 19% I (Independents)

In a state where a Democratic presidential candidate has historically hit a ceiling of around 44%, including the votes from Independents…

Can we explain such a ridiculous result? Well, it could very well be that a significant portion of respondents were self-excluding because they were very hostile to pollsters in general. You could easily trigger their reaction, if you want to, and reshape the result as desired just by choosing your wording.

But even assuming the people at Morrison Institute were absolutely innocent victims of an intrinsic problem with modern survey audiences, they should have recognized the issue and refused to publish such a lopsided result! Ignoring the bias and running with it is akin to choosing to be part of a deception. Unless you are absurdly incompetent.
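To see how much that sample composition alone can move a topline, here is a simple reweighting exercise. The per-party support rates below are made-up placeholders; only the D and R sample counts come from the article, and the Independent count is inferred from the published 58/24/19 split:

```python
# Reweighting sketch: how a D-heavy sample inflates a topline number.
# Self-identified sample counts reported for the poll
# (131 independents inferred from the published 58/24/19 split):
sample = {"D": 413, "R": 168, "I": 131}

# Hypothetical candidate support per party (placeholders, not real data)
support = {"D": 0.90, "R": 0.05, "I": 0.45}

def topline(weights: dict) -> float:
    """Weighted average of per-party support given a party composition."""
    total = sum(weights.values())
    return sum(weights[p] / total * support[p] for p in weights)

raw = topline(sample)                               # as sampled
reweighted = topline({"D": 30, "R": 35, "I": 35})   # assumed red-state mix
print(f"raw sample: {raw:.1%}, reweighted: {reweighted:.1%}")
```

Under these placeholder assumptions the reweighted figure lands near the mid-40s, close to the historical Democratic ceiling mentioned above, while the raw sample composition would have produced a wildly inflated number; the gap between the two is driven entirely by who was counted, not by anyone changing their mind.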


Exhibit 3: the Clinton Pollster and the Maddening Questions.

The starting point for this case is again a research article from Sundance of The Conservative Treehouse. Here I’m adding some ideas on my own.

Scenario: there’s a 2005 tape where Donald Trump is making lewd remarks about easy women around him, boasting about his jumping at opportunities (kissing them) without even waiting; he doesn’t know all of this is being recorded. You probably already listened to the tape.

Trump’s enemies (who?) sit on this juicy material for years, then wait for the perfect moment to release the recording to the media and inflict the maximum possible damage. On October 7, almost a month before the election, when there’s little leeway for the GOP to change strategy (let alone replace its candidate), this compromising Access Hollywood tape is revealed. Predictably, it becomes a media sensation.

On October 8 and 9, just one day after this totally unexpected scandal, Hart Research conducts a poll on behalf of NBC where Clinton is said to have surged to a whopping +11% over Trump. Lots of media coverage. Many people at this point started to think it was over for Trump.

Despite the close proximity to the tape leak, the poll contained 4 detailed questions specifically targeting the tape scandal, as you can see from this screen grab:

Hart Poll 8 October, 4 anti-Trump questions on the Access Hollywood Tape

In short here’s what they mean.
Q11: should Trump simply give up and/or be disqualified from running?

Q12: please dwell a little more on the subject and contemplate your uneasiness defending him or firmness in condemning him.

Q13: could you still support him because this is just old stuff, or not?

Q14: should the GOP self-detonate and thanks to this brilliant trap, ignoring any other issues or considerations, essentially give the presidency to Hillary Clinton already?


You see, the interesting thing is that within the entire survey you cannot find any other question, apart from those 4, that isn’t either a declaration of personal political lean, candidate approval, or vote intention, or simply the collection of demographic data for statistical purposes.

Those 4 questions are the only ones dealing with the respondents’ specific opinions on any subject. There were no questions about other scandals or important issues concerning the future on this poll. Focus on the tape (for maximum impact?)!

12 and 13 are just two ways of asking people to judge Trump. 11 and 14 are unthinkable outside of the mad media storm of those 2 days: they are about admitting defeat and giving up. Who could possibly take seriously the idea of a candidate withdrawing a month from the election and consigning the country to the opposite party? In fact, only a short term deception effort could pretend to push this idea on people’s minds.

You may retort that, after all, the tape was the subject of the day, deserving special attention, and that the pollsters’ quick mobilization isn’t necessarily a sign of coordination with the leakers.

But you can effectively capture public opinion only once a piece of news has been assimilated, after a few days. Here, almost 1 in 5 respondents said they didn’t yet know enough about what had happened.

Rushing to get the public’s reaction is about trying to amplify the effect of a juicy news bite. Those questions were actually breaking the news to a good chunk of those surveyed, and framing the issue for them.


Now consider: what if those questions, pounding on a very surprising and painful problem for Republicans, were asked at the beginning of the survey, thus setting the mood for the entire interview? Their being numbered 11 to 14 doesn’t guarantee they weren’t posed before the others.

In any case, it’s quite possible that some of those surveyed hung up the phone in anger. Again, self-selection working in favor of a desired bias.

Could you really be surprised that the result of the poll was (we now know) unrealistic and favored Clinton?
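For what it’s worth, this kind of self-selection effect is easy to quantify. Here’s a minimal simulation; the dropout rates (30% vs. 5%) are purely illustrative assumptions, not figures from the Hart poll, but they show how differential hang-ups alone can turn a tied race into a double-digit lead:

```python
import random

random.seed(1)

# Hypothetical sketch: the true electorate splits 50/50, but supporters of
# candidate A hang up 30% of the time after hostile-sounding questions,
# while candidate B's supporters almost always finish the interview.
N = 100_000
completed_a = 0  # A supporters who complete the survey
completed_b = 0  # B supporters who complete the survey
for _ in range(N):
    if random.random() < 0.5:        # an A supporter is reached
        if random.random() < 0.70:   # 30% hang up in anger
            completed_a += 1
    else:                            # a B supporter is reached
        if random.random() < 0.95:   # only 5% drop out
            completed_b += 1

share_a = completed_a / (completed_a + completed_b)
print(round(share_a, 3))  # ~0.424: a tied race reads as roughly a 15-point lead
```

The raw numbers aren’t the point; the point is that whoever controls which respondents stay on the line controls the topline.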


As per The Conservative Treehouse link, the president of Hart Research Associates, Geoff Garin, is working as a strategic adviser for a Hillary Clinton super PAC, Priorities USA, which is also funding the firm.

See? Hundreds of headlines declaring Trump in deep trouble, from the most prestigious newspapers and websites, all supported by a Clinton-campaign poll that looks fine-tuned to obtain a skew in the answers.

The pollster Hart Research is still rated B+ by FiveThirtyEight: not the cream of the crop, but pretty reliable.


Polls you can’t trust

  • It’s abundantly clear, I think, that most polls can’t faithfully represent the actual sentiment of the population; a growing number of people either fly under the pollsters’ radar or refuse to participate in surveys, whether out of contempt or to avoid judgment.
  • On top of that, there’s a natural tendency of election results to depend on narrow margins, making the act of trying to predict the result mostly pointless.
  • Many pollsters are not reliable in their approach, with behavior ranging from participating in a herd-effect skewing of the data to cooking the numbers with unethical, deceptive adjustments.
  • Most pollsters appear to be paid by, or to share the ideology of, the same media entities that invariably push the narrative. This stinks.
  • We can safely assume that skewed polls could significantly influence the results, but conveniently there’s little public awareness of the issue.
  • Websites that aggregate and further elaborate poll data lend an aura of undeserved extra credibility to a questionable data base. Garbage in, garbage out. They also reward the worst offenders, which can single-handedly skew the average.
  • “Chance of winning” prediction models, in their apparent simplicity, further amplify any error or misdirection; paradoxically, they sound more credible while pushing a sense of false certainty on the public. An analyst like Nate Silver, with all his clout, can claim that his persistent estimate of a less-than-1-in-6 chance for Trump during the crucial days of the electoral battle can’t be criticized as misleading; that of course the final result was very uncertain after all; that he was more prudent than other experts in giving Trump a chance, and that his final 28.6% isn’t that bad. He can essentially pat himself on the back while sitting at the top of a food chain of deceptive survey results; so much so that a victory that was probably in the cards for months was received by a stupefied public as a total shock.
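Two of the points above are just arithmetic, and a short sketch makes them concrete. The poll numbers below are invented for illustration: first, how a single outlier pollster drags an aggregator’s average; second, how often a “28.6% chance” event actually happens when you simulate it (unlikely is nowhere near impossible):

```python
import random

random.seed(0)

# Hypothetical illustration: five pollsters read the race at roughly +1,
# then one outlier reports +11. A naive average rewards the outlier.
honest = [1, 1, 2, 0, 1]
with_outlier = honest + [11]
print(sum(honest) / len(honest))              # 1.0 without the outlier
print(sum(with_outlier) / len(with_outlier))  # ~2.67: one poll moved the "consensus"

# A 28.6% chance is not a longshot: simulate 100,000 elections in which
# the underdog wins with probability 0.286.
p = 0.286
wins = sum(random.random() < p for _ in range(100_000))
print(wins / 100_000)  # close to 0.286 -- upsets at these odds are routine
```

That second figure is why a forecaster can always claim vindication after the fact: a model that calls the loser a 5-to-2 underdog is untestable on a single election.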


This should stop. The polling experts’ poor showing in the last election should be a wake-up call.

Nah, I’m not holding my breath.






