If the Brexit vote taught us anything, it is that prediction science is even harder than Yogi Berra suggested: it's tough to make predictions, even about the present.
Brexit and Trump stupefied the overwhelming majority of experts in politics, just as Twitter's success (the service launched 10 years ago this month) stupefied the experts in technology. Today Twitter is the public square for announcing royal births -- and where both presidential candidates revealed their running mates.
What all the experts think they know to be true now may, in fact, be false.
This matters. Big Data and artificial general intelligence companies, the new darlings of Silicon Valley and of organizations concerned with international security, should pay heed. The Brexit vote proved that assumptions of fact -- the springboard for all deductive and inductive reasoning -- are heavily prone to human error.
On Brexit, the usually accurate prediction markets for binary events, in which people stake their own money on the outcome, were wrong. Expert influencers -- from Christine Lagarde of the IMF to Mark Carney of the Bank of England to Nobel Laureate economist Paul Krugman -- did not influence. The average of public polls -- polls usually reliant on non-random survey respondents, that is, self-trained political pundits -- was wrong, not just right before the vote but through many of the prior months. Behemoth financial institutions, privy to sophisticated private research, did not take trading positions that would suggest any information edge.
That Leave would win was, especially among academic and media elites, as preposterous an idea as the early notion that Mr. Trump would win the Republican nomination. Our own firm's data, independently examined by Jonathan Mellon of Oxford University and Christopher Prosser of the University of Manchester, suggested that disengaged voters 40 and under would decide the vote. Thus they concluded, one day prior to the vote: "Our results suggest that the Remain campaign is right to worry about weak turnout among young voters."
Why did disengaged young voters not storm to the ballot box in the numbers they 'ought' to have? Could machine algorithms really have predicted the results?
Socrates and the Stoic philosophers were correct: the only true wisdom is in knowing that you know nothing. From that premise of humility stems the pursuit of reason and, ultimately, intelligence. 'Intelligent' machines -- spirited forth through the current generation of the Internet of Things and the so-called Semantic Web -- are only as intelligent as the humility and self-doubt of those who write the code.
Cognitive science, mathematical modeling and neuroscience are now maturing, with billions of dollars in capital being poured into these efforts -- to predict trends, preferences, and events in every part of the world. The promises this heralds are tremendous. Imagine knowing what people in every village in the world think of ISIS, or predicting the timing of the murderous intent of those inspired by its evil.
Are the claims of Big Data and predictive analytics companies, therefore, hopelessly imperfect, given what Plato, Socrates and Aristotle always knew about the limits of prediction? It is too early to tell. We are at the frontier of a generation of prediction science that preaches the blending of manifold data sets, from real-time weather trends to real-time voter intent to reliable data on anti-immigrant animus -- all of which may have helped us understand the British voter, particularly the feelings of the disengaged young voter and why she may have opted to stay home and not vote.
Yet we humans are a self-assured lot; we assume ever more data sets will lead to the Holy Grail of prediction -- while history teaches the opposite to be the case.
The 9/11 Commission pointed out the grave risk of suffering a failure of imagination. In the field of prediction, this manifests itself as a failure to probe for potential confounding variables -- and, all else being equal, our intellectual reach will exceed our grasp. Confounders that creep unnoticed into enormous data sets can produce flawed conclusions. Early HIV data sets, for example, did not take into account the influence of intravenous drug use.
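The confounder problem is easy to demonstrate. The sketch below -- a hypothetical Python simulation with invented probabilities, not the actual HIV data -- shows a hidden variable that drives both an exposure and an outcome. A naive comparison of outcome rates across exposure groups finds a large "effect" that vanishes once the data are stratified on the confounder:

```python
import random

# Hypothetical illustration with invented numbers: a hidden confounder z
# drives both the measured exposure x and the outcome y. Comparing
# outcome rates across exposure groups, while ignoring z, wrongly
# attributes z's effect to x.

random.seed(42)
n = 10_000

rows = []
for _ in range(n):
    z = random.random() < 0.3                  # hidden confounder
    x = random.random() < (0.8 if z else 0.2)  # exposure, driven by z
    y = random.random() < (0.6 if z else 0.1)  # outcome, driven only by z
    rows.append((z, x, y))

def outcome_rate(subset):
    subset = list(subset)
    return sum(y for _, _, y in subset) / len(subset)

# Naive analysis: ignore z, compare exposed vs. unexposed.
naive_gap = (outcome_rate(r for r in rows if r[1])
             - outcome_rate(r for r in rows if not r[1]))

# Stratified analysis: compare within each level of z.
gap_given_z = (outcome_rate(r for r in rows if r[1] and r[0])
               - outcome_rate(r for r in rows if not r[1] and r[0]))
gap_given_not_z = (outcome_rate(r for r in rows if r[1] and not r[0])
                   - outcome_rate(r for r in rows if not r[1] and not r[0]))

print(f"naive gap: {naive_gap:.2f}")  # large spurious "effect"
print(f"within strata: {gap_given_z:.2f}, {gap_given_not_z:.2f}")  # near zero
```

No amount of additional data cures this: the naive gap is stable at any sample size, because the error is in the model's assumptions, not in the measurements.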
The next era of prediction, then, lies in factoring humility into our models. Without this, machine learning will only take us so far. This time it was Brexit that the machines failed to predict. Next time it could be something far more serious.
Follow Neil Seeman on Twitter: www.twitter.com/RIWIdata