Swedroe: Foxes More Right Than Hedgehogs

Philip Tetlock, who teaches psychology and political science at the University of Pennsylvania, where he is also a professor of management at the Wharton School, is the author of “Expert Political Judgment: How Good Is It? How Can We Know?” The book, published in 2006, discusses the findings of his 20-year study, the first scientific study of the ability of experts from various fields to predict the future.

Tetlock found that the so-called experts who make prediction their business (those who appear frequently as experts on television and talk radio, are quoted in the press and advise governments and businesses) are no better than the proverbial chimps throwing darts.

Foxes Vs. Hedgehogs

Tetlock divided forecasters into two general categories: foxes (who draw on a wide variety of experiences and for whom the world cannot be boiled down into a single idea) and hedgehogs (who view the world through the lens of a single defining idea). The terms originate from a famous essay, “The Hedgehog and the Fox,” by Isaiah Berlin. Following are some of Tetlock’s most interesting findings:

  • What distinguishes the worst forecasters from the not-so-bad ones is overconfidence, an all-too-human trait: hedgehogs are more confident than foxes, yet they are wrong more often. The hedgehog’s defining “big idea” increases confidence, but it doesn’t improve foresight. In fact, it hinders it.

  • What differentiates foxes from hedgehogs is that foxes rarely perceive events as being as bad as they appear at the trough or as good as they look at the peak.

  • Optimists tend to be more accurate than pessimists.

  • What experts think matters far less than how experts think. We are better off turning to the foxes, who know many little things and accept ambiguity and contradiction as inevitable features of life, than to the hedgehogs, who reach for formulaic solutions to ill-defined problems.

  • It makes virtually no difference whether forecasters are Ph.D.s, economists, political scientists, journalists or historians; whether they have policy experience or access to classified information; or whether they have logged many or few years of experience in their chosen line of work. In fact, the only predictor of accuracy was fame, and it correlated negatively with accuracy. In short, the most famous forecasters (those most likely to be feted by the media) made the worst forecasts.

  • Beyond a stark minimum, subject matter expertise translates less into forecasting accuracy than into overconfidence (and the ability to spin elaborate tapestries of reasons for expecting “favorite” outcomes).

  • Like ordinary mortals, experts fall prey to the hindsight effect. They tend to claim after the fact that they knew more about what was going to happen than they actually did beforehand. A systematic misremembering of past positions may appear strategic, but the evidence indicates that people sometimes truly convince themselves they “knew it all along.” Hindsight bias causes overconfidence.

  • The “marketplace of ideas” can fail because consumers may be less interested in the dispassionate pursuit of truth than in buttressing their prejudices.

A Hedgehog’s Perspective

Tetlock also provided this important insight about hedgehogs: They may be playing a different game. They are fighting to preserve their “reputation in a cutthroat adversarial culture. They woo dumb-ass reporters who want glib sound bites. In their world, only the overconfident survive and only the truly arrogant thrive.”

He observes that the same self-assured, hedgehog-style reasoning that suppresses forecasting accuracy and slows belief updating also produces attention-grabbing, bold predictions that are rarely checked for accuracy. Unfortunately, the media seeks out hedgehogs who confidently tell tight, simple and clear stories; who can pile up reasons for why they are right, using terms such as “furthermore” and “moreover”; and who can grab and hold audiences that tend to find uncertainty disturbing.

These hedgehogs almost never consider other perspectives (because that would confuse and complicate things). Foxes, by contrast, don’t do as well with the media, even though they make better forecasts.

Sadly, Tetlock concluded: “No matter how unequivocal the evidence that experts cannot outpredict chimps or extrapolation algorithms, we should expect business to unfold as usual: pundits will continue to warn us on talk shows and op-ed pages of what will happen unless we dutifully follow their predictions. We, the consumers of expert pronouncements, are in thrall to experts because we need to believe in a controllable world and have a flawed understanding of the laws of chance. We lack the will power and good sense to resist the snake oil products on offer. Who wants to believe that on the big questions we could do as well tossing a coin as by consulting accredited experts?”

Superforecasting

While Tetlock was highly skeptical of even experts’ ability to predict the future, he wasn’t prepared to dismiss all prediction as an exercise in futility. And so in 2011, along with decision scientists Barbara Mellers and Don Moore, he embarked on what became the Good Judgment Project (GJP).

The GJP was run in collaboration with the U.S. Intelligence Advanced Research Projects Activity (IARPA) as a participant in that agency’s Aggregative Contingent Estimation program. Participants were asked to put specific odds on the likelihood of certain events, such as whether or not there would be a fatal confrontation between vessels in the East China Sea within the next six months.
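Accuracy in the tournament was scored with Brier scores: the squared difference between the probability assigned to an event and what actually happened, averaged over questions (lower is better). Here is a minimal sketch of the scoring idea for yes/no questions; the probabilities and outcomes below are invented for illustration:

    def brier_score(forecasts, outcomes):
        """Mean squared difference between probability forecasts and binary outcomes.
        0.0 is a perfect score; an unvarying 50% guess earns 0.25; 1.0 is worst."""
        return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

    # Hypothetical example: three yes/no questions.
    probs = [0.70, 0.10, 0.95]  # stated probabilities that each event occurs
    actual = [1, 0, 1]          # 1 = the event occurred, 0 = it did not
    print(round(brier_score(probs, actual), 3))  # 0.034

A confident forecaster who is usually right scores far better than a perpetual hedger, but a confident forecaster who is wrong is punished severely, which is precisely the accountability the authors find missing from punditry.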

Tetlock’s latest book, “Superforecasting: The Art and Science of Prediction,” which he co-authored with Dan Gardner, is about the attempt to identify superforecasters. The book also explores whether, if such superforecasters can indeed be identified, we can pin down the attributes that make them successful, attributes that could then be taught.

While stating that they are “optimistic skeptics” on the issue, Tetlock and Gardner set the stage with this warning: “Open any newspaper, watch any TV news show, and you find experts who forecast what’s coming. Some are cautious. More are bold and confident. A handful claim to be Olympian visionaries able to see decades into the future. With few exceptions, they are not in front of the cameras because they possess any proven skill at forecasting. Accuracy is seldom even mentioned. Old forecasts are like old news—soon forgotten—and pundits are almost never asked to reconcile what they said with what actually happened.”

Conviction Over Accountability

The reason for this lack of accountability is that demanding it would ruin the game. Even worse, when accuracy is determined after the fact, the exercise is “almost never done with sufficient regularity and rigor that conclusions can be drawn.” Since the public and governments don’t demand evidence of accuracy, there is no measurement.

Tetlock and Gardner added: “The one undeniable talent that talking heads have is their skill at telling a compelling story with conviction.” They went on to note that “talking heads” who have this skill can become wealthy by “peddling forecasts of untested value to companies, governments and ordinary people who would never think of swallowing medicine of unknown efficacy and safety but who routinely pay for forecasts that are as dubious as elixirs sold from the back of a wagon.”

The GJP produced what is, perhaps, a surprising outcome. A small group of amateurs, the superforecasters, demonstrated forecasting skill about 30% better than that of professionals within the intelligence community, even though those professionals had access to all the resources of the government agencies supporting them, including classified information.

Superforecaster Characteristics

Tetlock and Gardner identified the traits common to superforecasters, the traits that proved to be the best predictors of forecasting accuracy. The best superforecasters tend to be:

  • Introspective and self-critical.

  • Open-minded and apt to value others’ opinions, draw evidence from many sources and seek out evidence that undercuts their forecasts. They subscribe to the view that beliefs are just hypotheses to be tested, minimizing the risk of confirmation bias.

  • Careful and not prone to grabbing onto the first plausible explanation. They are motivated skeptics who first aggregate data and then synthesize it into a conclusion. This is difficult for most of us to do, as we are prone to relying on “tip of the nose” perspectives.

  • Intellectually curious.

  • Cautious, viewing nothing as certain.

  • Humble, understanding that reality is infinitely complex. Thus, they are not wedded to certain ideas.

  • Focused and determined to keep at it, however long it takes.

  • Committed to self-improvement.

  • “Foxes” who talk about possibilities and probabilities, not ideological “hedgehogs” who tend to talk about certainties. They don’t have their egos invested in their forecasts, making it easier to recognize errors in judgment and adjust them.

  • Able to avoid “meant-to-happen” thinking. The more one was inclined to have a fatalistic view, the worse the forecasts were.

  • Very comfortable with numbers.

  • Skilled at distinguishing between what is known and what is unknown, and able to explore the similarities and differences between their own views and those of others, paying special attention to prediction markets and other methods of extracting “wisdom from crowds.” They then synthesize their views into a single vision and express it as precisely as they can, using a finely grained scale of probability. They update their forecasts regularly, carefully balancing new information with the old, because updated forecasts are likely to be more accurate. When the facts change, they change their opinions. (A sketch of this style of updating follows this list.)
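The “balancing new information with the old” step is, at bottom, Bayesian updating. Below is a minimal sketch; the prior, the likelihoods and the scenario are invented for illustration and are not taken from the book:

    def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
        """Return P(event | new evidence) via Bayes' rule."""
        numerator = p_evidence_if_true * prior
        return numerator / (numerator + p_evidence_if_false * (1.0 - prior))

    # Hypothetical numbers: a forecaster starts at 20% odds that an event occurs,
    # then sees a report three times as likely to surface if the event is brewing.
    updated = bayes_update(prior=0.20, p_evidence_if_true=0.60, p_evidence_if_false=0.20)
    print(round(updated, 3))  # 0.429 -- the forecast rises, but nowhere near certainty

Note how the update moves the forecast meaningfully without overreacting; a superforecaster treats each piece of news as one more increment, not a revelation.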

Another important finding from the book was that “diversity trumps ability.” The authors discovered that teams outperform individuals, but only if they are able to aggregate their different perspectives. It’s the diversity of perspective that makes the magic work; one simple aggregation scheme is sketched below.
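One way to pool a team’s divergent probability estimates, in the spirit of the aggregation the GJP researchers studied, is to average them in log-odds space and then push the result modestly toward the extremes. The team’s numbers and the extremizing exponent below are invented for illustration:

    import math

    def pool_forecasts(probs, extremize=1.5):
        """Average probability forecasts in log-odds space, then extremize.
        An exponent > 1 pushes the pooled forecast away from 50%, on the theory
        that each forecaster holds only part of the available evidence."""
        logits = [math.log(p / (1.0 - p)) for p in probs]
        pooled = extremize * sum(logits) / len(logits)  # extremize is a tuning choice
        return 1.0 / (1.0 + math.exp(-pooled))

    # A hypothetical four-person team that disagrees:
    team = [0.55, 0.70, 0.80, 0.65]
    print(round(pool_forecasts(team), 3))  # 0.759, vs. 0.675 for a simple average

The pooled forecast is more confident than a simple average because agreement among people reasoning from different evidence is itself informative.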

Superforecasters Are Not Infallible

Tetlock and Gardner found that there was some persistence in performance among superforecasters as a group, although there was turnover among some of the top performers. They concluded that “we should not treat the superstars of any given year as infallible,” adding that “luck plays a role and it is only to be expected that the superstars will occasionally have a bad year.”

Another important finding was that the accuracy of even superforecasters declined toward chance five years out. It’s amusing to consider that finding in light of how many organizations—including some I’ve worked for—put so much weight on five-year plans. The conclusion we should draw is that, if you have to plan that far into the future, you should be planning for surprises, and for “adaptability and resilience.”

Tetlock and Gardner have produced a book that belongs on everyone’s must-read list, especially that of investors who tend to pay attention to, and place value on, the forecasts of the hedgehogs so often found in the media in general, and in the financial media in particular.

No Superforecasting In Mutual Fund Space

At least up to now, there’s little to no evidence that superforecasters can add value, after the expense of their efforts, in terms of generating alpha. You would think that if this weren’t the case, and superforecasters did add value, we would observe evidence of it in the persistence of performance among mutual funds.

However, there’s no real evidence of that being true. And if superforecasters could foresee the future better than mere mortals, you would expect it to show up in, for example, the returns of hedge funds that rely on macroeconomic forecasts. Yet the HFRX Global Hedge Fund Macro/CTA Index shows the following results. Through Oct. 22, 2015, the hedge fund index was down 1.6%, while the Vanguard S&P 500 ETF (VOO | A-99) was up 1.3%, an underperformance so far this year of 2.9 percentage points.

In 2014, the hedge fund index gained just 5.2% versus a return of 13.5% for VOO, underperforming by 8.3 percentage points. In 2013, the hedge fund index lost 1.8% versus a gain of 32.2% for VOO, underperforming by 34 percentage points (ouch). And in 2012, the hedge fund index lost 1.0% versus a gain of 15.8% for VOO, underperforming by 16.8 percentage points.
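The underperformance figures are simple return spreads, measured in percentage points rather than percent. A quick check of the arithmetic, using the returns quoted above:

    # Return spreads in percentage points (hedge fund index minus VOO),
    # using the figures quoted above.
    returns = {                # (HFRX Macro/CTA, VOO), in %
        "2015 YTD": (-1.6,  1.3),
        "2014":     ( 5.2, 13.5),
        "2013":     (-1.8, 32.2),
        "2012":     (-1.0, 15.8),
    }
    for period, (hf, voo) in returns.items():
        print(f"{period}: {hf - voo:+.1f} pp")  # -2.9, -8.3, -34.0, -16.8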

Perhaps Wall Street’s “gurus” will be able to learn from the GJP study and improve their results. But it’s also likely that their competition will be getting tougher at the same time, because the knowledge of how to become a superior forecaster is now widely available (thanks to Tetlock and Gardner). The only thing we can be sure of is that Wall Street will keep peddling the idea that its crystal ball is clearer than the competition’s.

And it will charge you plenty for believing in that story, whether or not it turns out to be true. Keep this warning from decision scientist Baruch Fischhoff in mind whenever you’re tempted to act on some hedgehog’s forecast: “When both forecaster and client exaggerate the quality of forecasts, the client will often win the race to the poorhouse.”


Larry Swedroe is the director of research for The BAM Alliance, a community of more than 140 independent registered investment advisors throughout the country.

© Copyright 2015 ETF.com. All rights reserved
