Vingean Uncertainty and Macroeconomics

There exists a thought puzzle that has vexed sci-fi writers for decades, perhaps even longer.

The quote below was put forth by the notable sci-fi author Vernor Vinge:

Assume you’re writing about an alien species that is presumably smarter than our own. How do you effectively plan out their actions? If you could do this, you would be as smart as that species.

Vinge is an acclaimed writer, and his collection of short stories continues to inspire thinkers globally.

For the backstory behind Vingean uncertainty, in Vinge’s own words:

Of course, I never wrote the “important” story, the sequel about the first amplified human. Once I tried something similar. John Campbell’s letter of rejection began: “Sorry—you can’t write this story. Neither can anyone else.”… “Bookworm, Run!” and its lesson were important to me. Here I had tried a straightforward extrapolation of technology, and found myself precipitated over an abyss.

It’s a problem writers face every time we consider the creation of intelligences greater than our own. When this happens, human history will have reached a kind of singularity—a place where extrapolation breaks down and new models must be applied—and the world will pass beyond our understanding. — Vernor Vinge, True Names and Other Dangers, p. 47.

In a nutshell, how can sci-fi writers portray the actions of a race smarter than themselves? If they knew, surely that would result in a paradox, as the writers would be equivalent in intelligence to the “smarter” species. A recent example of this phenomenon is the AlphaGo match against the Go player Lee Sedol. Elite players did not understand AlphaGo’s moves at the time, but once the games ended it was crystal clear to the professionals that they had been dealing with a superior intelligence. Something had abruptly changed in their world forever.

But this idea holds far greater significance for our world than a game like Go, however complex that game might be.

We can apply this idea to the field of macroeconomics and the economy at large. In a system like the macroeconomy, it becomes impossible for us to properly assess the second- or even third-order consequences of our decisions. While academics treat much of the field as a science, macroeconomics is not readily falsifiable. The same action can lead to bewildering effects when viewed in its aftermath. It could be the madness of crowds, like the Dutch paying outrageous prices for tulip bulbs, otherwise known as Tulip mania.

Then again, there is also the wisdom of crowds, where the law of large numbers converges to roughly the right answer. This was the case with the classic guess-the-weight-of-the-cow contest. For the most part, the economy resembles the latter situation: most stocks trade at roughly their fair value. In other words, this is the weak form of the efficient markets hypothesis.
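As a minimal sketch of that convergence (the true weight and noise level below are made-up illustrative numbers, not data from the original contest), averaging many independent, noisy guesses lands close to the truth:

```python
import random

# Wisdom of crowds via the law of large numbers: many independent,
# zero-mean-error guesses of a cow's weight average out near the true value.
TRUE_WEIGHT = 1200.0   # pounds -- hypothetical figure for illustration
NOISE_SIGMA = 300.0    # spread of individual guessing errors, also hypothetical

def crowd_estimate(n_guessers: int) -> float:
    """Average n independent guesses, each off by zero-mean Gaussian noise."""
    guesses = [TRUE_WEIGHT + random.gauss(0, NOISE_SIGMA) for _ in range(n_guessers)]
    return sum(guesses) / len(guesses)

if __name__ == "__main__":
    for n in (1, 10, 100, 10_000):
        print(f"{n:>6} guessers -> estimate ~ {crowd_estimate(n):.0f} lbs")
```

With one guesser the estimate swings wildly; with ten thousand it settles within a few pounds of the true value, which is the whole point of the crowd.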

Nassim Taleb would concur, attributing much of this to the lack of personal risk-bearing in the system. As he recounts in Antifragile, Roman engineers had to spend a night under the bridges they built, together with their families. In that spirit, should the chief economists of the Fed and the ECB face a penalty for their role in engineering the gears of their nations’ economies? Why has this never happened in Western economies?

One could say that it is because we pessimistically accept bubbles and their ensuing recessions – it is perceived as human nature. Maybe what it will take to “solve” bubbles is a radical shift in how finance is structured: computers leading the way instead of humans. Currently, we borrow, save, and invest as retail consumers telling the computer what to do. I want to invest in Apple – *click*.

In the not-so-distant future it could be the opposite: computers tell us to make a certain trade, and we use Facepay to confirm the exchange. Without humans in the picture, an argument could be made that many of our all-too-human bubbles could cease to exist.


I’ve always been fascinated by econophysics: the confluence of physics and economics. The physics envy is real. We can deduce the angular momentum of celestial bodies and calculate the electrical charge in a circuit. These are facts. Macroeconomics is unlike this. For instance, the field has a trade equation modelled on the equation for gravity. One central problem highlighted after 2008 was that many of the equations were too precise, the variables too neatly calculated.
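That trade equation is presumably the gravity model of trade, which in its simplest form borrows its shape directly from Newton; a rough sketch of the analogy:

```latex
% Gravity model of trade (Tinbergen), side by side with Newtonian gravitation.
% T_{ij} : trade flow between countries i and j
% Y_i, Y_j : the countries' economic "masses" (typically GDP)
% D_{ij} : distance between them; G : a fitted constant
T_{ij} = G\,\frac{Y_i\,Y_j}{D_{ij}}
\qquad \text{cf.} \qquad
F = G\,\frac{m_1\,m_2}{r^2}
```

The resemblance is exactly the kind of physics envy described above: an elegant functional form fitted onto far messier human behaviour.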

There is plenty in the field of macroeconomics that is still up for debate. We are still unsure whether New Keynesian or Real Business Cycle models are the better mental model of our world – many now say it’s a close hybrid of the two.

People are people. Indeed, chief economists trying to direct how people spend their money through interest-rate manipulation run into this issue again and again. Whether it was the 1873 railroad crash in the United States or the 1720 South Sea Bubble in the UK, 2008 was no different. We arrive at an Anna Karenina law of macroeconomics: all successful economies look alike; every failing economy fails in its own way. The most common way they fail is by not accounting for the irrational, mimetic behaviour of their own populations.

Therefore, to stop repeating these problems, it could take a new breed – general artificial intelligence – to end the financial collapses we have come to see as so pervasive. An A.I. can learn far faster than any human and can learn from the suspected past causes of bubbles. It would not be surprising to see an A.I. recommending financial policy as a pseudo Federal Reserve chair decades from now, assuming the necessary advances are made in deep learning.

This defeats the aforementioned Vingean uncertainty because it is no longer a privileged elite making decisions about how we use our money; rather, the A.I. is genuinely smarter than us. It can predict things we cannot with our relatively limited prefrontal cortices.

Who knows what the future holds, or whether we will see yet another AI winter. Still, I am bullish on the notion that AI can bring an enormous amount of positive change to the world, so long as we build it cautiously. With that, the staccato rhythm of bubbles and crashes that seem to happen without our knowledge could come to an end.

(Funnily enough, Vinge himself predicts that once humans create superhuman intelligence, we will cease to exist.)
