Joakim Book

Freelance writer and globetrotter with an unhealthy addiction to financial history and all things money. #future #optimism #monpol #climate

Published on: Jan 7, 2020

2 min read

Imagine mailing a stock prediction for next week to ten thousand unknowing recipients. Half of the letters contain a prediction of rising prices for some instrument, and the other half a gloomy forecast for the same stock. The following week you repeat the exercise with the half for whom you happened to be right. After ten weeks, your initial sample has dwindled to nine people, but to them you increasingly look like a stock market genius: for ten weeks straight you correctly predicted the movement of a given stock.
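The arithmetic of the scam is simple halving. A minimal Python sketch of the ten-week mailing scheme described above:

```python
# Start with 10,000 recipients. Each week half the letters say "up" and
# half say "down", so whichever way the stock moves, exactly half the
# remaining recipients received a correct call and stay in the sample.
recipients = 10_000
for week in range(10):
    recipients //= 2

print(recipients)  # 9 recipients left, each having seen ten correct calls
```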

This problem is known variously as the Baltimore stockbroker scam, the “file drawer problem,” or “publication bias,” all illustrating that with large enough samples even chance will look suspiciously nonrandom. A chosen outlier — ten correct calls in a row — may look impressive to the selected nine who received them, but you don’t know how many chances the broker had to achieve those results. With large numbers of people trying for long enough, you eventually end up with a few people with outrageous results.

You can perform similar tests with coin flipping, that quintessential game of chance; in a sample of five thousand coin flippers, the chance that somebody emerges with ten heads in a row is over 90 percent. Or consider the famous Birthday Problem: no more than twenty-three strangers are needed in a room for a 50 percent probability that two of them share a birthday.
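Both figures are easy to verify directly. A short Python check, assuming each flipper makes exactly ten flips:

```python
# Chance that at least one of 5,000 flippers throws ten heads in a row,
# given ten flips each: 1 minus the chance that nobody does.
p_streak = 1 - (1 - 1 / 2**10) ** 5000
print(f"{p_streak:.4f}")  # ≈ 0.9924, comfortably above 90 percent

# Birthday Problem: smallest group with a >= 50% chance of a shared birthday.
p_distinct = 1.0  # probability that all birthdays so far are distinct
n = 0
while 1 - p_distinct < 0.5:
    n += 1
    p_distinct *= (365 - (n - 1)) / 365

print(n)  # 23
```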

Examples like these abound, and they should make us very skeptical of fancy results that may have arisen from randomness (see my two recent Mises articles on Nassim Taleb). Yet the relationship between human intuition and the computational powers offered by the automation of tasks previously thought uniquely human is intriguing.

What if machines can do x better than humans? What happens when they take over our judgments and decision-making? And the spiral toward dystopian I, Robot scenarios seems all but guaranteed (yes, Mr. Musk especially).

Not Everything Is Random

Take Jim Simons, the successful hedge fund manager whose Medallion fund uses machine learning and quantitative investments to outperform markets. According to Greg Zuckerman’s latest book on Simons, The Man Who Solved the Market, Medallion has returned 39.1 percent annualized since 1988, whereas other famed investors like Peter Lynch, George Soros, or Warren Buffett have "only" achieved 20–32 percent returns over their respective years of operations.
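To get a feel for how large that gap becomes under compounding, here is a quick Python sketch; the thirty-year horizon and the 25 percent peer rate (a midpoint of the 20–32 percent range) are illustrative assumptions, not figures from Zuckerman:

```python
# Growth of $1 compounded annually over an assumed 30-year horizon.
medallion = 1.391 ** 30  # 39.1% annualized, Zuckerman's figure for Medallion
peer = 1.25 ** 30        # 25% annualized, illustrative midpoint of 20-32%

print(f"${medallion:,.0f} vs ${peer:,.0f}")  # roughly $20,000 vs $800
```

A fourteen-point difference in annual return compounds into a roughly twenty-five-fold difference in terminal wealth.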

Randomness and exuberant tech worries are merely two reasons why the Austrian relationship to such machine-learning, quant-reliant investors like Jim Simons is ambivalent. We may quibble over the precise number and the randomness involved, but it’s undeniable that Simons has done something with machines and markets that few thought was possible.

On the one hand, it's impossible not to admire Simons's dedicated entrepreneurship: "There's a pattern here; there has to be a pattern," Zuckerman recounts Simons frequently saying to his employees.

Unlike many mainstream financial economists, Austrians aren't beholden to Efficient Market Hypothesis-style reasoning and shouldn't necessarily have an aversion to the idea that a clever hedge fund manager could outperform the stock market. After all, we don't object to an entrepreneur's use of knowledge and technology to deliver superior consumer products. In a sense, beating the financial markets isn’t conceptually different than outsmarting your rivals in other competitive markets.

Simons and most of his foundational collaborators — James Ax, Robert Mercer, or Sandor Straus — never "believe[d] that the market was truly a 'random walk,' or entirely unpredictable," writes Zuckerman. Simons clearly "unearth[ed] new ways to make money."

On the other hand, Simons's approach to markets is a purely technical, quant-driven one, with no connection to underlying economic variables. That should raise flags for most Austrian intellectuals. Sandor Straus, described by Zuckerman as a "data guru" in the early days of Simons’s venture, was an obsessive data collector, on occasion physically digging up decades-old interest rates at the New York Fed in order to feed Medallion’s trading algorithm even more data. Admittedly, the idea that an interest rate or closing price from twenty years past has relevance for tomorrow's share prices seems far-fetched.

The underlying premise in Simons's trading strategy was the consistency of trading and market price reactions; he allegedly discovered that trading patterns "might repeat, under the assumption that investors will exhibit similar behavior in the future." Simons's conviction and underlying trading model is that "historic patterns can form the basis of computer models capable of identifying overlooked and ongoing market trends, allowing one to divine the future from the past."

Such confident, math-embracing convictions clash strongly with the antistatistical impetus in both recent and earlier Austrian literature. For instance, in chapter 16 of Human Action, Mises writes:

It is necessary to emphasize these facts again and again because it is customary nowadays to play off the statistical elaboration of price data against the theory of prices. However, the statistics of prices is altogether questionable. Its foundations are precarious because circumstances for the most part do not permit the comparison of the various data, their linking together in series, and the computation of averages. Full of zeal to embark upon mathematical operations, the statisticians yield to the temptation of disregarding the incomparability of the data available. (p. 328, emphasis added)

Furthermore, in the 1985 preface to Mises's Theory and History, Rothbard aggressively pushed the same point:

It is impossible to test [economic theory] in any way by checking its propositions against homogeneous bits of uniform events. For there are no such events. The use of statistics and quantitative data may try to mask this fact, but their seeming precision is only grounded on historical events that are not homogeneous in any sense. Each historical event is a complex, unique resultant of many causal factors. Since it is unique, it cannot be used for a positivistic test, and since it is unique it cannot be combined with other events in the form of statistical correlations and achieve any meaningful result. (p. xvii, emphasis added)

It is hard to reconcile those strong methodological points with an entirely data-driven, quant investment strategy. The picture that emerges from Zuckerman’s great investigation into this man who was characteristically shy of the spotlight is that Simons himself doesn’t know what accounts for his outsized returns; he merely programmed a machine to spot patterns, learn from them, and keep adjusting trades using huge amounts of leverage — a strategy that, so far, keeps making him money.

Strictly speaking, Mises and Rothbard object to using statistical inferences in the validation of theory; true praxeological theory can only be logically derived from first principles. Granted, Simons and his team of data mining programmers aren’t doing economic theory — like other entrepreneurs, they are merely out to make a buck or two.

Most of the time the team wasn’t even aware of what it was trading. Zuckerman recounts an iconic instance in which one of Simons's comanagers presented the firm to outside investors and gave an example of buying or selling Chrysler shares — except that Chrysler had been acquired by Daimler several years earlier and no longer traded as a stand-alone security. To the trading algorithm and its programmers at Simons's fund, such pesky details didn’t matter, as they were merely concerned with the historical trading correlations between securities. What the companies sold or what they were called was entirely incidental.

Perhaps Simons, like Buffett, is merely the top outcome of an incredibly large initial sample; investors enter and exit all the time, with incapable and unlucky investors quickly relieved of their surplus cash. This survivorship bias makes for complicated assessments, and it's not clear that we can separate the Simonses and Buffetts of the world from an incredibly complicated Baltimore stockbroker scheme.

Then again, thirty years is a very long track record, and while nobody seems to know precisely what is earning the Medallion fund its outsized returns, they seem legit.

Zuckerman’s latest book is a great entry for those interested in machine learning in financial markets and particularly in Jim Simons and the fund he built.

Note: The views expressed are not necessarily those of the Mises Institute.
