This is the first post in a series of regular posts with thoughts on relevant investment topics. In our ongoing research into our own dilemmas, we recognize that these same issues likely sit at the heart of questions that we all face. With sometimes equal measures of frustration and excitement, we hope to contribute to the marketplace of information and discussion. In the spirit of interaction and feedback, please reach out with responses or topics of interest.
When I was a kid in school and made a mistake on a math test, perhaps having missed a step in the process, the incredibly evident reality was that the final answer was wrong. Regardless of how confident I was in the procedure, an answer whose two sides simply did not equal made me acutely and immediately aware of an “error” somewhere in the process. I liked this about mathematics -- it was straightforward and it was honest: if you did something wrong, you knew it by the clear inaccuracy in the end result.
Later in my schooling, when I plunged more deeply into statistics, I came to learn that there was a new type of “error” that was, quite literally, NOT an actual error. The “error term”, or simply, “error”, was merely the deviation of a single measurement (or one observed value) away from the true value (the average, or more formally, the expected value) of the population of the quantity being measured. For example, if the mean length of a sunflower petal was 1in, and in hopes of concluding that “she loved me”, I randomly plucked a petal measuring 1.25in, then that petal had an “error” of 0.25in. And if my next pluck of “she loved me not” measured only 0.80in, I had observed a wholly acceptable petal with an “error” of -0.20in. But those were both still sunflower petals, so the model was still right!
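As a minimal illustration of this kind of “error”, here is the petal example from above in a few lines of Python (the mean and measurements are the numbers from the example, nothing more):

```python
# The statistician's "error": an observation's deviation from the
# population mean -- not a mistake in the model.
mean_length = 1.00  # assumed true mean petal length, in inches

observations = [1.25, 0.80]  # the two plucked petals from the example
errors = [x - mean_length for x in observations]

print(errors)  # deviations of +0.25in and -0.20in
```

Both petals are perfectly valid sunflower petals; the nonzero “errors” say nothing about the model being wrong.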
Suddenly, errors were no longer errors; they were just potential movements that, over a long expanse of time and many observations, converged around a favored level (Central Limit Theorem). More importantly -- and what really turned my world upside down -- these new types of errors didn’t give me much insight into whether the thing I had done was actually wrong. I hadn’t necessarily found an error if a particular observation was a perfectly “standard” deviation. Better yet, the more extreme errors became mere multiples of “standard”. That’s not a complete and utter failure, a teacher could say, it’s just a 3 standard deviation error -- totally expected roughly 0.3% of the time. What had the world come to? Could a model not just be wrong?
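How often should such a multiple of “standard” occur? Under a normal-distribution assumption, the answer is pure textbook probability and can be computed directly (this is a generic statistical fact, not anything specific to any particular model):

```python
import math

def two_sided_tail(k: float) -> float:
    """P(|Z| > k) for a standard normal Z, via the complementary error function."""
    return math.erfc(k / math.sqrt(2))

# A 3-standard-deviation "error" is expected about 0.27% of the time
# under normality -- rare, but entirely within the model's own terms.
print(f"{two_sided_tail(3):.2%}")
```

Of course, real return distributions are famously fatter-tailed than the normal, which only makes extreme “errors” more expected, not less.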
In my current life, as a quant who builds systematic trading models, this conundrum rears its uncomfortable head fairly often, and with even greater consequences. When an observed outcome of a model measures at a relatively outsized level, such as our performance this past month, we naturally ask, “what’s gone wrong… is it broken?” Well, maybe -- or maybe it’s just a deviation of some number of units, totally expected, and quite literally “standard”.
To resolve this difficult question, there are two roads to pursue: one quantitative, one qualitative. The quantitative method is relatively straightforward and, of course, logically consistent with the original premise: does the event fall within the expected distribution? Unfortunately, the only thing that will sharpen this assessment is the continued passage of time and the accretion of more observations. Thus, with that response “set in motion” and left to its own rigorous standards, the secondary method is the qualitative view, which is to simply ask: does what happened still mesh with my hypothesis, and do I still believe my reasoning to be sound? In life, when the healthy eater who runs twice a week gets a damaging prognosis, and the overeating couch potato lives to a ripe old age, we don’t change our hypothesis that a healthy and active lifestyle leads to longevity.
The outlier does not break the hypothesis. Similarly, in the equity markets, one believes that companies with more attractive fundamentals will outperform their industry peers with less attractive fundamentals. All else being equal, one would go long the superior “value” and short the inferior “value”. Eugene Fama shared the Nobel Prize in part for the work with Kenneth French that substantiated the value factor; the factor is now widely used, but that does not change the fact that it is economically rational and objectively sane. Healthy eating is widely practiced, but it still leads to a longer life.
Accordingly, when an anticipated mean reversion keeps “expanding”, models based on rational hypotheses and extensive empirical confirmation further increase their weighting to the proven probabilistic edge. However, the obvious negative outcome of “betting with the odds” is the divergence continuing, and consequently, further losses. And so goes the feedback loop, with the same question being asked again: was this NEXT move in the same direction within the tail expectation of the estimated distribution? And, once again, is the hypothesis still sound? If yes, continue; if no, stop. But can this feedback loop continue ad infinitum? No, it cannot, for any single path of an otherwise acceptable probabilistic process may be the path that takes an investor past the point of no return (pun intended). I shall talk more about path dependency (i.e. ergodicity, or the lack thereof) in a future post…
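That feedback loop can be sketched as a simple decision rule; the 3-unit tail limit below is an illustrative threshold, not a stated rule of any actual model:

```python
def review(z: float, hypothesis_sound: bool, tail_limit: float = 3.0) -> str:
    """Sketch of the feedback loop: continue only if the move is within
    tail expectation AND the qualitative hypothesis still holds."""
    if abs(z) <= tail_limit and hypothesis_sound:
        return "continue"
    return "stop"

print(review(z=2.3, hypothesis_sound=True))   # within tails, reasoning intact
print(review(z=3.5, hypothesis_sound=True))   # beyond tail expectation
print(review(z=1.0, hypothesis_sound=False))  # reasoning no longer believed
```

The catch the text points out is that even a loop that always answers “continue” correctly can, along a single unlucky path, carry an investor past the point of no return.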
Chief Investment Officer
Logica Capital Advisers, LLC