I came across an incredible piece of research in a book called Noise that is quite shocking, especially if you have studied for many years and regard yourself as an expert. I know I was quite shocked, but strangely not surprised.
In 1954, psychology professor Paul Meehl, a polymath with expertise in the psychology of the paranormal, math, law, philosophy, religion and practically every other field of knowledge, shocked the thinking world when he published a book, Clinical Versus Statistical Prediction: A Theoretical Analysis and a Review of the Evidence. You are probably wondering what could have made the book such a sensation.
In essence, the book covers a series of studies in which Meehl had Ph.D.-level experts predict the future success of job candidates and the prognosis of psychiatric patients based on assessment data gathered on the subjects. Meehl used a simple regression model to make "mechanical" judgements, which he compared with the expert judgements. [There is a lot of technical detail in the statistical analysis, but for the purposes of this letter it is not relevant.]
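The mechanical side of these studies can be sketched as nothing more than a fixed weighted sum of a candidate's assessment scores. The predictors, candidates and weights below are hypothetical, purely for illustration; they are not Meehl's actual data or models:

```python
# A minimal sketch of a "mechanical" judgement: a fixed weighted sum of
# predictor scores, with no expert discretion anywhere in the loop.

def mechanical_prediction(scores, weights):
    """Linear combination of predictor scores -- no expert judgement."""
    return sum(w * s for w, s in zip(weights, scores))

# Hypothetical candidates scored on (aptitude, conscientiousness, experience).
candidates = {
    "A": (0.8, 0.6, 0.7),
    "B": (0.5, 0.9, 0.4),
}
weights = (0.5, 0.3, 0.2)  # illustrative weights, not from any study

for name, scores in candidates.items():
    print(name, round(mechanical_prediction(scores, weights), 2))
```

The whole point is that the weights are fixed in advance: the model cannot be talked into an exception, which is exactly where human experts go wrong.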
The shocking result is that the mechanical model, with no expertise at all, almost always produces better predictions than the so-called experts. This is what behavioural economics calls the illusion of validity.
Illusion of validity is a cognitive bias in which a person overestimates their ability to interpret and predict accurately the outcome when analyzing a set of data, in particular when the data analyzed show a very consistent pattern—that is, when the data "tell" a coherent story. [Wikipedia]
If this blows you away like it did me, know that a huge amount of research has gone into replicating and extending this finding. Look up the work of psychologists Paul Hoffman and Lewis Goldberg in the 1960s, which advanced Meehl's research and found it robust. Interestingly, Meehl himself was a psychoanalyst who continued to see patients, believing he could add value, and was never deterred from being an expert despite his own research results.
Just to step this thesis up a little, Martin Yu and Nathan Kuncel extended Goldberg's findings with an experiment analysing 847 executive candidates for a consulting firm. One approach had experts use seven predictor inputs to make their assessments. A second used a simple linear regression model, like Meehl's and Goldberg's. A third generated 10,000 sets of random weights for the seven predictors and applied these random linear models to predict job performance. The shocking finding was that any linear model, random or not, consistently outperformed the experts.
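The random-weights idea can be illustrated on synthetic data. Everything below (the data-generating process, the number of candidates, the noise level) is an assumption for demonstration, not Yu and Kuncel's dataset. What it shows is the mechanism: when each predictor carries some genuine signal, almost any set of positive weights yields a linear model with real predictive validity:

```python
# Sketch of the random-linear-model idea on synthetic data (assumed
# setup, not the Yu & Kuncel study): score candidates under many
# randomly weighted linear models and measure how well each model's
# predictions correlate with the outcome.
import random

random.seed(0)
N_CANDIDATES, N_PREDICTORS, N_MODELS = 200, 7, 1000

# Synthetic predictors in [0, 1]; the outcome is a noisy function of
# them, so each predictor genuinely carries some signal.
X = [[random.random() for _ in range(N_PREDICTORS)] for _ in range(N_CANDIDATES)]
y = [sum(row) + random.gauss(0, 1) for row in X]

def pearson(a, b):
    """Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (z - mb) for x, z in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((z - mb) ** 2 for z in b) ** 0.5
    return cov / (sa * sb)

# Evaluate many random-weight linear models against the outcome.
correlations = []
for _ in range(N_MODELS):
    w = [random.random() for _ in range(N_PREDICTORS)]  # random positive weights
    preds = [sum(wi * xi for wi, xi in zip(w, row)) for row in X]
    correlations.append(pearson(preds, y))

print(f"mean validity of random models: {sum(correlations) / N_MODELS:.2f}")
print(f"worst random model: {min(correlations):.2f}")
```

Under these assumptions even the worst random model keeps a positive correlation with the outcome, which is the intuition behind the finding: consistency beats expert inconsistency.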
In the context of my job allocating capital and my part time job trading, I think there are some valuable takeaways albeit sometimes confronting.
Ditto’s main value proposition is our ability to identify trading talent, and I have been doing this for 10+ years. As a trader of several decades, and someone who rates his ability to read people, I started out thinking I could add value to the process by interviewing traders, getting inside their heads to understand what makes them and their strategies tick, and becoming an expert at forecasting who would be a successful trader.
If there is one thing I have learned from my understanding of statistics and forecasting under conditions of uncertainty, it is that a mechanical process has produced better results for us than applying our own judgement. I am writing this note to myself and my partners: trust our process. We have spent the past 10 years refining it, so let us stick with it, however much our foolish expertise might want to override it. Let me qualify this by saying that our expertise is still required in certain areas of the due diligence process where the inputs are not clear cut, and in setting the basic filter criteria for talent.
The second thing this letter has reinforced for me is something I have known forever and still struggle to implement formally. I still want to apply a discretionary (judgement-based) trading approach, and I will argue that within Meehl's thesis there is still scope for this. But the judgement around money management (position sizing, rules around drawdown, and so on) needs to be mechanical, and this is where I still fail. I have traded my best when I have written up my trading rules, posted them where I can always see them, and held myself to account. Breaking these mechanical rules because I believed, as an expert, that I knew best has always led to erratic, sub-par results.
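What "mechanical money management" might look like can be sketched with two toy rules: fixed-fractional position sizing and a hard drawdown cutoff. The thresholds and numbers here are hypothetical illustrations, not my actual rules or a recommendation:

```python
# Hypothetical mechanical money-management rules: size every position
# off a fixed risk fraction, and stop opening trades past a drawdown
# limit. The specific numbers are illustrative only.

RISK_PER_TRADE = 0.01   # risk 1% of equity on any single trade
MAX_DRAWDOWN = 0.15     # no new trades beyond a 15% peak-to-trough drawdown

def position_size(equity, entry, stop):
    """Units to trade so a stop-out loses exactly RISK_PER_TRADE of equity."""
    risk_per_unit = abs(entry - stop)
    return (equity * RISK_PER_TRADE) / risk_per_unit

def may_trade(equity, peak_equity):
    """Mechanical gate: True only while drawdown is inside the limit."""
    drawdown = 1 - equity / peak_equity
    return drawdown < MAX_DRAWDOWN

print(round(position_size(100_000, entry=50.0, stop=48.0), 1))  # 500.0 units
print(may_trade(90_000, 100_000))   # True  (10% drawdown)
print(may_trade(80_000, 100_000))   # False (20% drawdown)
```

The design point is that neither function takes an opinion as input: once written down, the rules answer "how much?" and "may I?" without consulting the expert.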
As hard as this may sound, we need to try to get out of our own way as much as possible. Let me leave you with a principle my dad always repeated to me: KISS (keep it simple, stupid).