The power of statistics emerges as the sample size grows. I know, this has been repeated many times in Stats 101, in a bunch of YouTube videos, and maybe in Statistics for Dummies. But has it stopped people from making judgement calls on a purely empirical basis? Statements like “I’ve seen it work x times, so it’s legit” or “it’s a bad indicator because I tried it on several stocks and it didn’t work” don’t hold up when you live in a complex realm composed of an incredible amount of data, multiple dimensions of reality, and endless chain reactions, such as public administration or stock investing.
To illustrate, I ran a back-test of a simple combination of Bollinger Bands and the MACD indicator from 2004 to 2012. It’s an end-of-day, mean-reversion strategy with a price filter and a liquidity filter. After testing it on 30 random stocks listed on the TSX, this is what I got.
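To make the setup concrete, here is a minimal sketch of the two indicators and one possible way to combine them into a mean-reversion entry signal. The exact entry rule, parameter values, and the synthetic price series are my assumptions for illustration, not the back-tested rules themselves.

```python
import numpy as np
import pandas as pd

def bollinger(close, n=20, k=2.0):
    """Rolling mean plus/minus k rolling standard deviations."""
    mid = close.rolling(n).mean()
    sd = close.rolling(n).std()
    return mid - k * sd, mid, mid + k * sd

def macd(close, fast=12, slow=26, signal=9):
    """MACD line, signal line, and histogram (standard EMA definitions)."""
    macd_line = (close.ewm(span=fast, adjust=False).mean()
                 - close.ewm(span=slow, adjust=False).mean())
    signal_line = macd_line.ewm(span=signal, adjust=False).mean()
    return macd_line, signal_line, macd_line - signal_line

# Synthetic daily closes, for illustration only.
rng = np.random.default_rng(0)
close = pd.Series(100 + np.cumsum(rng.normal(0, 1, 500)))

lower, mid, upper = bollinger(close)
macd_line, signal_line, hist = macd(close)

# Hypothetical mean-reversion entry: price pierces the lower band
# while the MACD histogram is rising (downside momentum fading).
entry = (close < lower) & (hist > hist.shift(1))
print(entry.sum(), "entry signals over", len(close), "bars")
```

In a real run the same signal would be evaluated across the whole universe each day, after the price and liquidity filters have trimmed the candidate list.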
Nope, not impressive. But it looks quite different when we deploy it to the entire market: about 2,300 stocks listed on the TSX (1,400 after applying a survivorship filter).
Commissions are not a concern. As shown below, the strategy holds fewer than two stocks most of the time, not 200.
The real obstacle to implementing this strategy, for retail investors, is computing power: gathering the latest data, finishing the calculations, and executing the trades right before the market closes, every single day, is very hard for an individual to do reliably. For big players, the obstacle is liquidity: because the strategy targets low-liquidity segments, there is no guarantee the trading volume will be large enough for institutional-sized orders.
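The timing constraint above can be sketched as a simple window check: the entire fetch-compute-submit chain has to fit into the last few minutes of the session. The ten-minute lead time and the assumption that the local clock is on Eastern Time are mine, for illustration.

```python
from datetime import datetime, time, timedelta

MARKET_CLOSE = time(16, 0)           # TSX closes at 16:00 ET (assumes local ET clock)
LEAD_TIME = timedelta(minutes=10)    # assumed budget for fetch, compute, and submit

def in_execution_window(now: datetime) -> bool:
    """True only during the short window right before the close in which
    the whole fetch -> compute -> submit chain must complete."""
    close_dt = datetime.combine(now.date(), MARKET_CLOSE)
    return close_dt - LEAD_TIME <= now < close_dt

print(in_execution_window(datetime(2012, 3, 1, 15, 55)))  # True: inside the window
print(in_execution_window(datetime(2012, 3, 1, 12, 0)))   # False: too early
```

Miss the window on any given day and the signal goes stale, which is exactly why this is manageable for an automated desk but painful for someone trading by hand.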