In his 1998 paper, Jonathan Berk showed that when stocks are sorted into groups on a variable (e.g., the B/E ratio) that is correlated with a known explanatory variable (e.g., beta), the power of the known variable to predict expected returns within each group diminishes when tested with cross-sectional regressions. This is very likely why Fama and French (1992) found that the explanatory power of beta disappeared, and why Daniel and Titman (1997) concluded that stock characteristics matter more than covariances. For researchers and data analysts, this is a perfect example of how a seemingly harmless manipulation of data can cause a meaningful loss of information. Without care, such loss can lead to confusing or even completely wrong conclusions.
The intuition behind this issue is rather simple: when the data are divided into smaller groups and tested separately, the error in the beta estimates becomes “louder” as each group becomes smaller and more homogeneous. The error-minimizing advantage of a large sample disappears once it is split into smaller groups, because within each group the spread of true betas shrinks while the estimation error does not, until the error overwhelms the useful information in the group.
Getting the intuition is one thing; identifying exactly where the issue occurs and tracing it through the proof is a different story.
Assume the CAPM holds: $E[R_i] = R_f + \beta_i (E[R_m] - R_f)$, in which the systematic risk of stock $i$ is $\beta_i = \mathrm{Cov}(R_i, R_m)/\mathrm{Var}(R_m)$. Realized return is the same as expected return: $R_i = E[R_i]$.
Scenario 1: the CAPM is tested cross-sectionally on the full sample with an infinite number of stocks and no estimation error between the theoretical beta and the estimated beta, i.e., $\hat\beta_i = \beta_i$. The coefficient of the regression $R_i - R_f = c\,\hat\beta_i (E[R_m] - R_f) + u_i$ is:

$$c = \frac{\mathrm{Cov}\!\left(R_i - R_f,\ \hat\beta_i (E[R_m] - R_f)\right)}{\mathrm{Var}\!\left(\hat\beta_i (E[R_m] - R_f)\right)} = \frac{\sigma^2_\beta}{\sigma^2_\beta} = 1$$
Interpretation: stock returns are perfectly linear ($c = 1$) in their exposure to the market risk premium; beta is a perfect predictor of stock returns.
Scenario 2: there’s error in the estimated beta, i.e., $\hat\beta_i = \beta_i + e_i$, with $e_i \sim N(0, \sigma^2_e)$ independent of $\beta_i$. This is where the trouble originates. The existence of $e_i$ gives birth to the original noise $\sigma^2_e$, which gets passed down through the rest of the test. As we can see, the coefficient of the same regression is already contaminated:

$$c = \frac{\mathrm{Cov}\!\left(R_i - R_f,\ \hat\beta_i (E[R_m] - R_f)\right)}{\mathrm{Var}\!\left(\hat\beta_i (E[R_m] - R_f)\right)} = \frac{\sigma^2_\beta}{\sigma^2_\beta + \sigma^2_e}$$
* Assuming estimated and observed returns are the same for convenience.
Interpretation: $c = \sigma^2_\beta / (\sigma^2_\beta + \sigma^2_e) < 1$; stock returns appear less sensitive to the systematic risk they bear, and beta is no longer a perfect predictor of stock returns.
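The attenuation in Scenario 2 is the classic errors-in-variables bias, and it is easy to verify numerically. The sketch below (all parameter values are illustrative, not the paper's; the error is made deliberately large so the bias is visible) regresses CAPM returns on a noisy beta and recovers a slope close to $\sigma^2_\beta / (\sigma^2_\beta + \sigma^2_e)$:

```python
# Errors-in-variables demo: the OLS slope shrinks toward
# var(beta) / (var(beta) + var(error)). Illustrative values only.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
beta = rng.normal(1.0, 0.5, n)              # true betas
beta_hat = beta + rng.normal(0.0, 0.5, n)   # noisy estimates (error sd chosen large)

mrp = 0.05                                  # market risk premium
excess_ret = beta * mrp                     # CAPM returns, no other noise

x = beta_hat * mrp
slope = np.cov(excess_ret, x)[0, 1] / np.var(x, ddof=1)
theory = 0.5**2 / (0.5**2 + 0.5**2)         # attenuation factor = 0.5
print(slope, theory)                        # slope lands close to 0.5
```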
Scenario 3: now all stocks are sorted into N fractiles by a variable linearly correlated with beta. Within the jth fractile, the conditional variance of $\beta_i$ is now redefined:

$$\mathrm{Var}(\beta_i \mid i \in j) = g(j, N)\,\sigma^2_\beta$$
where $g(j, N) \in (0, 1]$ is a concave-up function of j that “shrinks” $\sigma^2_\beta$ when attention is restricted to the jth fractile (a partial integral of the full-sample distribution).
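The shape of $g(j, N)$ can be seen directly by sorting a simulated normal sample: each fractile's variance is only a small fraction of the full-sample variance, and the fraction is smallest for the middle fractiles. A minimal sketch (the decile count, sample size, and seed are my own choices):

```python
# Within-fractile variance of a sorted normal sample, as a fraction of
# the full-sample variance -- a numerical stand-in for g(j, N).
import numpy as np

rng = np.random.default_rng(1)
beta = rng.normal(1.0, 0.5, 200_000)
full_var = beta.var(ddof=1)

N = 10
fractiles = np.array_split(np.sort(beta), N)
g = [f.var(ddof=1) / full_var for f in fractiles]

print([round(v, 3) for v in g])
# U-shaped (concave up): middle fractiles shrink the most, tails the least,
# and every fractile's variance sits far below the full sample's.
```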
Run the regression test again and the coefficient is now:

$$c_j = \frac{g(j, N)\,\sigma^2_\beta}{g(j, N)\,\sigma^2_\beta + \sigma^2_e}$$
Interpretation: $g(j, N)$, a term born from the sorting process, now serves as a “noise amplifier”: as it gets smaller, it magnifies the relative weight of $\sigma^2_e$ and dampens the coefficient as a result. As a concave-up function of j, it gets smaller when N is larger and/or j moves closer to the middle group. The graph below shows how the coefficient changes with $g(j, N)$ when $\sigma^2_\beta$ is fixed at 1 and $\sigma^2_e$ is held at a constant value.
To illustrate with actual data, 2,000 stock betas are randomly generated with mean 1 and standard deviation 0.50; 2,000 expected returns are calculated from these betas with a market return of 6.00% and a risk-free rate of 1.00%; estimated betas are obtained by adding 2,000 random errors with mean 0 and standard deviation 0.05. All expected returns are ranked from low to high, and this ranking is used as the basis for sorting. In summary:
- Number of stocks k = 2,000
- True betas: $\beta_i \sim N(1, 0.50^2)$
- Market return $R_m$ = 6.00%, risk-free rate $R_f$ = 1.00%
- Estimated betas: $\hat\beta_i = \beta_i + e_i$, $e_i \sim N(0, 0.05^2)$
- Sorting basis: expected returns ranked from low to high
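Under those settings, the data-generating step can be sketched as follows (the variable names and seed are mine, not the original simulation's):

```python
# Generate the simulated sample described above.
import numpy as np

rng = np.random.default_rng(42)
k, rm, rf = 2000, 0.06, 0.01

beta = rng.normal(1.0, 0.50, k)                 # true betas: mean 1, sd 0.50
exp_ret = rf + beta * (rm - rf)                 # CAPM expected (= realized) returns
beta_hat = beta + rng.normal(0.0, 0.05, k)      # estimated betas, error sd 0.05

rank = np.argsort(exp_ret)                      # low-to-high ranking, basis for sorting
```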
Test for scenario 1. Run the regression $R_i - R_f = c\,\beta_i (R_m - R_f) + u_i$. With no estimation error, we get $c = 1$, $R^2 = 1$. Essentially a perfect fit.
Test for scenario 2. Run the regression $R_i - R_f = c\,\hat\beta_i (R_m - R_f) + u_i$. We get a coefficient slightly below 1, consistent with the theoretical attenuation factor $\sigma^2_\beta / (\sigma^2_\beta + \sigma^2_e) = 0.50^2 / (0.50^2 + 0.05^2) \approx 0.990$.
Test for scenario 3. Run the regression $R_i - R_f = c_j\,\hat\beta_i (R_m - R_f) + u_i$ within each fractile j.
By setting N = 5, 10, 20, 50, respectively, the coefficients in each group are as follows:
The results are consistent with Berk’s findings. The more groups the stocks are sorted into, the less predictive power beta has on expected returns; the closer j moves to the center of the groups, the more pronounced this effect gets.
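The whole experiment fits in a few lines. Below is a self-contained sketch of the Scenario 3 test; the seed and helper names are mine, so the exact coefficients will differ from any particular run, but the qualitative pattern (slopes falling as N grows, smallest in the middle fractiles) is stable:

```python
# Scenario 3 end to end: sort by expected return, regress within each
# fractile, and watch the slope attenuate. Seed and helper names are mine.
import numpy as np

rng = np.random.default_rng(42)
k, rm, rf = 2000, 0.06, 0.01
beta = rng.normal(1.0, 0.50, k)
ret = rf + beta * (rm - rf)                     # realized = expected returns
beta_hat = beta + rng.normal(0.0, 0.05, k)      # noisy beta estimates

def slope(y, x):
    """OLS slope of y on x (with intercept)."""
    return np.cov(y, x)[0, 1] / np.var(x, ddof=1)

full = slope(ret - rf, beta_hat * (rm - rf))    # Scenario 2 on the full sample

order = np.argsort(ret)                         # sorting basis
results = {}
for N in (5, 10, 20, 50):
    results[N] = [slope((ret - rf)[idx], (beta_hat * (rm - rf))[idx])
                  for idx in np.array_split(order, N)]
    print(N, [round(c, 2) for c in results[N]])
# In expectation, slopes fall as N grows, and middle fractiles fall hardest.
```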