Ex-ante and Ex-post Risk Model – An Empirical Test

Whether you are constructing a quant portfolio or managing portfolio risk, the risk model sits at the heart of the process. A risk model, usually estimated with a sample covariance matrix, has three typical issues:

  1. It may not be positive definite (typically when assets outnumber observations), which means it is not invertible.
  2. It is exposed to extreme values in the sample, which makes it highly unstable through time and easily exploited by the optimizer.
  3. Ex-post tracking error is always larger than ex-ante tracking error when there is a stochastic component in the holdings, which means investors will suffer from unexpected variance, either too large or too small.

Issue 1 is a pure math problem, but issues 2 and 3 are more subtle and more related to each other. A common technique called shrinkage has been devised to address them. The idea is to add more structure to the sample covariance matrix by taking a weighted average of it and a more stable alternative (e.g. a single-factor model or a constant-correlation covariance matrix). Two main considerations are involved in the use of shrinkage: 1. what is the shrinkage target, i.e. the alternative? 2. what is the shrinkage intensity, i.e. the weight assigned to each matrix?
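To make the mechanics concrete, here is a minimal sketch in R, assuming a constant-correlation shrinkage target and a user-supplied intensity (the function name and the default intensity are illustrative; this is not the Ledoit-Wolf optimal weight):

## shrink a sample covariance matrix towards a constant-correlation target;
## ret is a T x N return matrix, lambda the shrinkage intensity in [0, 1]
shrink_cov <- function(ret, lambda = 0.5) {
	S      <- cov(ret)                        ## sample covariance matrix
	sds    <- sqrt(diag(S))
	rbar   <- mean(cov2cor(S)[lower.tri(S)])  ## average pairwise correlation
	target <- rbar * (sds %o% sds)            ## constant-correlation target...
	diag(target) <- diag(S)                   ## ...keeping the sample variances
	(1 - lambda) * S + lambda * target        ## convex combination of the two
}

The Ledoit-Wolf approach replaces the fixed lambda above with an analytically derived optimal intensity.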


I ran several tests to show how the difference between ex-ante and ex-post tracking error varies with different shrinkage targets and intensities.

The test uses 450 stocks that were in the S&P 500 at the end of Oct 2016 and had been listed for at least 10 years. An equal-weighted portfolio is formed using 2 years of weekly data and is rebalanced every month. The shrinkage intensity is varied from 10% to 90% in 10% steps throughout the test, and the spreads between the ex-ante and ex-post variances are recorded each week.

[Figure: spreads between ex-ante and ex-post variances for each shrinkage target and intensity]

As shown above, the Ledoit-Wolf approach (single-factor target, optimal intensity as derived by L&W) produces the smallest estimation error among all approaches tested. Interestingly, the sample covariance matrix shows higher ex-ante risk than ex-post, which contradicts the theory mentioned above. This is possibly because in this test the ex-ante variances stay constant for four weeks between rebalances while the ex-post variances change every week, which amplifies the measured spread if we believe the two should move together over time.


Roy

An Empirical Mean Reversion Test on VIX Futures

The VIX mean-reversion trade gets popular when the market experiences big ups and downs. You hear a lot of talk about how much money people make trading VXX, XIV and their leveraged equivalents. However, is VIX truly mean-reverting, or does it merely seem more lucrative than it is because people only like to talk about it when they make money and keep quiet when they lose?

In this post I use daily returns of the S&P 500 VIX Short-Term Futures Index from December 2005 to August 2015 (2,438 observations) to see whether there is empirical evidence supporting short-term VIX mean reversion (MR). The index is the most suitable vehicle for this test because there is no instrument that tracks VIX spot, and it is the benchmark for VXX and XIV.

The VIX Short-Term Futures Index holds VIX 1-month and 2-month futures contracts and rolls them on a daily basis. Its performance suffers from the contango effect, as commodity futures ETFs do, but that is an inevitable cost in this case.

To find out whether extreme VIX returns lead to strong short-term rebounds, I group all daily returns into deciles and summarize the distributions of the cumulative future returns of each group up to 5 trading days (1 week). If VIX is truly short-term mean-reverting, the future returns following the 1st group (lowest) should be significantly higher than 0 on average, and the returns following the 10th group (highest) significantly lower than 0 on average. Future returns that go beyond the sample time period are recorded as 0.
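A minimal sketch of this grouping step in R, assuming ret holds the daily index returns (the function and variable names are illustrative):

## group daily returns into deciles and collect cumulative forward returns;
## forward returns that run past the end of the sample are recorded as 0
fwd_ret_by_decile <- function(ret, horizon = 5) {
	n      <- length(ret)
	decile <- cut(ret, quantile(ret, probs = seq(0, 1, 0.1)),
		labels = 1:10, include.lowest = TRUE)
	fwd <- sapply(1:n, function(i) {
		idx <- (i + 1):(i + horizon)
		idx <- idx[idx <= n]
		if (length(idx) == 0) 0 else prod(1 + ret[idx]) - 1
	})
	split(fwd, decile)  ## list of forward returns, one element per decile
}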

[Figure: distributions of cumulative future returns for each daily-return decile]

As shown above, group 1 and group 10 are the two groups to focus on. If someone could systematically make money by putting on MR trades in VIX, we should see the next-day (or next 2-, 3-, maybe 5-day) returns following these two groups distributed like this:

[Figure: hypothetical return distributions for groups 1 and 10 under mean reversion]

The actual data look like this:

[Figures: distributions of day-1 and day-5 cumulative returns for groups 1 and 10]

It’s hard to spot any major difference between groups 1 and 10 on the next day. However, by the 5th day, cumulative returns in group 1 largely outperform group 10. To better illustrate, I perform t-tests on both groups from day 1 to day 5, as reported below (H0: average return equals 0).
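The tests themselves are one-liners in R; here is a sketch using the illustrative helper above:

## t-test of H0: mean forward return = 0, for groups 1 and 10, horizons 1-5 days
for (h in 1:5) {
	groups <- fwd_ret_by_decile(ret, horizon = h)
	for (g in c('1', '10')) {
		tt <- t.test(groups[[g]], mu = 0)
		cat(sprintf('day %d, group %s: t = %.2f, p = %.3f\n',
			h, g, tt$statistic, tt$p.value))
	}
}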

[Table: t-test results for groups 1 and 10, day 1 to day 5]

As it turns out, future returns in group 1 systematically outperform group 10 within the next 5 days. The results are not blessed with overwhelmingly strong t-stats and p-values, but it’s hard to argue that we are looking at random noise here. Additionally, this test uses end-of-day prices; intraday movements, which possibly constitute the bulk of VIX MR trades, are completely ignored. Therefore the results we see are likely a muted version of market reality.

Roy

Constructing an Alpha Portfolio with Factor Tilts

In this post I’d like to show an example of constructing a monthly-rebalanced long-short portfolio to exploit alpha signals while controlling for factor exposures.

This example covers the time period between March 2005 and 2014. I use 477 stocks from the S&P 500 universe (data source: Quandl) and the Fama-French 3 factors (data source: Kenneth French’s website) to conduct backtests. My alpha signal is simply the cross-sectional price level of all stocks – overweighting stocks whose price level is low and underweighting those whose price level is high. This effectively targets the liquidity factor, so it worked out pretty well during the 2008 crisis. But that’s beside the point, for this post is more about the process and techniques than a skyward PnL curve.

At each rebalance, I rank all stocks by the dollar price of their shares, then assign weights inversely to rank, i.e. expensive stocks get lower weights and vice versa. This gives me naive exposure to my alpha signal. However, my strategy is probably also exposed to common factors in the market. At the end of the day, I could have a working alpha idea and bad performance driven by unintended factor bets at the same time. This situation calls for a technique that lets me control factor exposures while still keeping the portfolio close to the naive alpha bets.
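The post doesn’t pin down the exact weighting scheme, so here is one plausible reading of the inverse-rank weights in R (the demeaning and gross-exposure scaling are my assumptions):

## naive long-short weights from inverse price ranks:
## cheap stocks get positive weights, expensive ones negative
naive_weights <- function(price) {
	score <- rank(-price)         ## cheapest stock gets the highest score
	w     <- score - mean(score)  ## demean: long cheap, short expensive
	w / sum(abs(w))               ## scale gross exposure to 100%
}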

Good news: the basic quadratic programming routine is just the tool for the job – its objective function can minimize the sum of squared weight differences from the naive portfolio, while linear constraints pin the factor exposures where we want them. For this study I backtested 3 scenarios: a naive alpha portfolio, a factor-neutral portfolio, and a portfolio that is neutral on the MKT and HML factors but tilts towards SMB (with a desired factor loading of 0.5). As an example, the chart below shows the expected factor loadings of each of the 3 backtests at the 50th rebalance (84 in total). Regression coefficients are estimated with 1 year of weekly returns.
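Here is a minimal sketch of that QP in R with solve.QP from the quadprog package, where B is the n x k matrix of estimated factor loadings and e_target the desired exposures (the function name is illustrative):

require(quadprog)

## minimize ||w - w_naive||^2 subject to B'w = e_target;
## solve.QP minimizes 0.5 * w'Dw - d'w subject to A'w >= b (first meq rows as
## equalities), so D = I and d = w_naive recover the squared-distance objective
## up to a constant
tilt_weights <- function(w_naive, B, e_target) {
	solve.QP(Dmat = diag(length(w_naive)), dvec = w_naive,
		Amat = B, bvec = e_target, meq = length(e_target))$solution
}

## e.g. neutral on MKT and HML, SMB loading at 0.5:
## w <- tilt_weights(w_naive, B, c(MKT = 0, SMB = 0.5, HML = 0))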

[Figure: expected factor loadings of the 3 backtests at the 50th rebalance]

After the backtests, I have 3 monthly return series, one per scenario. The tables below show the results of regressing these returns on the MKT, SMB and HML factors. All three strategies yield similar monthly alpha, but the neutral portfolio significantly mitigates the factor loadings of the naive strategy, while the size-tilt portfolio keeps material exposure to the SMB factor.

[Table: factor regressions of the three strategies’ monthly returns]

The tables below summarize the annualized performance of these backtests. While the neutralized portfolio generates the lowest annualized alpha, it ranks highest in terms of information ratio.

[Table: annualized performance and information ratios of the three backtests]

Interpretation: the naive and size-tilt portfolios get penalized for having more of their returns driven by factor exposures, whether unintended or intentional. The neutral portfolio, with slightly lower returns, earns a better information ratio by representing the “truer” performance of this particular alpha idea.

The idea above can be extended to multiple alpha strategies and dozens of factors, or even hundreds if the universe is large enough to make that feasible. The caveat is that there is such a thing as too many factors, and most of them don’t last in the long run (Hwang and Lu, 2007). It’s just not that easy to come across something that carries both statistical and economic significance.

Roy

Non-linear Twists in Stock & Bond Correlation

Stocks and bonds are negatively correlated. Translation: they must move against each other most of the time. Intuitively, stocks bear higher risk than bonds, so investors go to stocks when they want to take more risk and flee to bonds when they feel a storm coming. The numbers tell the same story, too – the correlation coefficient between SPY and TLT from 2002 to 2015 is -0.42, and the year-over-year correlations of daily returns are:

[Figure: year-over-year correlation of SPY and TLT daily returns]

However, this effect was very weak from 2004 to 2006. This makes sense because in a credit expansion like that, it was hard for any asset class to go down (except for cash, of course).

But this observation suggests that the conventional stock & bond correlation might be conditional or even deceptive. One might ask: is this stock & bond relationship significantly different in bull and bear markets? Does it depend on market returns? Or does it just depend on market direction?

To keep it simple, I will stick to SPY and TLT daily returns. If I split the data into bull (2002-2007 & 2012-2015) and bear (2008-2011) periods, divide each into two groups (market up & market down), then dice each group by quantiles of market returns, I get:
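Here is a sketch of that dicing logic in R, assuming spy and tlt are aligned numeric vectors of daily returns (names are illustrative):

## correlation of SPY vs TLT within quantile buckets of the market return,
## computed separately for up days and down days
cor_by_bucket <- function(spy, tlt, n_buckets = 5) {
	dir <- ifelse(spy >= 0, 'up', 'down')
	lapply(split(seq_along(spy), dir), function(idx) {
		breaks <- quantile(spy[idx], probs = seq(0, 1, length.out = n_buckets + 1))
		bucket <- cut(spy[idx], breaks, include.lowest = TRUE)
		sapply(split(idx, bucket), function(i) cor(spy[i], tlt[i]))
	})
}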

[Figures: SPY/TLT correlations by return quantile, bull vs. bear periods]

The graphs show that these two assets tend to move sharply against each other when the market moves strongly up or down. This effect also seems more pronounced when a bull market is having an up day or a bear market is having a down day. But there is nothing significantly different between the bull and bear groups.

Next, instead of splitting the data into bull & bear periods, I’ll just divide it by market direction, then by quantile of performance.

[Figure: SPY/TLT correlations by market direction and return quantile]

This graph clearly shows that stocks & bonds mostly move against each other only when the market is having an extreme up or down day, whether in a bull or bear market. Of course, one could argue that this is a self-fulfilling prophecy because big correlation coefficients feed on big inputs (large market movements), but in the chart the correlation coefficients do not change proportionally across quantiles, which confirms a non-linear relationship.

Roy

A New Post

First of all, apologies to anyone who was expecting new posts or left a comment here and didn’t get a reply from me. There were quite a few changes in my life and I simply had to move my time and energy away from blogging. Now I’m trying to get back to it.

For various reasons I will avoid writing about specific investment strategies, factor descriptors or particular stocks. I will write more about my thoughts (or thoughts stolen from people smarter than me) on generic techniques and theories. In an attempt to be rigorous (or, more realistically, less sloppy), I will try to stay on one main track: hypothesis -> logical explanation -> supporting data or observations. This time I will use math and programming to make sense of things instead of just getting results on paper.

Roy

Optimizing Ivy Portfolio

Earlier I posted a simple live version of a GTAA strategy. It demonstrated the effectiveness of the Ivy Portfolio (M. Faber, 2009) rationale in recent markets with a small sample. Again, the rationale is very simple and powerful: screen a wide range of asset classes each week/month, then invest in those that have shown the strongest momentum. Last time I tracked 39 ETFs’ 9-month SMAs and allocated portfolio assets equally to the top 8. Although I got pretty good results, the sample was relatively small and those ETFs are quite different in terms of time of inception, liquidity, tracking error, etc.

And above all, equal allocation seems a bit, for lack of a better word, boring. This time I want to use a more general sample to see whether we can improve on it by implementing some of the optimization strategies shown in my previous post, Backtesting Portfolio Optimization Strategies.

Some equipment checks before we launch the test:

Asset Classes:
1. SPX Index: S&P 500 LargeCap Index
2. MID Index: S&P 400 MidCap Index
3. SML Index: S&P 600 SmallCap Index
4. MXEA Index: MSCI EAFE Index
5. MXEF Index: MSCI Emerging Markets Index
6. LBUSTRUU Index: Barclays US Agg Total Return Value Unhedged USD (U.S. investment grade bond)
7. XAU Curncy: Gold/USD Spot
8. SPGSCI Index: Goldman Sachs Commodity Index
9. DJUSRE Index: Dow Jones U.S. Real Estate Index
10. GBP Curncy: GBP/USD Spot
11. EUR Curncy: EUR/USD Spot
12. JPY Curncy: JPY/USD Spot
13. HKD Curncy: HKD/USD Spot

Rules:
1. Rebalance monthly
2. Rank by 12-month SMA; invest in the top 3 (see the sketch after this list)
3. For each asset, minimum weight = 5%; maximum weight = 95%
4. Use CVaR optimization to construct the portfolio each month; confidence level = 1%
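As a sketch of rule 2, here is one common reading of the SMA ranking in R, assuming prices is a monthly zoo object with one column per asset (using the price-to-SMA ratio as the momentum score is my assumption, since the rule doesn’t spell it out):

require(zoo)

## rank assets by how far the latest price sits above its 12-month SMA,
## then pick the top 3
select_top <- function(prices, n_sma = 12, top = 3) {
	sma   <- rollmeanr(prices, k = n_sma)  ## right-aligned simple moving average
	score <- coredata(tail(prices, 1) / tail(sma, 1)) - 1
	names(sort(score[1, ], decreasing = TRUE))[1:top]
}

The CVaR optimization itself is presumably the same model = 'cvar' machinery demonstrated in the backtesting post.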


Fortunately, our test didn’t fall apart and crash into the Pacific Ocean. The CVaR model seems to have done a good job improving the original strategy. However, it has to be pointed out that not all optimization models beat an equal-weighted one. As demonstrated below, the minimum-variance and maximum-Sharpe-ratio models didn’t make much difference.

Roy

Backtesting Portfolio Optimization Strategies

Recently I have been developing some handy tools to help backtest and analyse user-specified portfolio strategies. Now I’d like to do a quick demonstration by testing four portfolio strategies (minimum variance, maximum Sharpe ratio, minimum CVaR, and equal-weighted) that rebalance weekly from 1990 to 2012.

Instead of specific stocks or ETFs, 10 S&P sector indices by GICS will be used as hypothetical assets. As aggregated equity trackers, they are similar in terms of market efficiency, liquidity, macro environment, etc., and by using them we eliminate data-quality issues such as IPOs and survivorship bias. Moreover, most ETFs emerged only several years ago; with equity indices one can go back much further for a bigger sample.

First, I got the data from Bloomberg and load it from a .csv file. Please note that in order to turn the raw data into a time-series object (zoo), we need to convert the Excel date serial numbers stored in the .csv file into Date objects.

### load packages
require(TTR)
require(tseries)
require(quadprog)
require(Rglpk)
require(quantmod)

raw_data <- read.csv(file.choose(), header = TRUE) ## open the file directly
raw_data[, 1] <- as.Date(raw_data[, 1], origin = '1899-12-30') ## convert numbers into dates
raw_data <- zoo(raw_data[, -1], order.by = raw_data[, 1]) ## create a zoo object

With the data properly formatted, we can perform backtests based on our strategies.

minvar_test <- backtest(raw_data, period = 'weeks', hist = 52, secs = 10, model = 'minvar',
	reslow = .1 ^ 10, reshigh = 1)
sharpe_test <- backtest(raw_data, period = 'weeks', hist = 52, secs = 10, model = 'sharpe')
cvar_test <- backtest(raw_data, period = 'weeks', hist = 52, secs = 10, model = 'cvar', alpha = .01)
equal_test <- backtest(raw_data, period = 'weeks', hist = 52, secs = 10, model = 'eql')

I’m not posting the backtest function here because it’s quite bulky and has other optimization functions nested inside. But as you can see, it just takes a zoo object, asks what strategy the user wants to test, and performs a backtest accordingly. By setting the argument daily.track = TRUE, you can also track the portfolio’s position shifts on a daily basis, but due to time and space constraints I won’t show that this time either.

Here are the PnL curves of the tests.

And to see their position transitions, we need another function.

## returns a transition (stacked-weight) map of a backtest strategy;
## allo: weight matrix, one row per rebalance, one column per asset
transition <- function(allo, main = NA) {
	cols = rainbow(ncol(allo))
	x <- rep(1, nrow(allo))

	plot(x, col = 'white', main = main, ylab = 'weight', ylim = c(0, 1),
	xlim = c(-nrow(allo) * .2, nrow(allo)))

	polygon(c(0, 0, 1:nrow(allo), nrow(allo)), c(0, 1, x, 0),
	col = cols[1], border = FALSE)

	## overpaint from the bottom: polygon i covers everything below the cumulative
	## line of assets 1..(i - 1), leaving a band of height allo[, i] for asset i
	for (i in 2:ncol(allo)) {
		polygon(c(0, 0, 1:nrow(allo), nrow(allo)), c(0, 1 - sum(allo[1, 1:(i - 1)]),
		x - apply(allo[, 1:(i - 1), drop = FALSE], 1, sum), 0), col = cols[i], border = FALSE)
	}

	legend('topleft', colnames(allo), col = cols[1:ncol(allo)], pch = 15,
	text.col = cols[1:ncol(allo)], cex = 0.7, bty = 'n')
}
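
## note: the *_allo weight matrices below are assumed to come from the backtest
## objects above, e.g. cvar_allo <- cvar_test$allo (hypothetical accessor; the
## backtest function itself isn't shown in this post)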

## visualize the transitions
par(mfrow = c(4, 1))
transition(cvar_allo, main = 'CVaR Portfolio Transition')
transition(minvar_allo, main = 'Min-Variance Portfolio Transition')
transition(sharpe_allo, main = 'Sharpe Portfolio Transition')
transition(equal_allo, main = 'Equally Weighted Portfolio Transition')

Scalability was taken into consideration when these functions were built. I look forward to nesting more strategies into the existing functions.

Roy