Non-linear Twists in Stock & Bond Correlation

Stocks and bonds are said to be negatively correlated. Translation: they move against each other most of the time. Intuitively this makes sense: stocks bear higher risk than bonds, so investors move into stocks when they want to take more risk and flee to bonds when they feel a storm coming. The numbers tell the same story, too: the correlation coefficient between SPY and TLT daily returns from 2002 to 2015 is -0.42, and the year-over-year correlations on daily returns are:
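A minimal sketch of the year-by-year calculation, with simulated returns standing in for the actual SPY/TLT series (the volatilities and the built-in -0.4 relationship below are my own illustrative numbers, not the real data):

```r
set.seed(42)
## simulated stand-ins for SPY and TLT daily returns; the real series
## would come from a data vendor, so these numbers are illustrative only
dates <- seq(as.Date('2002-01-01'), as.Date('2015-12-31'), by = 'day')
spy <- rnorm(length(dates), 0, 0.010)
tlt <- -0.4 * spy + rnorm(length(dates), 0, 0.009)  ## bake in negative correlation

overall <- cor(spy, tlt)  ## full-sample correlation coefficient

## year-over-year correlation on daily returns
yearly <- sapply(split(seq_along(dates), format(dates, '%Y')),
                 function(idx) cor(spy[idx], tlt[idx]))
round(yearly, 2)
```

With real price data one would first convert prices to returns, but the grouping-by-year logic is the same.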


However, this effect was very weak from 2004 to 2006. This makes sense: in a credit expansion like that one, it was hard for any asset class to go down (except cash, of course).

But this observation suggests that the conventional stock & bond correlation might be conditional or even deceptive. One might ask: is this stock & bond relationship significantly different in bull and bear markets? Does it depend on the magnitude of market returns, or just on market direction?

To keep it simple, I will stick to SPY and TLT daily returns. If I split the data into bull (2002-2007 & 2012-2015) and bear (2008-2011) periods, divide each into two groups (market up days & market down days), then dice each group by quantiles of market returns, I get:
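The direction-then-quantile bucketing can be sketched as follows. This uses simulated returns in place of the real SPY/TLT series, and the quantile count of 4 is an arbitrary choice of mine:

```r
set.seed(1)
## simulated daily returns standing in for SPY and TLT
spy <- rnorm(3000, 0, 0.010)
tlt <- -0.4 * spy + rnorm(3000, 0, 0.009)

## correlation within quantile buckets of market performance
bucket_cor <- function(x, y, k = 4) {
  q <- cut(x, quantile(x, probs = seq(0, 1, length.out = k + 1)),
           include.lowest = TRUE, labels = FALSE)
  sapply(split(seq_along(x), q), function(idx) cor(x[idx], y[idx]))
}

up <- spy > 0                         ## split by market direction first
cor_up   <- bucket_cor(spy[up],  tlt[up])
cor_down <- bucket_cor(spy[!up], tlt[!up])
```

Splitting further into bull and bear sub-periods just means running `bucket_cor` on the corresponding date ranges.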

[Figures: correlation by market-return quantile in bull and bear periods]

The graphs show that these two assets tend to move sharply against each other when the market is moving dramatically in either direction. The effect also seems more pronounced when a bull market has an up day or a bear market has a down day. But there is nothing significantly different between the bull and bear groups.

Next, instead of splitting the data into bull & bear periods, I'll just divide it by market direction, then by quantile of performance.


This graph clearly shows that stocks & bonds mostly move against each other only when the market is having an extremely up or down day, whether in a bull or a bear market. Of course, one could argue this is a self-fulfilling prophecy, since big correlation coefficients feed on big inputs (large market movements), but the correlation coefficients in the chart do not change proportionally across quantiles, which confirms a non-linear relationship.



A Case of Ambiguous Definition

“Managers of government pension plans counter that they have longer investment horizons and can take greater risks. But most financial economists believe that the risks of stock investments grow, not shrink, with time.” – WSJ

This statement mentions "risk" twice, but the two uses actually mean different things. The second sentence is therefore correct on its own, but it cannot be used to reject the first.

The first "risk" is timeless. The way it is calculated always scales it down to one time unit, the interval between two consecutive data points in the sample.

Risk_1 = \sigma^2 = \frac{1}{N} \sum_{i=1}^{N}(R_i - \bar{R})^2

When "risk" is defined this way, the returns of a risky investment A and a less risky investment B look like this:


The second "risk" is the same quantity scaled up for N time units. It is not how variance is defined, but people use it because it has a practical interpretation (it adjusts for different time horizons).

Risk_2 = N * \sigma^2 = \sum_{i=1}^{N}(R_i - \bar{R})^2

Under this definition, the possible PnL paths for A and B look like this:


A's Monte Carlo result is wider than B's; both A's and B's "risk" by the second definition increases through time, while by the first definition it never changes.
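A small Monte Carlo sketch makes the point concrete (the volatilities for A and B below are made up): the cross-sectional variance of cumulative PnL grows roughly as N * sigma^2 (Risk_2), while the per-period sigma^2 (Risk_1) never changes.

```r
set.seed(7)
N <- 250; paths <- 5000
sigma_A <- 0.02; sigma_B <- 0.01      ## made-up volatilities: A riskier than B

## cumulative PnL paths: one column per simulated path
pnl_A <- apply(matrix(rnorm(N * paths, 0, sigma_A), N, paths), 2, cumsum)
pnl_B <- apply(matrix(rnorm(N * paths, 0, sigma_B), N, paths), 2, cumsum)

## Risk_2 at horizon n: cross-sectional variance of cumulative PnL,
## which is approximately n * sigma^2 and so grows with the horizon
var_A_half <- var(pnl_A[N / 2, ])
var_A_full <- var(pnl_A[N, ])
```

Here `var_A_full` comes out near N * sigma_A^2 and roughly double `var_A_half`, while the variance of each single-period increment stays at sigma_A^2 throughout.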

I have intentionally avoided mentioning time diversification because doing so would probably make things more confusing. For more details on this please see Chung, Smith and Wu (2009).


A New Post

First of all, apologies to anyone who was expecting new posts or left a comment here but didn't get a reply from me. There have been quite a few changes in my life, and I simply had to move my time and energy away from blogging. Now I'm trying to get back to it.

For reasons I won't go into, I will avoid writing about specific investment strategies, factor descriptors or particular stocks. I will write more about my thoughts (or thoughts stolen from people smarter than me) on generic techniques and theories. In an attempt to be rigorous (or, more realistically, less sloppy), I will try to stay on one main track: hypothesis -> logical explanation -> supporting data or observations. This time I will use math and programming to make sense of things instead of just getting results on paper.


Neural Network Algorithm

This post is a token of gratitude to Andrew Ng at Stanford University. I've learned a lot of fascinating concepts and practical skills from his Machine Learning course on Coursera. Among these algorithms the neural network is my favourite, so I decided to convert this class assignment, built in Octave, into functions in R. I will post my code below, but if anyone would like to play with an exported package with documentation, please contact me by email and I will send you the tarball. The code is also available in my GitHub repository.

Compared to the more comprehensive package "neuralnet" on CRAN, this is a lite version that contains all the basic components: a single hidden layer, random initialization, sigmoid activation, a regularized cost function, forward propagation and backward propagation. Originally it only worked as a classifier, but I modified it a bit to create a similar algorithm that handles absolute values. I'm not sure this is best practice for that kind of task, but it worked as expected.

This is what the algorithm looks like, along with its source of inspiration, an actual human neuron. By mimicking the way a human brain (the most powerful computer in the universe) adjusts itself to recognize patterns and estimate the future, an algorithm should be able to solve similar tasks.

For the classifier, the cost function that tracks the forward propagation process is defined below.

J(\Theta) = \frac{1}{m}\sum_{i=1}^{m}\sum_{k=1}^{K} \left [ -y_{k}^{(i)}\log((h_{\Theta}(x^{(i)}))_{k})-(1-y_{k}^{(i)})\log(1-(h_{\Theta}(x^{(i)}))_{k}) \right ]

where J is the total cost we want to minimize, m is the number of training examples, and K is the number of labels (for classification). When training on the ith example with label k and weights \Theta , all corresponding correct output values are converted to 1 and the rest become 0, so -y_{k}^{(i)} and (1-y_{k}^{(i)}) work as switches for the two scenarios. The function (h_{\Theta}(x^{(i)}))_{k}=g(z) calculates the training output as z=\Theta^{T}x^{(i)} , then processes it with the sigmoid function g(z)=\frac{1}{1+e^{-z}} to bound it between 0 and 1 as the likelihood of being 1 (i.e., the probability of being true).
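The sigmoid and the (still unregularized) cost above can be written compactly in R. This is a minimal sketch of the formulas, not the packaged code:

```r
sigmoid <- function(z) 1 / (1 + exp(-z))   ## g(z) above

## unregularized cross-entropy cost: h and y are m x K matrices of
## sigmoid outputs and 0/1 labels respectively
nn_cost <- function(h, y) {
  m <- nrow(y)
  sum(-y * log(h) - (1 - y) * log(1 - h)) / m
}
```

The two `-y` and `-(1 - y)` terms are exactly the switches described above: only one of them is non-zero for any given label.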

To avoid extreme bias caused by large weights, we add a regularization term to the cost function, defined as

\frac{\lambda}{2m}\left [ \sum_{j=1}^{h}\sum_{k=1}^{n}(\Theta_{j,k}^{(1)})^{2}+\sum_{j=1}^{o}\sum_{k=1}^{h}(\Theta_{j,k}^{(2)})^{2} \right ]

where n , h and o are the numbers of nodes in the input, hidden and output layers respectively.

During the backward propagation process, we push the calculated outputs backwards through the network and accumulate the gradient from each layer. Because there is only one hidden layer, we can calculate the error of the output layer as \delta^{(3)}=a_{k}^{(3)}-y_{k} and of the hidden layer as \delta^{(2)}=(\Theta^{(2)})^{T}\delta^{(3)}{\ast}g^{'}(z^{(2)}) , then accumulate \Delta^{(l)}=\Delta^{(l)}+\delta^{(l+1)}(a^{(l)})^{T} to get the total unregularized gradient \frac{\partial }{\partial \Theta^{(l)}_{ij}}J(\Theta)=\frac{1}{m}\Delta^{(l)}_{ij} . Finally, we regularize it just as we did the cost function, by adding \frac{\lambda}{m}\Theta^{(l)}_{ij} .
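The forward and backward passes can be sketched directly in R. This is a minimal illustration rather than the exported package, and the layout (bias weights in the first column of each Theta matrix) is my own assumption:

```r
sigmoid <- function(z) 1 / (1 + exp(-z))

## one forward + backward pass for a single-hidden-layer network
## X: m x n inputs, Y: m x K 0/1 labels
## Theta1: h x (n+1), Theta2: K x (h+1), bias weights in column 1
nn_grad <- function(Theta1, Theta2, X, Y) {
  m  <- nrow(X)
  A1 <- cbind(1, X)                     ## add bias unit
  Z2 <- A1 %*% t(Theta1)
  A2 <- cbind(1, sigmoid(Z2))
  A3 <- sigmoid(A2 %*% t(Theta2))       ## network output h_Theta(x)

  D3 <- A3 - Y                          ## delta^(3)
  D2 <- (D3 %*% Theta2)[, -1] * sigmoid(Z2) * (1 - sigmoid(Z2))  ## delta^(2)

  list(grad1 = t(D2) %*% A1 / m,        ## Delta^(1) / m
       grad2 = t(D3) %*% A2 / m,        ## Delta^(2) / m
       cost  = sum(-Y * log(A3) - (1 - Y) * log(1 - A3)) / m)
}

## sanity check on toy data: one small gradient-descent step lowers the cost
set.seed(3)
X  <- matrix(rnorm(20), 10, 2)
Y  <- matrix(rbinom(10, 1, 0.5), 10, 1)
T1 <- matrix(rnorm(9, 0, 0.1), 3, 3)
T2 <- matrix(rnorm(4, 0, 0.1), 1, 4)
g  <- nn_grad(T1, T2, X, Y)
g2 <- nn_grad(T1 - 0.1 * g$grad1, T2 - 0.1 * g$grad2, X, Y)
```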

The absolute-value algorithm is very similar. Instead of using a 0/1 (True/False) switch to train all examples with the same label at once, it trains one example at a time and calculates the cost as the squared error of each prediction, (y^{(i)}-h_{\Theta}(x^{(i)}))^{2} .

Now we can wrap up a cost function and a gradient function and test them by running optim() from the default R package "stats". In this case I chose the L-BFGS-B method because it utilizes everything we have in hand and seems to learn faster than most of the others.
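As a toy illustration of the optim() call (the real nn cost/gradient pair is swapped for a simple quadratic here, purely to show the plumbing):

```r
## minimize a smooth quadratic with L-BFGS-B, supplying both the cost (fn)
## and its gradient (gr), exactly the way the cost/gradient pair is plugged in
cost_fn <- function(w) sum((w - c(1, -2))^2)
grad_fn <- function(w) 2 * (w - c(1, -2))

fit <- optim(par = c(0, 0), fn = cost_fn, gr = grad_fn, method = 'L-BFGS-B')
fit$par  ## converges near (1, -2)
```

For the network, `par` would hold the unrolled Theta matrices and `fn`/`gr` would compute the regularized cost and gradient.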

The classifier first. Three blocks of data are randomly generated and labelled for training. Because we intentionally created them for testing purposes, it isn't necessary to split them for cross-validation and out-of-sample testing, in contrast to real-world use.

Then for absolute values. I reset the y values for this test and tried several values of the regularization parameter lambda and the number of iterations to get the results below.


Calculus and statistics: different paradigms of thinking

One day I got a question from an academic star with an almost perfect GPA at our university: "I did everything my professor asked us to do. I ran regressions on PE, PB ratios and so on for prediction, so why is my target price for Goldman Sachs still around $300? That's way off from everybody else in the market. I don't understand." I stared at him for more than 10 seconds, speechless.

As Arthur Benjamin proposed in his talk (link below), calculus has sat at the top of the math pyramid for too long; it's time for statistics to take over. This matters because, in general, our minds are skewed too far towards the calculus-style, deterministic way of thinking. For one thing, it is much more intuitive for the human mind to understand things that are either right or wrong; for another, people who claim they know everything and can "prove" it by making predictions always draw bigger audiences than those who say "this might be true, I could be wrong, but it's the closest we can get". From my own experience, our education systems are mostly designed to reward people who produce desired certainties, not people who comprehend things sensibly. Consequently, students pick up the habit of "sacrificing reality for elegance" (Paul Wilmott), and the line between doing scientific research and confirming collective bias gets blurred. Einstein was proven wrong about "God doesn't play dice", but regrettably that doesn't stop ordinary people from believing it's possible to eliminate uncertainty in their own lives.

On the other hand (not to say statistics is better), statistics does focus more on observation and self-evaluation. Its purpose is not to find the one perfect answer; it's to see things from as many dimensions as possible (a very unnatural process for the human brain). One accurate prediction doesn't make a statistical model work; a long enough series of predictions, made under relatively bias-free conditions with an acceptable level of error, does (now I kind of understand why people don't like it…). Again, I don't think it's better than calculus, but I do think it's the key to problems such as "if I did it right, how come I'm still losing money?"

Recommended Reading: Fooled by Randomness (N.Taleb)

Recommended Video: Arthur Benjamin’s formula for changing math education

Optimizing Ivy Portfolio

A while ago I posted a simple live version of a GTAA strategy. It demonstrated the effectiveness of the Ivy Portfolio (M. Faber, 2009) rationale in the recent market, with a small sample. Again, the rationale is simple and powerful: screen a wide range of asset classes each week/month, then invest in those that have shown the strongest momentum. Last time I tracked 39 ETFs' 9-month SMAs and allocated portfolio assets equally to the top 8. Although I got pretty good results, the sample was relatively small and those ETFs are quite different in terms of inception date, liquidity, tracking error, etc.

And above all, equal allocation seems a bit, for lack of a better word, boring. This time I want to use a more general sample to see how we can improve this by implementing some optimization strategies I’ve shown in my previous post Backtesting Portfolio Optimization Strategies.

Some equipment check before we launch the test.

Asset Classes:
1. SPX Index: S&P 500 LargeCap Index
2. MID Index: S&P 400 MidCap Index
3. SML Index: S&P 600 SmallCap Index
4. MXEA Index: MSCI EAFE Index
5. MXEF Index: MSCI Emerging Markets Index
6. LBUSTRUU Index: Barclays US Agg Total Return Value Unhedged USD (U.S. investment grade bond)
7. XAU Curncy: Gold/USD Spot
8. SPGSCI Index: Goldman Sachs Commodity Index
9. DJUSRE Index: Dow Jones U.S. Real Estate Index
10. GBP Curncy: GBP/USD Spot
11. EUR Curncy: EUR/USD Spot
12. JPY Curncy: JPY/USD Spot
13. HKD Curncy: HKD/USD Spot

Rules:
1. Rebalance monthly
2. Rank assets by 12-month SMA; invest in the top 3
3. For each selected asset, minimum weight = 5%; maximum weight = 95%
4. Use CVaR optimization to construct the portfolio each month; confidence level = 1%
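The ranking step (rules 1-2) can be sketched as below, with simulated monthly prices standing in for the Bloomberg series. The signal used here (latest price relative to its trailing 12-month simple moving average) is one common reading of the SMA rule, so treat it as an assumption:

```r
set.seed(11)
## 24 months of simulated prices for 13 hypothetical assets (stand-ins
## for the 13 indices listed above)
rets   <- matrix(rnorm(13 * 24, 0.005, 0.04), 24, 13)
prices <- apply(rets, 2, function(r) 100 * cumprod(1 + r))

## SMA signal: latest price relative to its trailing 12-month average
sma12  <- colMeans(prices[13:24, ])
signal <- prices[24, ] / sma12

top3 <- order(signal, decreasing = TRUE)[1:3]  ## invest in the top 3
```

The CVaR optimizer then assigns weights within the 5%/95% bounds to those three assets each month.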

Fortunately, our test didn't fall apart and crash into the Pacific Ocean. The CVaR model seems to have done a good job improving the original strategy. However, it has to be pointed out that not all optimization models beat an equal-weighted one. As demonstrated below, the minimum-variance and maximum-Sharpe-ratio models didn't make much difference.


Backtesting Portfolio Optimization Strategies

Recently I've been trying to develop some handy tools to help backtest and analyse user-specified portfolio strategies. I'd like to do a quick demonstration by testing four portfolio strategies (minimum variance, maximum Sharpe ratio, minimum CVaR, and equal-weighted) that rebalance weekly from 1990 to 2012.

Instead of specific stocks or ETFs, 10 S&P sector indices by GICS will be used as hypothetical assets. As aggregated equity trackers, they are similar in terms of market efficiency, liquidity, macro environment, etc., and using them eliminates factors that jeopardize data quality, such as IPOs or survivorship bias. Moreover, most ETFs emerged only several years ago; with equity indices one can go back much further for a bigger sample.

First, I got the data from Bloomberg and loaded it from a .csv file. Note that the dates in the .csv file are stored as spreadsheet serial numbers, so to build a time-series object (zoo) we need to convert them into Date objects.

### load packages
library(zoo)

raw_data <- read.csv(file.choose(), header = TRUE) ## pick the file interactively
raw_data[, 1] <- as.Date(raw_data[, 1], origin = '1899-12-30') ## convert serial numbers into dates
raw_data <- zoo(raw_data[, -1], order.by = raw_data[, 1]) ## create a zoo object

With the data properly formatted, we can perform backtests based on our strategies.

minvar_test <- backtest(raw_data, period = 'weeks', hist = 52, secs = 10, model = 'minvar',
	reslow = .1 ^ 10, reshigh = 1)
sharpe_test <- backtest(raw_data, period = 'weeks', hist = 52, secs = 10, model = 'sharpe')
cvar_test <- backtest(raw_data, period = 'weeks', hist = 52, secs = 10, model = 'cvar', alpha = .01)
equal_test <- backtest(raw_data, period = 'weeks', hist = 52, secs = 10, model = 'eql')

I'm not posting the backtest function here because it's quite bulky and has other optimization functions nested inside. But as you can see, what it does is take a zoo object, ask which strategy the user wants to test, and perform a backtest accordingly. By setting the argument daily.track = TRUE, you can also track the portfolio's position shifts on a daily basis, but due to time and space constraints I won't show that this time either.

Here are the PnL curves of the tests.

And to see their position transitions, we need another function.

## returns a transition map of a backtested strategy
transition <- function(allo, main = NA) {
	cols <- rainbow(ncol(allo))
	x <- rep(1, nrow(allo))

	plot(x, col = 'white', main = main, ylab = 'weight', ylim = c(0, 1),
	     xlim = c(-nrow(allo) * .2, nrow(allo)))

	## bottom layer fills the whole area
	polygon(c(0, 0, 1:nrow(allo), nrow(allo)), c(0, 1, x, 0),
	        col = cols[1], border = FALSE)

	## stack each remaining asset's weight on top
	for (i in 2:ncol(allo)) {
		polygon(c(0, 0, 1:nrow(allo), nrow(allo)), c(0, 1 - sum(allo[1, 1:i]),
		        x - apply(allo[, 1:i], 1, sum), 0), col = cols[i], border = FALSE)
	}

	legend('topleft', colnames(allo), col = cols[1:ncol(allo)], pch = 15,
	       text.col = cols[1:ncol(allo)], cex = 0.7, bty = 'n')
}

## visualize the transitions (the *_allo objects are the weight matrices
## produced by the backtests above)
par(mfrow = c(4, 1))
transition(cvar_allo, main = 'CVaR Portfolio Transition')
transition(minvar_allo, main = 'Min-Variance Portfolio Transition')
transition(sharpe_allo, main = 'Sharpe Portfolio Transition')
transition(equal_allo, main = 'Equally Weighted Portfolio Transition')

Scalability was taken into consideration when these functions were built. I look forward to nesting more strategies into the existing functions.