FOR MOST PENSION funds, a persistent conundrum is what to do about their strategic asset allocation when markets show considerable long-term momentum or extreme points of valuation.
You need only go back to March 2003, when most US pension funds seemed dangerously under-funded, to witness one of the most heated debates on the topic. Up until that point the majority of US funds had become adherents of the fixed asset allocation strategy of long-term investing.
That strategy was predicated on the view that if investment managers were failing at actively timing the asset allocation decision then surely a prudent policy was to keep them on a tight rein in the overall process by requiring a constant rebalancing back to the long-term strategic asset allocation benchmark.
The effect of doing that was to theoretically shift the active performance driver to the stock selection level, where, statistically speaking, asset managers had better odds of success.
But, in March of 2003 – with the equity market still on a precipitous decline – even the highly esteemed pension fund consultant and economic historian Peter Bernstein suddenly lost heart, declaring that perhaps it was time to embrace market timing with open arms.
Bernstein argued that the assumption that stocks outperform bonds over the long term simply wasn’t true for every 10-year period. While the notions of equity risk premium and reversion to the mean, long embraced by institutional investors, might still be viable as theories, volatility could kill you first.
Bernstein argued that economic storms are the norm – and we better structure our solutions in anticipation. “The ocean will never be flat soon enough to matter. Equilibrium and central valuations are myths, not the foundations on which we can build our structures. We cannot escape the short run.”
In SA, pension funds may have arrived at a similar distrust of fixed asset allocation strategies around a strategic benchmark, but for different reasons altogether. Given the market environment over the past three years, when SA equities had run nearly 200%, fixed asset allocation strategies looked positively anaemic by comparison to managers who could shift their equity allocations up to the regulatory limit of 75%.
Trustees concluded that too much was being sacrificed by not having a policy that allowed a manager to add value by market timing.
Still, five months after Bernstein’s famous pronouncement he found himself apologetically but emphatically revising his “radical” notion of abandoning the long-term strategic benchmark. As he explained: “It was a case of the right church but wrong pew.”
Controlling against value destruction through eliminating a fund’s exposure to low probability investment decisions may well have merit in theory. But in reality asset classes do seem to exhibit a remarkable degree of performance persistence over certain critical periods. Trustees ignore that phenomenon at their peril.
What’s required to address this, then, is a framework that allows for some degree of revision in the asset allocation decision when these extreme periods of trending take place – without having to resort to the less successful approach of market timing and return forecasting.
In 2003, Seth Masters, of Alliance Capital, published a paper in the Journal of Portfolio Management that provided the first critical step forward. Masters proposed that an investor could capture some element of price momentum in asset classes in a fund’s rebalancing strategy.
Unlike traditional rebalancing strategies that simply relied on tolerance bands around the strategic asset allocation to indicate when the asset mix should be rebalanced, Masters devised a formula that recognised that a rebalancing strategy could be more effectively applied if it incorporated four additional critical factors: the correlations between the different asset classes in the mix, their volatilities, the implied cost of a given rebalancing strategy and, finally, the risk tolerance of the investor to an unbalanced asset mix.
TriggerPoint_i = (K × C_i) / (2 × TE_i²)

where K is the investor’s risk tolerance (the “K-factor”), C_i the cost of trading asset class i and TE_i the tracking error of asset class i relative to the benchmark mix.
Effectively, rebalancing is initiated only if there’s a marginal net benefit to a portfolio’s risk profile after costs. And because the portfolio is only rebalanced halfway back to its initial asset mix, the framework also allows for some momentum in asset class prices. For us – though studies by SA researchers¹ were able to demonstrate that the strategy added value – it constituted only a baby step forward. In lengthy discussions with a number of academics in the economics and statistics departments at North-West University, we argued that the key factor in Masters’ equation was the risk aversion parameter – the “K-factor”.
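As a minimal sketch of this rule – assuming the trigger point takes the form T = K × C / (2 × TE²), which is our reading of Masters’ formula, with illustrative parameter names – a no-trade band with halfway rebalancing might look like this:

```python
def trigger_point(k, cost, tracking_error):
    """Half-width of the no-trade band for one asset class.

    k: risk-tolerance parameter (higher k = wider band),
    cost: round-trip trading cost of the asset class (as a fraction),
    tracking_error: tracking error of the asset class vs. the mix.
    Names and form are illustrative, not Masters' notation verbatim.
    """
    return k * cost / (2.0 * tracking_error ** 2)

def rebalance_halfway(weight, target, k, cost, tracking_error):
    """Rebalance halfway back to target only when the deviation
    exceeds the trigger point; otherwise let the weight ride."""
    t = trigger_point(k, cost, tracking_error)
    if abs(weight - target) > t:
        return target + 0.5 * (weight - target)
    return weight
```

With k = 1, a 10-bp cost and 10% tracking error the band is 5%, so a 70% equity weight against a 60% target would be pulled halfway back to 65%, while a 63% weight would be left to ride.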
Surely, for most investors, aversion to risk must change as asset classes reach extreme levels of valuation? While markets appeared to be trending we’d want to let our asset mix ride. However, the moment we believed we were reaching a point of inflection we’d want to rebalance back to our optimal asset allocation as tightly as possible. What we were effectively looking for was a mechanism that would do exactly that. In other words, as the graph suggests, by changing the K-factor as the market trends (by assigning a higher risk aversion number) or turns (by assigning a lower risk aversion number) we effectively change the trigger points, causing the asset’s active weight to trend with the market or rebalance towards the target weight when the asset is expected to turn or revert.
In January 2005, Cadiz Quantitative Research produced their own findings on this problem – “Dynamic rebalancing”. It provided a critical key to addressing the problem.
Cadiz’s refinement of Masters’ approach was to allow some variability in the tolerance bands around each asset class by changing the K-factor or risk aversion parameter as asset classes reached a point of extreme valuation.
Cadiz was able to demonstrate that even the simplest signalling techniques – such as moving averages, exponentially weighted moving averages and autocorrelations – provided value in deriving a trigger point that would allow the asset mix to “ride” with momentum just that much longer. The key point was that here was additional alpha that could be captured without resorting to forecasting models.
But Cadiz acknowledged it wasn’t within the scope of their study to find signalling techniques grounded in fundamental valuations. Nor did Cadiz address the fact that most clients have no idea as to how to determine what their K-factor (risk aversion) is at any point in time, much less at market extremes. That was the challenge we took on.
Testing the predictive power of a number of traditional valuation measures supplied by our chief strategist for determining these points of price inflection was the easy part. All of the measures tested captured some element of valuation differentials between asset classes. However, the surprising outcome was that the most basic of these measures – the relative attractiveness of bond yields to earnings yields – often proved to be the most powerful. Not rocket science, but certainly consistent with academic findings. And when combined with the three momentum indicators first tested by Cadiz, the result was particularly robust.
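The yield-gap comparison can be sketched as follows; the 1% neutral band and the signal labels are illustrative assumptions, not the calibration used in the study:

```python
def valuation_signal(earnings_yield, bond_yield, band=0.01):
    """Relative attractiveness of equities vs. bonds via the
    gap between the equity earnings yield and the bond yield.
    The neutral band width is an illustrative assumption."""
    gap = earnings_yield - bond_yield
    if gap > band:
        return "equities_cheap"   # earnings yield well above bond yield
    if gap < -band:
        return "equities_rich"    # bond yield well above earnings yield
    return "neutral"
```

An 8% earnings yield against a 6% bond yield reads as equities cheap; a 4% earnings yield against 7% bonds reads as equities rich. A signal like this, voted alongside the three momentum indicators, gives the four-factor combination described above.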
But the difficult part of the exercise was determining our client’s specific K-factor. That we tackled through an iterative scenario-testing process, with the client as guinea pig. To facilitate the discussion we simply worked through what the client believed was the maximum deviation from the predetermined asset mix they could tolerate at any given point. In this specific case, the client determined that maximum to be 6%.
Using that parameter we then solved for the K-factor ranges this tolerance would imply for each asset class under the most aggressive (when the market is trending) and the most conservative (when the market is reverting) scenarios. Back-testing over various window periods was necessary, as the converted formula is influenced by asset class correlations (used to calculate tracking error) and differs under varying market conditions.
That means the formula for calculating trigger points needs to be revisited and rearranged to solve for K:

K = (2 × TriggerPoint × TE_i²) / C_i
By assigning different possible outcomes to the cost and tracking error parameters it was possible to calculate a distribution for the risk aversion parameter per allowable tolerance band. Repeating the simulation exercise and solving for K gave us the K-factor parameters that allowed the asset mix to drift up to the maximum specified trigger point. Graph 2 illustrates.
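The simulation step can be sketched as below, assuming the rearranged formula K = 2 × T × TE² / C; the cost and tracking-error ranges are invented for illustration and are not the study’s scenario set:

```python
import random

def k_for_trigger(trigger, cost, tracking_error):
    """Trigger-point formula rearranged to solve for the
    risk aversion parameter K (consistent with
    T = K * C / (2 * TE^2))."""
    return 2.0 * trigger * tracking_error ** 2 / cost

def k_distribution(max_drift=0.06, n=10_000, seed=42):
    """Monte Carlo over plausible cost / tracking-error scenarios
    to build a distribution of K values that cap the asset-mix
    drift at max_drift. Ranges below are illustrative assumptions."""
    rng = random.Random(seed)
    ks = []
    for _ in range(n):
        cost = rng.uniform(0.001, 0.01)   # 10-100 bps round trip
        te = rng.uniform(0.02, 0.15)      # 2%-15% tracking error
        ks.append(k_for_trigger(max_drift, cost, te))
    ks.sort()
    return ks[len(ks) // 2], ks[0], ks[-1]  # median, min, max
```

The resulting range brackets the K-factor settings – tight under conservative scenarios, loose under aggressive ones – that keep drift within the client’s 6% tolerance.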
To assess the robustness of our new rebalancing framework we then tested it over a period when the markets underwent their most significant performance reversals of the past eight years: the period between December 2002 and April 2003. By incorporating the K-factor parameters produced by the 6% maximum tolerance level, together with our four-factor signal, into our rebalancing model, the asset mix shifts shown in Graph 3 were generated over the period. Note the significant down-weighting in equities that takes place between December 2002 and April 2003.
How much value can this rebalancing framework add? Formulating the right expectations is key. This still isn’t a strategy that will tolerate dramatic shifts in asset mix – but the incremental added value over time isn’t trivial. Though tests were run for a variety of market conditions, the histogram in Graph 4 is particularly representative of what investors could expect from applying the dynamic rebalancing framework: it shows back-tested risk-adjusted return outcomes for the full range of rebalancing strategies.
While the effectiveness of any rebalancing strategy is clearly time-period specific, there’s no question that over long investment horizons dynamic rebalancing more than earns its keep as an alpha-enhancing process.
Clearly, we’re only just beginning to explore the possibilities of employing this framework. But while we’ll continue to search out more effective ways of capturing and measuring the model’s true potential, note that the model’s inputs can be pushed only so far before the whole framework reverts to an unhealthy dependency on forecasting tools. We’ve found that if we’re not too ambitious in what we demand of the model, our persistence is rewarded.
1. Cadiz Quantitative Research, November 2003.