What this tells us is that even if two portfolios have identical strategic equity allocations, performance differentials can still arise between them if each portfolio is mandated to a different underlying benchmark or performance target.
For example, of all three indices, the ALSI has the highest resources sector exposure. When the market collapsed in 2008, resources were instrumental in leading the collapse. It should be no surprise then that growth portfolios (with a heightened asset allocation to equities) referencing the ALSI would underperform similar growth portfolios not benchmarked to the ALSI. However, in the three-year period preceding the crash, exactly the opposite outcome was reflected. More recently, the underlying ebb and flow of super sector-driven performance still influences the return trends of portfolios with different benchmarks.
Another example looks at portfolios that employ the Capped SWIX index. This is an index in which all constituents with a weight greater than 10% in the index will be capped at a fixed level of 10%. The Capped SWIX addresses the diversification and concentration concerns inherent in the SWIX (which addressed the resource super sector concentration prevalent in the ALSI, but not individual stock concentration). If we compare a Capped SWIX-referenced portfolio to an ALSI or SWIX-referenced portfolio, where a share like Naspers could represent well over 20% of the index, we would expect the Capped SWIX-referenced portfolio to underperform the ALSI and SWIX-referenced portfolios during a time when Naspers rallied.
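The capping mechanics can be sketched in a few lines. The snippet below is purely illustrative, not the actual FTSE/JSE capping methodology: it caps any constituent above the threshold and redistributes the excess weight pro rata across the uncapped constituents, repeating until no weight breaches the cap.

```python
def cap_weights(weights, cap=0.10):
    """Illustrative index capping: clip any weight above `cap` and
    redistribute the excess pro rata across the uncapped names,
    iterating in case the redistribution pushes another name over."""
    w = dict(weights)
    while True:
        over = {k for k, v in w.items() if v > cap + 1e-12}
        if not over:
            return w
        excess = sum(w[k] - cap for k in over)
        for k in over:
            w[k] = cap
        under = {k: v for k, v in w.items() if k not in over and v < cap - 1e-12}
        if not under:
            raise ValueError("cap infeasible: nowhere to place excess weight")
        total_under = sum(under.values())
        for k in under:
            w[k] += excess * under[k] / total_under
```

Applied to a hypothetical index where one share holds 22% (the Naspers situation described above) and thirteen others hold 6% each, the oversized holding is clipped to 10% and the freed-up 12% is spread across the rest.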
In both examples, the issue of comparably higher or lower absolute returns is a function of risk referencing to different benchmarks. This will necessarily lead to different performance outcomes – up to a point – depending on the index concentration biases at play.
Managing concentration risk reduces the probability of a single market event affecting a portfolio’s value. The management of a portfolio’s concentration risk should therefore form an integral part of the portfolio management process. This could take the form of applying risk allocation limits on single-stock exposures on both an absolute and a relative basis – for example, monitoring single-stock risk exposure within certain parameters, say between 10% and 15% of total portfolio risk. It should be noted that this level of risk mitigation is more applicable to equity-only portfolios, where concentration risk is exacerbated relative to multi-asset portfolios.
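As an illustration of such monitoring (a hypothetical sketch, not any particular manager’s process), each holding’s share of total portfolio variance can be computed from the weight vector and covariance matrix and compared against the mandated band:

```python
def risk_contributions(weights, cov):
    """Fractional contribution of each holding to total portfolio
    variance: w_i * (cov @ w)_i / (w' cov w). Contributions sum to 1."""
    n = len(weights)
    cov_w = [sum(cov[i][j] * weights[j] for j in range(n)) for i in range(n)]
    port_var = sum(weights[i] * cov_w[i] for i in range(n))
    return [weights[i] * cov_w[i] / port_var for i in range(n)]

def breaches(weights, cov, limit=0.15):
    """Indices of holdings whose risk contribution exceeds the limit
    (here an illustrative 15% of total portfolio risk)."""
    return [i for i, rc in enumerate(risk_contributions(weights, cov))
            if rc > limit]
```

With ten equally weighted, uncorrelated holdings, each contributes 10% of portfolio risk and nothing is flagged; tilt one holding to a 30% weight and its risk contribution rises well past the 15% limit, illustrating how a weight concentration translates into an outsized share of portfolio risk.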
So, if we are comparing performance figures between various portfolios, it is not enough to ensure that they are genuinely comparable from a strategic asset allocation perspective; we also need to assess each portfolio’s performance in the context of its mandated benchmark. Because of the varying magnitude of risk biases inherent in each index, there will be times when the performance differentials between like-for-like portfolios that employ different underlying benchmark indices could be significant.
Appropriateness
Good performance does not necessarily mean the right performance for your needs
Because pension funds have their own unique needs and risk profiles, an appropriate portfolio should be one designed to deliver the highest likelihood of meeting required investment objectives over the long term and in line with specific risk constraints. The suitability of a portfolio must be considered in the context of a fund’s own long-term objectives and risk budget.
What is not immediately obvious from performance figures is the amount of risk taken over time to deliver the performance. Today, most portfolio comparisons are supplemented with a risk measure alongside the performance figures over different time periods. But, it is only when looking at how the risk profile of the fund has changed over time that you really get a better sense of whether a portfolio's design manages risk carefully and consistently, within a certain ‘risk budget’. In other words, assessing whether the portfolio has been positioned to take on varying degrees of excess risk in the pursuit of delivering superior performance.
Depending on portfolio management guidelines, portfolios can, and do, take big investment positions that can work out well and set them apart as top performers for a period. But these positions can also go wrong, and it could then take a portfolio a long time to make up the lost ground, let alone deliver the returns funds need to achieve their investment objectives. Taking more risk does not always equate to higher returns. Ultimately, a fund needs to choose the portfolio that matches its appetite for risk. The only way to do this is to get a deeper understanding of what that risk profile has looked like over time.
Performance differentials can also be a function of differences in how risk management is defined in portfolios’ mandates. In other words, is the portfolio’s risk constraint defined as capital loss, absolute return volatility or deviation from the benchmark (tracking error)? The answer could yield significantly different structural designs between portfolios, which determine how much excess risk they can assume.
So, when comparing headline performances, it is important to establish whether a portfolio demonstrates a consistent risk profile or considerable variation in its risk positions through time. Funds that remain conscious of their long-term investment objectives, as well as the degree of risk they are willing to take to reach those objectives, will stand a better chance of evaluating which portfolios are right for them.
Performance path
The importance of a meaningful timeframe
Performance figures are reported at a particular point in time over specified periods (one, three and five years). Often, performance is also reported in rolling windows over such periods. The problem is that these performance numbers are as much a function of the latest month’s data entering the window as they are of the first month’s data dropping off, because the reporting timeline moves on to the next month, and then the month after that. The graph below illustrates this point using a timeframe from October 2018 to October 2019, but there are many other periods one could use to arrive at a similar conclusion.
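To make the mechanics concrete, here is a minimal sketch (illustrative only) of how a rolling 12-month return series is built: each successive window picks up the newest month and drops the oldest, so the reported figure can shift materially even when recent performance is unremarkable.

```python
def rolling_cumulative_returns(monthly_returns, window=12):
    """Compound each `window`-month span of simple monthly returns.
    Stepping the window forward adds the latest month and drops the
    earliest, so the series changes for both reasons."""
    out = []
    for start in range(len(monthly_returns) - window + 1):
        growth = 1.0
        for r in monthly_returns[start:start + window]:
            growth *= 1.0 + r
        out.append(growth - 1.0)
    return out
```

For example, a single strong month at the start of a flat 13-month series produces a 10% rolling figure in the first window and 0% in the next, purely because that month fell out of the window, not because anything happened in the latest month.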