A fully quant system will normally analyse only funds, not the managers themselves, because managerial style and performance should play out in the data. By proxy, a quant system may still identify star managers and, in turn, star funds, independent of any ‘relationship’. The same can be said for investment style or cap size.
Take the UK All Companies sector, for example. If a particular investment style or cap size is performing well, a quant system should pick up on it and make buy recommendations accordingly; it might, for instance, move from mid to large cap, or from value to growth (and vice versa).
But what happens if a fund manager leaves, or if underlying holdings change? It depends on how the change affects the fund’s performance. If performance worsens, any quant system worth its salt should pick up on it and make a sell recommendation. Conversely, if the changes mean the fund performs better, the same system should continue to hold it.
Let’s consider Neil Woodford (as I’m sure everybody in this space has recently). The CleverEngine (the quantitative fund screening system that drives Clever’s managed portfolio service and its in-house fund screening tool, CleverAdviser) didn’t pick his Equity Income fund, because the system requires at least 36 months of performance data before any fund is considered. Why? Because it needs to see a sustained track record before buying into a fund, as should any fund screening system, quant or human-powered. When Woodford’s fund did appear on the system after three years, its performance did not score highly enough against its peer group to be picked.
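A minimum-track-record rule like this is simple to express in code. The sketch below is purely illustrative (the fund names, field names and helper function are my own assumptions, not Clever’s implementation): it filters out any fund with fewer than 36 months of history as of a given review date.

```python
from datetime import date

def eligible_funds(funds, as_of, min_months=36):
    """Keep only funds with at least `min_months` of performance history.

    `funds` is a list of dicts with a `launch_date`; field names are
    hypothetical, chosen for this example.
    """
    def months_of_history(launch):
        # Whole calendar months between launch and the review date.
        return (as_of.year - launch.year) * 12 + (as_of.month - launch.month)

    return [f for f in funds if months_of_history(f["launch_date"]) >= min_months]

# Illustrative data: a young fund is excluded, an established one passes.
funds = [
    {"name": "New Fund", "launch_date": date(2017, 6, 1)},
    {"name": "Established Fund", "launch_date": date(2010, 1, 1)},
]
print([f["name"] for f in eligible_funds(funds, as_of=date(2019, 1, 1))])
# → ['Established Fund']  (19 months of history vs 108)
```

The same gate applies to every fund, star manager or not, which is the point of the rule.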
Would Clever’s system have selected a Woodford fund 10 years ago? Yes, and it did. The Invesco High Income UK and Invesco Income UK funds, both managed by Woodford at the time, were selected for substantial periods between May 1999 and August 2013.
The point made in Portfolio Adviser’s article, that “too many people trim their winners and don’t trim the losers”, is an excellent one: even if a sector has a bad year, a fund that is performing well against its peer group should be held. After all, who doesn’t want to stick with a winning fund? Clever aims to do exactly that, recommending (and staying with) funds that are performing well according to the numbers and moving out of funds that are losing performance momentum, no matter who manages them.
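The logic here is a peer-relative screen, not an absolute-return one. A minimal sketch, assuming a simple top-quartile rule (the quartile threshold and fund names are my own illustrative choices, not a description of Clever’s actual scoring):

```python
def peer_relative_screen(peer_returns, holding, top_fraction=0.25):
    """Hold a fund if it sits in the top fraction of its peer group by
    return; otherwise recommend a sell. Ranking is relative, so a fund
    can be 'winning' even when the whole sector is down."""
    ranked = sorted(peer_returns, key=peer_returns.get, reverse=True)
    cutoff = max(1, int(len(ranked) * top_fraction))
    winners = set(ranked[:cutoff])
    return "hold" if holding in winners else "sell"

# A down year for the sector: every fund is negative, yet Fund A
# still leads its peers and is held.
peers = {"Fund A": -2.0, "Fund B": -6.5, "Fund C": -8.1, "Fund D": -9.4}
print(peer_relative_screen(peers, "Fund A"))  # → hold
print(peer_relative_screen(peers, "Fund C"))  # → sell
```

Note that the screen never asks who manages the fund; only the numbers relative to the peer group matter.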
Of course, quant systems like Clever don’t win all the time, but the question is how often do they win?
In early 2018, the company approached the Hartree Centre, a centre of excellence for high-performance computing, big data analytics and cognitive technologies run by the Science and Technology Facilities Council. The goal was to see whether machine learning or artificial intelligence (AI) could improve the CleverEngine’s win rate against the sector average. Using a sample data set from June 2003 to September 2018, we analysed over 500m data points. In short, the conclusion was that, for now, AI does not improve the CleverEngine’s already outstanding performance and win rate against the sector average over a following 12-month period.
The same analysis did, however, show that Clever produced consistent results over that period, as follows:

- Clever outperforms the sector average 66% of the time; and
- 50% of the time, Clever outperforms the sector average by at least 2.67%.
Incidentally, Professors Steve Thomas and Andrew Clare of Cass Business School found very similar results when they tested the CleverEngine as part of a research paper on centralised investment processes, published in October 2014.
I wonder how many investment panels out there can say that they have produced such strong and consistent results?