Earlier this week I mentioned that I have finally sent in my comments regarding GIPS 2010. Well, I find myself slightly modifying my view about the planned requirement for a 3-year annualized standard deviation ... okay, maybe more than slightly.
I'm working on a research article regarding this very topic (annualized standard deviation), as I contend that it is a misleading number that is difficult (perhaps impossible?) to interpret. But, lacking empirical evidence to support this view, I was at a loss to criticize the proposal. Well, one of the articles I am using for my paper is very much opposed to standard deviation: Brett Wander & Ron D'Vari, "The Limitations of Standard Deviation as a Measure of Bond Portfolio Risk," The Journal of Wealth Management, Winter 2003.
Don't be misled by the title: the authors' criticism of standard deviation goes beyond merely its use with bond portfolios, although the specific issues with bonds provide additional concerns about the measure. The breadth of criticisms about standard deviation should cause us to wonder whether this measure deserves to be "the chosen one" for the very important GIPS® standards. As a colleague recently explained, by requiring the use of standard deviation the standards are implying that it's the best risk measure, a suggestion many would challenge.
You're no doubt familiar with many of the criticisms of standard deviation: it requires a significant number of data points, assumes a normal distribution, treats above-average returns the same as below-average ones, and so on. These authors bring up other issues, such as the questionable statistical significance of the measure, stating that even “five years of monthly data will only provide marginal statistical significance.”
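To make the proposed requirement concrete, here's a minimal sketch (in Python, with fabricated return data) of how a 3-year annualized standard deviation is typically computed: the sample standard deviation of 36 monthly returns, scaled by √12. Note that the √12 scaling is the common industry convention, not something the GIPS proposal or the article above spells out, and it is one source of the interpretability concern: the resulting figure is not the standard deviation of actual annual returns.

```python
# A minimal sketch of the 3-year annualized standard deviation calculation.
# The monthly returns here are randomly generated for illustration only.
import math
import random

random.seed(0)
# 36 monthly returns (3 years), drawn from a normal distribution as a stand-in
monthly_returns = [random.gauss(0.007, 0.03) for _ in range(36)]

n = len(monthly_returns)
mean = sum(monthly_returns) / n

# Sample standard deviation of the monthly returns (n - 1 in the denominator)
monthly_std = math.sqrt(sum((r - mean) ** 2 for r in monthly_returns) / (n - 1))

# Annualize by multiplying by sqrt(12) -- the common convention. This is the
# step that makes the number hard to interpret: it rescales monthly dispersion
# rather than measuring the dispersion of annual returns directly.
annualized_std = monthly_std * math.sqrt(12)

print(f"monthly std dev:    {monthly_std:.4%}")
print(f"annualized std dev: {annualized_std:.4%}")
```

The same convention is applied whatever the underlying frequency (√4 for quarterly data, √252 for daily), which is part of why two "annualized" numbers may not be comparable.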
In spite of the measure's shortcomings, given that it's commonly used (as per our firm’s research), easily obtained, and generally understood, there probably was no harm in employing it. Well, my research will challenge the last point, and the authors question its overall validity as a measure.
So, what are we to do? Perhaps the standards should simply require a measure of risk, just as they require a measure of dispersion. The likely response would be “well, how can a consumer compare two managers if they use two risk measures?” The simple answer could be “that’s not our problem”; the same challenge exists with dispersion (one manager might use high-low while another uses standard deviation). One response could be to note that prospects can, of course, require the managers to report risk using the same measure. But that would be between the prospect and the contenders for its business.
Bottom line: finding an appropriate risk measure is a difficult issue, and there is no simple solution. While standard deviation may appear to be the solution, it’s fraught with inherent challenges that should give us pause before rushing forward. I remain somewhat ambivalent on this matter, so expect more to follow.
By the way, if you visit the GIPS website (www.gipsstandards.org/news/releases/2009/view_comments.html) you’ll find only 14 comment letters so far. Don’t wait too long to get yours in...it WILL count!
Friday, June 5, 2009
No standard can be perfect and no standard can represent what all people believe is important. The idea of a standard is to provide a level of confidence and comparability. Maybe the question that you should be asking is: “What is the purpose of the standard?” Does it make any sense for a company that has only mutual funds to have composites of one fund? Of course it doesn't, but to make things consistent, that is what is currently required. The performance calculated is after all only a sample of what may have been produced using the same strategy given only slightly different circumstances and assumptions. It includes some things that are luck and some that are skill.
Managers are still encouraged to include other risk measures. By including it in the standards, standard deviation is not being identified as the best measure of risk, but as a common one. By requiring a 3-year number, you improve comparability. I do agree that showing a three-year number going back in time seems a bit arbitrary.
If you are going to eliminate the standards' ability to identify a common measure of risk, you might as well eliminate the idea of standards altogether. If anything, the standards did not go far enough to include things like tracking error, expected tracking error, and Sharpe ratios. While they too have problems, none of these are any worse than the issues of looking at past performance in general. The underlying principle is still to present the results as a fair representation of the strategy.