Monday, December 28, 2009
Standard Deviation ... a risk measure or not?
First, is it a risk measure? It depends on who you ask. It's evident that Nobel Laureate Bill Sharpe considers it to be one, since it serves this purpose in his eponymous risk-adjusted measure. Our firm's research has shown that it is the most commonly used risk measure.
And yet, there are many who claim that it does anything but measure risk. What's your definition of risk? If it's the inability to meet a client's objectives, how can standard deviation capture that? And yet, for decades investors have looked at risk simply as volatility.
As to volatility: is standard deviation a measure of volatility or of variability? In an e-mail response to this writer, Bill Sharpe said that the two terms can be used interchangeably.
The GIPS(R) (Global Investment Performance Standards) 2010 exposure draft includes a proposed requirement for compliant firms to report the three-year annualized standard deviation. That proposal appears to have survived public criticism and will be part of the rules, effective 1 January 2011. But will it be called a "risk measure"? This remains unclear.
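To make the requirement concrete, here is a minimal sketch of how a three-year annualized standard deviation might be computed from 36 monthly returns. The return values are made up for illustration, and whether the population or sample form should be used is exactly the kind of calculation detail firms will need to agree on (a point I return to below); this sketch uses the population form and annualizes by multiplying by the square root of 12.

```python
import statistics

# Hypothetical 36 monthly returns (decimal form), for illustration only.
monthly_returns = [0.012, -0.008, 0.021, 0.005, -0.015, 0.018,
                   0.003, 0.009, -0.011, 0.014, 0.007, -0.004] * 3

# Population standard deviation of the monthly series (divides by n).
monthly_sd = statistics.pstdev(monthly_returns)

# Annualize by scaling by the square root of the number of periods per year.
annualized_sd = monthly_sd * 12 ** 0.5
```

Note that using `statistics.stdev` (the sample form, dividing by n - 1) would give a slightly larger figure on the same data.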
Interpreting standard deviation is a challenge, since its value only has meaning relative to the average return around which it's measured. Example: your standard deviation is 1 percent; is this good or bad? If your average return is 20%, then knowing that roughly two-thirds of the distribution falls within plus-or-minus 1% doesn't seem bad at all. But if your average return is 0.50%, doesn't that same 1% sound a lot bigger? In reality, it's better used to compare managers with one another, or a manager with a benchmark. Better yet, use it as part of the Sharpe ratio, which brings risk and return together.
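The point can be illustrated with two hypothetical return series that have exactly the same standard deviation but very different averages; the Sharpe ratio then separates them cleanly. The numbers below are invented, and the risk-free rate is assumed to be zero purely to keep the sketch simple.

```python
import statistics

# Two hypothetical managers with identical dispersion around very
# different average returns (values are illustrative only).
manager_a = [0.19, 0.21, 0.20, 0.19, 0.21]        # averages 20%
manager_b = [-0.005, 0.015, 0.005, -0.005, 0.015]  # averages 0.5%

def sharpe(returns, risk_free=0.0):
    # Sharpe ratio: average excess return per unit of standard deviation.
    excess = statistics.mean(returns) - risk_free
    return excess / statistics.stdev(returns)
```

Both series have a (sample) standard deviation of 1%, yet manager A's Sharpe ratio is forty times manager B's: the same 1% of dispersion matters far more around a 0.50% average than around a 20% one.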
I could go on and on, but will bring this to a close. Bottom line: it's easy to calculate (if we can agree on how, a topic I didn't address today), it's in common use, and it has a Nobel Prize winner's endorsement. Will it go away? Not a chance. If you're not reporting it, you probably should be.
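As a postscript on the "if we can agree on how" aside: even this simple statistic has two common variants, the population form (dividing by n) and the sample form (dividing by n - 1), and they give different answers on the same data. A quick sketch with made-up returns:

```python
import statistics

# Hypothetical return series, for illustration only.
returns = [0.02, -0.01, 0.03, 0.00, 0.01]

pop_sd = statistics.pstdev(returns)    # population form: divides by n
sample_sd = statistics.stdev(returns)  # sample form: divides by n - 1
```

The sample figure is always the larger of the two, and the gap shrinks as the number of observations grows, which is one reason firms comparing reported numbers need to know which formula was used.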