A verification client called me with the following question: they have historically calculated dispersion around the composite's return; however, their new GIPS(R) (Global Investment Performance Standards) system measures it around the average of the accounts that were present for the full year. Which is better?
To clarify: GIPS compliant firms are required to include a measure of dispersion (e.g., standard deviation, range, high/low, quartile) for each year, provided there were six or more accounts present for the full year (if there are fewer than six, then including dispersion is optional). The composite's annual return is based on the monthly returns, which are linked together. Each month can have a different mix of accounts because, for example, accounts were removed because they: terminated, fell below the minimum, had a significant cash flow, had a change in strategy, or are now non-discretionary; or accounts were added because they are new, rose above the minimum, returned after removal because of a significant flow, are no longer non-discretionary, and so on.
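To make the linking step concrete, here is a minimal sketch of how monthly composite returns are geometrically linked into an annual return. The monthly figures are purely hypothetical, used only for illustration:

```python
# Hypothetical monthly composite returns for one year, in decimal form.
monthly_returns = [0.012, -0.004, 0.021, 0.008, -0.015, 0.010,
                   0.005, 0.017, -0.009, 0.011, 0.006, 0.014]

# Geometrically link the monthly returns: grow a dollar through each month,
# then subtract 1 to get the composite's annual return.
annual_return = 1.0
for r in monthly_returns:
    annual_return *= (1.0 + r)
annual_return -= 1.0
```

Because the account mix can change from month to month, the annual return produced this way reflects every account that passed through the composite, not just those present for the full year.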
The only accounts used for the dispersion measurement will be those that were present for the full year. If we calculate standard deviation across these accounts themselves, without any reference to the composite's return, then dispersion will be measured against the average of these accounts, which may not (and probably will not) be the same as the composite's annual return. To measure standard deviation against the composite's return, one would have to manually, so to speak, step through the standard deviation formula, inserting the composite's return into the equation in place of the mean, rather than letting the formula (e.g., Excel's STDEVP) run by itself. This would require more effort. You can get differences in results, as you might expect. Here's a quick example:
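The two approaches can be sketched as follows. The six account returns and the composite return below are hypothetical numbers chosen only to show the mechanics; note that the composite's return differs from the simple average of the full-year accounts, which is what drives the difference:

```python
import statistics

# Hypothetical annual returns for the six accounts present the full year.
account_returns = [0.082, 0.075, 0.091, 0.068, 0.079, 0.085]

# Hypothetical composite annual return (asset-weighted, geometrically linked
# monthly); it need not equal the simple average of the full-year accounts.
composite_return = 0.077

# Method 1: population standard deviation around the accounts' own average
# (this is what Excel's STDEVP computes).
dispersion_vs_average = statistics.pstdev(account_returns)

# Method 2: step through the formula manually, substituting the composite's
# return for the mean.
n = len(account_returns)
dispersion_vs_composite = (
    sum((r - composite_return) ** 2 for r in account_returns) / n
) ** 0.5
```

With these made-up figures, Method 1 gives roughly 0.73% and Method 2 roughly 0.79%. Method 2 can never be smaller than Method 1, since the mean is the value that minimizes the sum of squared deviations; the gap shrinks as the composite's return gets closer to the full-year accounts' average.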
And so, is either approach okay? Is one method preferred?
The standards do not speak specifically to this question. I would say that both approaches are acceptable. However, I believe dispersion is expected to be about the composite's return (we want to know how actual accounts varied relative to the reported return). But I suspect that most systems measure dispersion relative to the average of the account returns, not the composite's return. In the end, the differences are probably immaterial.