Thursday, October 1, 2009

Measuring

Methodology, like sex, is better demonstrated than discussed
- E.E. Leamer

I'm reading Measurement, Design and Analysis by Pedhazur & Schmelkin for a course I'm taking and am finding it quite interesting. While we often address topics such as how to measure returns or risk, there is an entire discipline that addresses the broader subject of "measurement."

One thing the authors address is the average, something many of us calculate regularly. When it comes to performance, the reality is that in many cases, NO ONE gets the average return! Think about your GIPS composites: you show an asset-weighted return (which is an average), but do any of the accounts in the composite achieve it? In my experience as a verifier, it's not uncommon to find that no one does. This is one reason we require a measure of dispersion, to show the breadth of returns around this average.
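To make the point concrete, here's a minimal sketch in Python. The account names, asset values, and returns are all made up for illustration; the asset-weighted composite return and the equal-weighted standard deviation (a common dispersion measure) follow the usual formulas.

```python
import statistics

# Hypothetical composite of three accounts: beginning assets and period returns.
accounts = {
    "A": {"assets": 10_000_000, "return": 0.042},
    "B": {"assets": 30_000_000, "return": 0.051},
    "C": {"assets": 60_000_000, "return": 0.060},
}

# Asset-weighted return: sum of (weight x return), weighted by beginning assets.
total_assets = sum(a["assets"] for a in accounts.values())
composite_return = sum(a["assets"] / total_assets * a["return"]
                       for a in accounts.values())
print(f"Composite (asset-weighted) return: {composite_return:.4%}")  # 5.5500%

# Note that no individual account actually earned 5.55%.

# Standard deviation of the account returns, one way to show the
# spread of results around the composite average:
dispersion = statistics.pstdev(a["return"] for a in accounts.values())
print(f"Dispersion (std dev): {dispersion:.4%}")
```

The composite return of 5.55% falls between accounts B and C, but matches none of them, which is exactly why a dispersion measure belongs alongside the average.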

There is such an allure, an almost magical quality,
in specialized terminologies, in formulas and fancy analyses
- Pedhazur & Schmelkin

The authors suggest "that the choice of an analytic approach is by no means a routine matter," and I suggest that many of us wrestle with this on a regular basis. We are frequently told by firms that they have to report time-weighted returns, and sometimes I ask "why?" The typical response: "because GIPS requires it." BUT, in many of these cases GIPS doesn't apply, and yet they believe they're bound to the tradition of using time-weighting, even when it doesn't make sense! To paraphrase these authors, "returns and risk measures are generally presented with little or no attention to substantive context, the characteristics of the manner in which they are applied, or the properties of the measures used." To extend my paraphrasing a bit further, "knowledge of the methods and analytic approaches employed is essential for critical evaluation of a performance or risk report." But how extensive IS this knowledge?
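To illustrate why the choice of method matters, here's a sketch with made-up numbers contrasting a time-weighted return with a money-weighted return (the IRR). The portfolio values and the contribution amount are hypothetical; the IRR is solved numerically by bisection.

```python
# Two periods: portfolio starts at 100, grows to 110, receives a 50
# contribution, then the 160 falls to 144 in the second period.
bv1, ev1 = 100.0, 110.0          # before the contribution
bv2, ev2 = 160.0, 144.0          # after the 50 contribution (110 + 50)

# Time-weighted: chain the sub-period returns, neutralizing the flow.
r1 = ev1 / bv1 - 1               # +10%
r2 = ev2 / bv2 - 1               # -10%
twr = (1 + r1) * (1 + r2) - 1    # (1.10)(0.90) - 1 = -1%

# Money-weighted: solve 100*(1+r)^2 + 50*(1+r) = 144 for r by bisection.
def end_value(r):
    return 100 * (1 + r) ** 2 + 50 * (1 + r)

lo, hi = -0.99, 1.0
for _ in range(100):
    mid = (lo + hi) / 2
    if end_value(mid) < 144:
        lo = mid
    else:
        hi = mid
irr = (lo + hi) / 2

print(f"TWR: {twr:.2%}, IRR (per period): {irr:.2%}")
```

The two methods disagree: the TWR is -1%, while the IRR comes out lower (roughly -2.4%), because money-weighting penalizes the large contribution that arrived just before the losing period. Which figure is "right" depends entirely on the question being asked, which is the authors' point about analytic choices not being routine.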

The authors cite D.A. Freedman's exhortation to "start a new trend." Well, Steve Campisi, Stefan Illmer, and a few others have joined me in an effort to do just that, regarding at least the way we measure returns, and more likely much more. As I continue to make my way through this 800-page book, don't be surprised if you see further references to it in the future.

p.s., if you're wondering how the lead quote fits in, I can't recall ... I'm sure I had a good reason for using it, other than it sounded neat. Plus, it gave me the opportunity to show off one of my favorite clip art pieces!

1 comment:

  1. I'm not sure if you're referring to comparing the differences between time-weighted returns and the IRR.

    I do, however, want to make this claim to the readers of this blog (if I'm the first one to make it, I'm happy; if not, so be it): every performance calculation has its own limitations, and the "daily" time-weighted method shouldn't be held up as inherently more accurate than monthly or other calculation methods.

    The limitations rest in the following areas: pricing data, accounting data, and the equation itself (by the way, all performance calculations require these inputs).

    Finally, I also want to comment on the quote. I believe specialized terminologies are invented because they help us differentiate someone we "THINK" knows something from someone who knows nothing. By no means does knowing more terminology make someone smarter.

