Monday, September 20, 2010

How's your index data? Is it clean enough?

I've been having a discussion with a colleague on the topic of index data. I suspect that many asset managers don't give these statistics much thought, simply taking in whatever values they're given by their data providers. But should they?

Many performance measurement professionals recognize that there can be problems with index data. Some managers track the constituents themselves and identify problems when they occur. In some cases they reach out to the index provider and alert them. But do the vendors correct the problems? Sometimes, yes; but not always.

Given that indexes play an important role in performance measurement (for example, we use them in attribution, in client reporting, and in our GIPS(R) (Global Investment Performance Standards) presentations), should we be paying them more attention? Or, do we think that (a) the errors are immaterial or (b) that over time they'll work themselves out?

Any thoughts?


  1. I fall into the same terminology habit myself, but I think it would be better if we used the term "benchmark data" instead of "index data," since any return series (manager or benchmark) can be expressed in the form of an index.

    The term "index" just means your cumulative return series has been normalized to some initial number, such as 100.
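    The normalization the commenter describes can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's method; the function name, base value of 100, and the sample monthly returns are all made up for the example.

    ```python
    # Convert a periodic return series into an index normalized to a base value.
    # Hypothetical sketch: the monthly returns below are invented figures.

    def to_index(returns, base=100.0):
        """Cumulate periodic returns (as decimals) into an index series,
        starting from the chosen base value."""
        index = [base]
        for r in returns:
            index.append(index[-1] * (1.0 + r))
        return index

    monthly_returns = [0.02, -0.01, 0.03]  # +2%, -1%, +3%
    print([round(v, 4) for v in to_index(monthly_returns)])
    # [100.0, 102.0, 100.98, 104.0094]
    ```

    The point is that the choice of base (100, 1000, 1.0) is arbitrary; the return series recovered from the index values is the same either way.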

  2. This issue of the accuracy and/or comparability of benchmark data has always been a potential dilemma, especially for attribution. Differences in how the vendor and the asset manager categorize assets can be problematic, as can differences between the pricing of securities in the benchmark and the pricing of the same securities in the manager's fund. Vendors usually solve the latter problem by using the same pricing for the benchmark and the fund.

