Wednesday, October 27, 2010

There are no original questions, only original answers

This thought occurred to me when a few of us were wondering about a particular issue and thought of searching for it on Google. Through experience we have come to realize that whatever question we come up with, someone has asked it in the past. And while there may not be consistency in the answers, there should at least be some help available from some source.

Sometimes, when asked a question, a person will speak "with authority," as if surely their answer is the only correct one, when in reality it's merely their opinion. I am no doubt guilty of this, myself, and should qualify some responses with "in my opinion."

I am finding that in academic writing, when making a case, the author/researcher must draw upon the existing body of knowledge to support their position. It is not unusual to find well in excess of one hundred sources referenced in these articles. As I pursue my doctorate, I am faced with this challenge, and will be drawing upon much of what exists in the performance literature (as my topic, no surprise I'm sure, is performance-related).

If a person only quotes materials that he/she has written as authoritative proof of their position, one might be understandably skeptical. This doesn't necessarily mean the person is wrong; just consider that there's minimal evidence to support their claim. On the other hand, having loads of documents to back one up may not necessarily mean that they're correct, either. This, of course, can make this analysis quite frustrating.

When we take over from another GIPS(R) (Global Investment Performance Standards) verification firm we often find situations where mistakes were made. Occasionally, when we offer a different opinion, the client may think that it's merely a "matter of opinion." This, of course, might be true. Fortunately with GIPS we have the standards themselves, guidance statements, and Q&As to turn to for support. But these may still not be sufficient, in which case it's necessary to be able to build a case for a position which is close to being irrefutable.

We receive questions on almost a daily basis. Oh, and contrary to this post's subject, some of the questions are original! Our industry is still, in a sense, in its infancy with much still to be discovered. Consequently, the answers aren't always obvious. And, our answers might change as new information is presented or discovered. This, I believe, adds to the enjoyment of performance measurement.

Tuesday, October 26, 2010

Consistency in performance

I recently came across the attached clip from USA Today, which was actually published several years ago (I had saved it). While it may be a bit hard to read, what we are seeing is that one of the judges asked a contestant to "tone it down," which he did. The following week he was criticized for not "exuding more." What's the point? Inconsistency.

Years ago, while attending an ROTC officer training summer camp, I experienced similar swings in directives from the officer who was in charge of my group. There was a huge swing in his approach the second time I was evaluated compared with the first. This was frustrating and impossible to deal with.

With performance measurement, consistency is often thought of as an important criterion to employ. Changing the rules in a conflicting manner can be a huge problem.

That doesn't mean we can't make changes, but they should be understood, rationalized, justified, and communicated. For example, going from Brinson-Hood-Beebower to Brinson-Fachler for your equity attribution will result in some pretty big changes in the allocation effect. Knowing this, understanding it, and communicating what may occur is very important.
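To get a feel for the size of the shift, here's a minimal sketch (the sector names, weights, and returns are all invented) comparing the allocation effect under the two models:

```python
# Hypothetical sketch: allocation effect under Brinson-Hood-Beebower vs.
# Brinson-Fachler. wp/wb are portfolio and benchmark sector weights; rb is
# the benchmark sector return.
sectors = {
    #         wp    wb    rb
    "Tech":  (0.50, 0.40, 0.08),
    "Utils": (0.50, 0.60, 0.02),
}
rb_total = sum(wb * rb for wp, wb, rb in sectors.values())  # overall benchmark: 4.4%

for name, (wp, wb, rb) in sectors.items():
    bhb = (wp - wb) * rb              # BHB: any overweight to a positive sector looks good
    bf = (wp - wb) * (rb - rb_total)  # BF: overweight rewarded only if sector beats the benchmark
    print(f"{name}: BHB={bhb:+.4f}  BF={bf:+.4f}")
# Sector-level effects differ considerably, though the totals agree:
# active weights sum to zero, so the rb_total term washes out in the sum.
```

Note that the total allocation effect is the same under both models; it's the sector-level story that changes, which is exactly what needs to be communicated.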

Introducing money-weighted returns isn't a contradiction with consistency because we wouldn't introduce it to measure the manager's performance; rather, it would be to supplement what is done and to provide the client with the return on their portfolio (i.e., how they performed).
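To illustrate the distinction, here's a hypothetical sketch (the returns, the flow, and the bisection helper are all mine) of a client who adds money just before a poor period:

```python
# Hypothetical sketch: time- vs. money-weighted returns on a two-period portfolio.
# The client doubles their investment right before a bad period; the manager's
# TWR ignores the flow's timing, while the client's money-weighted return (IRR)
# reflects it.
r1, r2 = 0.10, -0.05
twr = (1 + r1) * (1 + r2) - 1  # time-weighted: 4.50%

bmv, flow = 100_000, 100_000
emv = (bmv * (1 + r1) + flow) * (1 + r2)  # flow arrives at the end of period 1

def irr(emv, bmv, flow, lo=-0.99, hi=1.0, tol=1e-10):
    # Bisection: bmv compounds for two periods, the flow for one.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if bmv * (1 + mid) ** 2 + flow * (1 + mid) > emv:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

print(f"TWR: {twr:.2%}")                  # 4.50%: the manager's performance
print(f"MWR: {irr(emv, bmv, flow):.2%}")  # about -0.17%: the client's experience
```

Same portfolio, two very different answers, because they're answering two different questions.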

Consistency is something to be mindful of when employing performance measurement systems and approaches.

Saturday, October 23, 2010

The art of writing is rewriting

As someone who loves to write, I learned a long time ago that there really is no such thing as "writing," per se; rather it's rewriting. One must be prepared to write, revise, and revise again. Often with letters and reports I do this many, many times. Today of course such acts are pretty simple, but before word processors they were much more challenging.

My last job in the Army was in the Field Artillery's Directorate of Evaluation, where as an operations research analyst I engaged in studies, which resulted in rather lengthy reports. Fortunately, I didn't type the reports: we had civilian employees who did this. But, because the best we had were IBM typewriters, when we made revisions the entire document (or at least the pages affected) had to be retyped. And revise and revise again we did.

Allegra Goodman discusses this topic in a WSJ article this weekend; I encourage you to read it. It's quite brief, and the investment will be a good one. She points out that even the best writers revise.

Even with blog posts I revise and revise and still miss things. Take yesterday's post, for example. I read through it about four times, making slight changes each time, but still managed to miss an error in a calculation (a calculation I've written hundreds of times, mind you). Thanks to my friend Steve Campisi the error was caught and corrected.

We revise for a number of reasons:
  • to find errors, such as in formulas, grammar, word choices, and spelling
  • to eliminate text that we don't need
  • to make our writing better
  • to replace passive verbs with active ones
  • and on and on. We can always find some way to make improvements.
I rarely let something go without at least one review; and when I fail to take the time to read through at least once, I often later find out I had made an error. Even e-mails should get a read-through or two; but unfortunately, in my haste to get something out, they often include a boo boo or two, which is regrettable and inexcusable.

I also love it when someone else reviews my work. Many of my more important letters and reports get reviewed by one or two folks in the office. Granted, neither is an English major, but both are pretty good at finding things. I also often ask my wife, Betty, to review my work, as she's great at catching things; but since she doesn't work for us, I can't take advantage of her copy editing as often as I'd like.

And so, what does all of this have to do with performance measurement? Not much, but let's simply call it a weekend diversion, sparked by Ms. Goodman.

Friday, October 22, 2010

Which version of Modified Dietz is better?

This week I'm reviewing a software vendor client's system and saw that they used a version of Modified Dietz which we typically don't see. Here's the normal form:

$$R = \frac{EMV - BMV - \sum_i CF_i}{BMV + \sum_i w_i \cdot CF_i}, \qquad w_i = \frac{CD - D_i}{CD}$$

(where $CD$ is the number of days in the period and $D_i$ is the day of flow $i$), and here's what they use:

$$R = \frac{EMV - \sum_i (1 - w_i) \cdot CF_i}{BMV + \sum_i w_i \cdot CF_i} - 1$$
In reality they will provide the same result so it really shouldn't matter which we use, right? In fact, I was first introduced to the second formula way back in the 1980s. I prefer the first version because I think it's more intuitive. What are we doing in the second? Does it make sense? Can you explain it?

I have reflected on the first form quite a bit and think its meaning is clear: the numerator is the portfolio's gain or loss for the period (the ending value, less the starting value, less the net flows), while the denominator is the average capital invested (the starting value plus the day-weighted flows).
To me you can rationalize what is being done; not so easy with the second version. I think the second is a bit more challenging to implement, too, so I vote for #1! How about you?
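As a check, here's a minimal sketch (function and variable names are mine) confirming the two forms agree:

```python
# Minimal sketch: the two Modified Dietz forms produce the same result.
def mod_dietz_v1(bmv, emv, flows, days):
    """Standard form: gain over average invested capital.
    flows is a list of (day, amount) pairs; day 0 is the start of the period."""
    net = sum(cf for _, cf in flows)
    weighted = sum((days - day) / days * cf for day, cf in flows)
    return (emv - bmv - net) / (bmv + weighted)

def mod_dietz_v2(bmv, emv, flows, days):
    """Alternative form: ending value less the unweighted portion of the flows,
    over average invested capital, minus one."""
    weighted = sum((days - day) / days * cf for day, cf in flows)
    unweighted = sum(day / days * cf for day, cf in flows)
    return (emv - unweighted) / (bmv + weighted) - 1

flows = [(10, 5_000), (20, -2_000)]  # a contribution and a withdrawal
print(mod_dietz_v1(100_000, 108_500, flows, 30))  # 0.05357...
print(mod_dietz_v2(100_000, 108_500, flows, 30))  # identical
```

A bit of algebra shows why: the second form's numerator equals EMV less the net flows plus the weighted flows, and once you subtract the denominator (that's what the "-1" does), you're left with the first form's numerator over the same denominator.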

"We calculate IAW the GIPS standards..."

The first thing you may be wondering is what "IAW" stands for; and no, it's not an abbreviation that is being used by twitters or text messagers (at least not that I'm aware of). The military loves abbreviations and acronyms (oh, and they're not always the same thing), and long ago when I was in the army I frequently used IAW to mean "in accordance with." I'm not trying to encourage its use ... just decided to slap it in here for a change.

Okay, now to the topic. The GIPS(R) standards (Global Investment Performance Standards) prohibit firms from stating that their returns are calculated according to GIPS, except when reporting to clients. But what if you're asked "are your returns calculated in accordance with the GIPS standards?" Must you say "sorry, but under penalty of law (or the nearest thing) I refuse to answer!"? Nah. Of course you can answer. But you shouldn't be stating this in your own materials.

What about software vendors? Nothing has been issued regarding this group, but I think they should be considered a "special case." They're selling software to help firms comply with the standards. And consequently, it's imperative that they be able to state what aspects of their software conform to the standards. And so if a vendor has a statement such as "our returns comply with GIPS," in my opinion, it's okay. What isn't okay is for the vendor to state that their software is "GIPS compliant." Only asset managers can comply; software is a tool to help them comply!

p.s., What did I mean that acronyms and abbreviations aren't always the same? Acronyms are abbreviations that can be spoken as a word, for example GIPS, RADAR, and NASCAR. Since not all abbreviations can (e.g., IAW, COB, NLT), not all are acronyms!


p.p.s., I just used two other abbreviations we, in the military, used: COB and NLT. It wasn't unusual to end a memo, for example, with "Your response is due NLT COB Friday," meaning "no later than" "close of business." Just a bit of useless trivia as we end the week!

Thursday, October 21, 2010

But there's no difference in the numbers!!!

In addition to doing GIPS(R) (Global Investment Performance Standards) verifications, we also conduct "non-GIPS" verifications, for firms or individuals who can't comply with the Standards, but still want their numbers reviewed. And occasionally we discover return methods which seem a tad irregular. For example:
  • a calculation that treats all cash flows as if they occur on the first day of the month (even when they don't)
  • a Modified Dietz implementation that only values the portfolio once a year (i.e., that weights the flows across the full year).
You're probably not surprised to learn that we reject these methods, and insist that the client use a more standard method. They are understandably frustrated, however, when they find that the results aren't very different from what they had previously calculated. This can be because their composite has a lot of accounts, because there aren't that many flows that occur, or for some other reason. One client, before fully implementing the change, did a sample test and found the results to be relatively close to what they previously had. This, of course, makes us a bit uncomfortable: do we say "okay, you've proven that your method works, so don't worry about it" or "sorry, your method is still faulty regardless of the comparability of the results, so you have to extend the exercise to the full set of portfolios"?

One could no doubt construct an argument for either approach. We have taken the more conservative one and require the client to fully implement a more appropriate method. To "sign off on" a verification report that used an irregular method would lend credence to the method and serve as an endorsement for it, which would totally conflict with the industry's clear aim to provide the most accurate information possible. We don't insist that clients who don't comply with GIPS adhere to the GIPS rules, but we do expect them to use methods that have been deemed acceptable.
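To see how an irregular method can look "close enough" and still be faulty, here's a hypothetical sketch (my numbers) comparing the day-one assumption with a properly weighted Modified Dietz:

```python
# Hypothetical sketch: a day-one flow assumption vs. proper Modified Dietz.
# Small or infrequent flows keep the error small; large flows make it material.
def mod_dietz(bmv, emv, flow, flow_day, days=30):
    w = (days - flow_day) / days
    return (emv - bmv - flow) / (bmv + w * flow)

def day_one(bmv, emv, flow):
    # Irregular method: every flow treated as if it occurred on day 1
    return (emv - bmv - flow) / (bmv + flow)

bmv = 1_000_000
for flow, emv in ((5_000, 1_030_000), (200_000, 1_230_000)):
    good = mod_dietz(bmv, emv, flow, flow_day=15)
    rough = day_one(bmv, emv, flow)
    print(f"flow={flow:>7}: {good:.4%} vs {rough:.4%}")
# The small flow differs by under a basis point; the large one by roughly 23 bps.
```

The sample test can easily look fine; it's the large-flow months that eventually betray the method.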

Tuesday, October 19, 2010

GIPS & UMAs ... perfect together?

We have had a few clients raise questions of late regarding how UMAs fit within GIPS® (Global Investment Performance Standards). The two major questions: (1) Are these accounts to be included in composites? (2) Are they included in firm assets?

"UMA" stands for Unified Managed Account, and can be viewed as an extension, if you will, of a wrap fee account, in that it’s a separately managed account (as opposed to a pooled account or mutual fund), where the investor is typically participating in a sponsored program that’s offered by broker/dealers and other financial institutions. They typically allow the client access to multiple managers who are providing management of various investment strategies.

UMAs differ from wrap fee accounts in that the managers usually provide the sponsor with their model, and it’s up to the sponsor to execute the trades associated with the model. And unlike wrap fee programs, the investor isn’t a client of the manager; rather, they’re a client of the sponsor and don’t have a direct relationship with the manager.

When these types of accounts (that is, where a manager provided their model to a third party to execute for their clients) first surfaced, the answers to how GIPS fit in were pretty clear: they didn't! That is, if a manager merely provides their model to another manager or a sponsor who is then responsible for executing it, the manager has no way of knowing (a) whether it was executed, (b) whether it was executed properly, or (c) when it was executed. The manager receives a fee (which is often tied to the amount of assets) for their model, but they have no direct oversight of the assets. Therefore, the accounts aren't to be included in composites and the assets aren't part of the firm's assets under management: these are "advisory assets," and are therefore excluded from firm assets.

Of late we’ve seen a graying of the lines occur, where it appears that some programs place the managers into a role where they “may have discretion” over the assets. They again pass their model onto the sponsor, but it’s understood that the sponsor will execute it in a timely manner. In these cases, the UMAs are looking a lot like wrap fee relationships, and one could argue that the accounts are to be in composites and the assets are part of the firm’s AUM.

These programs afford us an opportunity to step back and perhaps simply consider when an account would or would not be required to be in a composite, and whether or not the assets are firm assets. Here are some tests which might help:
  1. Has the client signed an agreement with the manager directly, whereby the manager assumes (full or partial) discretion over the client’s assets, or is the manager formally defined as a “sub-advisor” to the sponsor?
  2. Is the sponsor obligated to carry out the manager’s model directions, including trading and rebalancing?



I originally thought there would be more issues, but believe these two should suffice. If you can answer “yes” to both, then you’d be obligated to include the account in a composite (unless they fail to meet the firm’s discretionary policy, of course) and the assets in the firm’s AUM. Granted, I think it would be difficult to see a situation where you’d answer “yes” to the first and “no” to the second, but that’s not really an issue. The manager must have confidence that their trades are being executed and that they have clear responsibility for the assets.

Can discretion be shared? Yes, it can, without obviating the manager’s responsibilities under GIPS.

Recall that in the world of wrap fee, the compliant firm can view the sponsor as the “client,” and the same would hold here, too. This simply means that the additional workload of shadowing client account assets may not be necessary. If the manager relies on returns and other data (e.g., market values) that come from the sponsor, they must have confidence that they meet the GIPS requirements; otherwise, they will have to maintain the necessary records themselves.

Since I’m unaware of anything being written on this topic before, what I present here is simply my interpretation of the standards and how they would apply to UMA accounts. By all means I welcome your thoughts on this topic. Perhaps this will result in a dialogue, which I would welcome.

If your UMA relationships fail these tests, can you refer to these relationships in your marketing? Yes, you can! Just don't include the accounts in composites and the assets in your AUM. You can include a separate asset number  (e.g., "assets under advisement") and include a narrative that describes the extent of this business, in order to showcase how your models are used by others.

Some GIPS compliant managers want to include these accounts because it increases their assets under management, while others would prefer not to because it means more work. It really shouldn't be a matter of choice; there should be clear tests that determine the appropriate treatment of UMAs. And of course, a manager may have some UMA relationships which would be included and others which wouldn't. The manager should document these relationships so that the rationale behind their decisions is clear.

Thursday, October 14, 2010

The Poseidon Effect

During our Fundamentals of Investment Performance class, when we get to the discussion of time- versus money-weighting, a question arises as to why firms continue to exclusively employ time-weighting, when it's quite clear that there's a major role for money-weighting. I typically use a scene from the 1972 movie, The Poseidon Adventure, as a metaphor for one possible reason.

The scene occurs shortly after the boat has done a "180," and the major characters (Gene Hackman, Ernest Borgnine, Red Buttons, Shelley Winters, etc.) are gathered together along a corridor. Hackman (Rev. Frank Scott) tells them they need to go in the direction that is the complete opposite of where everyone else is running. Borgnine's character (Mike Rogo) challenges him, asking what makes him so sure since everyone else is rushing the other way. Hackman insists that the others are heading to their deaths because that direction won't provide a way out.

Now, if you use the wrong return formula surely death won't follow. However, our natural tendency to want to "go with the crowd" can cause us to miss out on doing things a better way. Avoid the "Poseidon Effect" and take advantage of what money-weighting has to offer.

Wednesday, October 13, 2010

Dispersion relative to what, exactly?

A client recently asked us a question regarding GIPS(R) (Global Investment Performance Standards) which we have heard in the past, and so I decided to post it here and offer a response.

Should the dispersion which is shown on a GIPS presentation be based on net or gross of fee returns?

Excellent question, I believe. The Standards, to my knowledge, don't address this, nor have I been able to find any Q&As on it. And so I would say that it's "open to interpretation." In reality, it shouldn't matter much: assuming the fee percentage is relatively consistent across the period, we wouldn't expect to see much of a difference between the dispersion of gross- and net-of-fee returns. The differences would be de minimis.
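A quick sketch (invented returns, a uniform 1% fee) shows what I mean:

```python
# Sketch: dispersion of gross vs. net returns with a uniform 1% annual fee.
from statistics import pstdev

gross = [0.082, 0.075, 0.091, 0.068, 0.080]      # invented account returns
net = [(1 + r) / (1 + 0.01) - 1 for r in gross]  # geometric fee adjustment

print(f"gross dispersion: {pstdev(gross):.4%}")  # about 0.76%
print(f"net dispersion:   {pstdev(net):.4%}")    # nearly identical
```

A uniform fee simply scales (or shifts) everyone's return by roughly the same amount, so the spread between accounts barely moves.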

Must you disclose which returns the dispersion is measured across? There is no requirement to do this, though it would probably be advisable. Since you probably have a statement in your disclosures which reads something like "Dispersion is calculated using standard deviation," amending it to "Dispersion of gross-of-fee returns is calculated ..." wouldn't be difficult.

What would I recommend the dispersion be relative to? Gross-of-fee returns. I see little benefit in net-of-fee returns, as they don't provide the same value to the reader as gross returns do: most firms have a mix of fees in place, so a net return is difficult to decipher, while one can always take the gross return and adjust it by the fee they expect to pay to arrive at the approximate net return they would have had. But again, it's up to you to decide.

Tuesday, October 12, 2010

It's comment time again...

There are three proposed revisions to GIPS(R) (Global Investment Performance Standards) guidance statements that you may want to comment on:
  • Private equity
  • Real estate
  • Verification.
Since compliant firms are obligated to comply with the guidance statements, it's important that you're comfortable with the ones that apply to you. You can comment anonymously if you'd like.

Comments must be submitted by November 25, 2010.

Monday, October 11, 2010

Is the S&P 500 the right benchmark to show?

We conduct a lot of GIPS(R) (Global Investment Performance Standards) verifications, and often see managers use the S&P 500 as the benchmark for their composite. But this is often the wrong one to use.

For GIPS purposes the index should tie into the composite's strategy. The Standards' glossary defines benchmark as "an independent rate of return (or hurdle rate) forming an objective test of the effective implementation of an investment strategy." Would the S&P 500 serve as such a test for a US growth equity manager? Hardly, since half the index consists of value stocks which presumably wouldn't be on the manager's radar. The benchmark should be a way to judge how well the manager did, but a broad index doesn't properly serve this purpose.

Back in January 2000 a large mutual fund manager ran full page advertisements reporting how several of their funds had outperformed the S&P 500, and they provided the proof right there in the ad! However, the ad included funds that invested in small cap, European stocks, emerging markets, etc.; i.e., strategies which clearly didn't align with the S&P 500. While the ad seemed to imply that their funds had done well because they beat the S&P 500, in reality the sectors they were invested in may have beaten this index, and the mere fact that these funds were invested there caused them to also have higher returns. The test of the investment strategy would have been to compare the funds with individual indexes that aligned with each fund's individual strategy.

But does this mean that it would be wrong to show the S&P 500 in a composite, if the strategy doesn't align with this benchmark? Not necessarily. If there are no benchmarks that match the strategy, you may want to show a variety of indexes, including the S&P 500, to provide the reader with some comparative information, but you'd want to include an appropriate disclosure explaining why you're doing this. The manager isn't managing against the S&P 500, and therefore any out-performance can't be attributed to decisions to "beat" this benchmark. The manager may simply be invested in securities or sectors that aren't in the S&P 500, and by virtue of their idiosyncratic performance the manager is meeting with success (or failure).

Even if you do have a benchmark that matches your strategy, many managers still want to show the S&P 500 (along with the strategy benchmark) for comparison purposes: not to say "hey, we beat the S&P 500" but simply because this index is viewed by many as "the market." But if you do this, you should explain why you've included this index to avoid any confusion.

Many clients want to see the S&P 500 on their reports as well as other broad market indexes, for comparison purposes, not as a "test" of the manager's success at implementing their strategy, since this index fails to do this. This is perfectly fine, too.

p.s., I recall meeting with a growth manager a few years ago who explained that he had previously used the S&P 500 as his index, but lately it hadn't worked out so well. Could it be that during the '90s, when growth was "all the rage," the value portion of the S&P 500 dragged its performance down, while the manager carried no such encumbrance; but when the "tables were turned" and value was beating growth, the S&P 500 outperformed the manager, and therefore he no longer looked so good? The S&P 500 was never the right index for this manager.

Friday, October 8, 2010

Why don't attribution effects link?

I was teaching an attribution class recently, and a student asked for a simple explanation as to why arithmetic attribution effects don't link (geometric effects do, which is one of geometric attribution's advantages over arithmetic). He and his clients recognized that the effects don't link, but he still wanted a pithy response to offer when clients pose the question. Simply saying "because they don't" didn't seem to work.

Well, let's consider two other items before we answer this question: returns and excess returns. Returns link, right? And why is this? Because returns compound. That is, returns build upon the performance of prior periods. If I start with $10,000 and have a 10% return in January, $1,000 is added to my value and the portfolio ends the month at $11,000. Then if I have another 10% return in February, I don't add another $1,000, but another $1,100, which is 10% on the initial $10,000 plus 10% on the value added ($1,000) in January; i.e., $1,000 + $100. Returns compound, and therefore we link from month to month: arithmetic linking (i.e., simply adding returns together) doesn't take the compounding effect into consideration; thus we use geometric linking.

Excess returns (i.e., portfolio return minus benchmark return) don't link. Why not? Because excess returns themselves don't compound. And while the portfolio and benchmark returns compound, they may compound in different fashions depending on their individual results. But excess returns don't compound.
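A two-month sketch (made-up returns) makes the point:

```python
# Sketch: returns link geometrically, but arithmetic excess returns don't.
port = [0.10, 0.10]   # portfolio returns for two months
bench = [0.06, 0.06]  # benchmark returns

def link(returns):
    total = 1.0
    for r in returns:
        total *= 1 + r  # compounding: each month builds on the last
    return total - 1

linked_excess = link(port) - link(bench)                 # 21.00% - 12.36% = 8.64%
summed_excess = sum(p - b for p, b in zip(port, bench))  # 4% + 4% = 8.00%
print(linked_excess, summed_excess)  # they disagree: excess returns don't compound
```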

Attribution effects reconcile to excess returns, right? And if excess returns don't compound, how can attribution effects? But we want to be able to reconcile to the linked-period excess return, which is based on taking the difference between the linked-period returns (what a mouthful!). We accomplish this through a smoothing technique, such as the ones developed by David Cariño, Jose Menchero, and Andrew Frongello (and no, you don't have to have an "o" at the end of your name to develop such a linking method, but it can't hurt!). The French group, GRAP, also developed a method to link attribution effects.
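As one illustration, here's a sketch of the Cariño-style logarithmic approach (the numbers are invented; each period's effects already sum to that period's excess return):

```python
# Sketch of Carino-style logarithmic smoothing (invented numbers).
# Each period's effects are scaled by k_t / k so the scaled effects sum
# exactly to the multi-period arithmetic excess return.
from math import log

port = [0.10, -0.02]
bench = [0.06, 0.01]
effects = [[0.025, 0.015], [-0.020, -0.010]]  # e.g., allocation & selection per period

def k(r, b):
    return (log(1 + r) - log(1 + b)) / (r - b)

R = B = 1.0
for r, b in zip(port, bench):
    R *= 1 + r
    B *= 1 + b
R, B = R - 1, B - 1  # cumulative portfolio and benchmark returns

k_full = k(R, B)
linked = sum(k(r, b) / k_full * e
             for (r, b), period in zip(zip(port, bench), effects)
             for e in period)

print(linked, R - B)  # reconciles (up to floating-point noise)
```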

To summarize, attribution reconciles to excess returns. Unlike the returns themselves, arithmetically derived excess returns don't compound; therefore arithmetic attribution effects don't compound either. In the case of geometric attribution, the geometric excess return does compound, so the attribution effects compound, too.

Hopefully this makes sense, though I'm open to your thoughts.

Thursday, October 7, 2010

What does AUDITED data mean?

I have become increasingly aware of how the term "audited" is often used. Several of our clients refer to their performance returns as "audited" or "unaudited." But are they actually "audited"? In most cases, no. Clearly they're trying to communicate that the returns have been reviewed internally and reconciled with the custodial records (i.e., the "official books and records"). One of our clients mentioned that they use the term because that's what their custodian uses. Regardless, confusion and misinterpretation can arise from the use of such a term.

Perhaps "reconciled" doesn't convey the same meaning that "audited" does, but it's probably more accurate. But if you want to continue to use "audited, then it probably would be a good idea to explain what you mean by it, so that there is no misunderstanding. Especially in light of the Bernie Madoff scandal.

Tuesday, October 5, 2010

It's survey time again...



We have launched a survey that addresses the increasingly important topic of risk measurement. And while we've surveyed the industry many times on the presentation standards, attribution, performance measurement technology, and the performance measurement professional, this is our first survey that deals exclusively with risk. We've teamed up with Leslie Rahl and Capital Market Risk Advisors. In addition, several vendors have signed on to cosponsor the research project with us.

Risk is an especially important topic right now, and insights into how firms measure and manage it will no doubt be of interest to many. All participants will receive a complimentary copy of the results. Please join in!

The survey can be reached through our firm's website or by going directly to the survey site.

p.s., Note that this is the first time we've done a survey using this approach (online entry), and so we also welcome your comments on it. Thanks!

p.p.s., Confidentiality of participants will be maintained, though you can participate anonymously if you'd like.

Mutual funds and currency return differences

During last week's GIPS(R) (Global Investment Performance Standards) conference in San Francisco, I had a conversation with a client who posed an interesting question. They have a US Equity mutual fund that until recently only had U.S. investors. However, a European client made a significant investment in Euros (which were subsequently converted to US Dollars and placed into the fund). To accommodate this investor a separate class was created, where returns will be reported in Euros. In addition, the client requested that a hedge be put on their investment, so a currency forward contract was purchased (to sell US dollars for Euros). The client had two questions: (1) do they place the Euro currency return into the same composite as the portion of the fund that has returns in US dollars (i.e., have accounts with Euro returns mixed with accounts with US returns), and (2) do they include the hedge in the composite?

First, the fact that you might report performance to a client in their (base) currency has no effect on how you include their account in a composite. How you report to a client is not necessarily the same as how you prepare your GIPS presentations. For example, a client may wish to see a benchmark which is different from the one you use in the composite; this is perfectly fine. Second, all accounts in a composite must have their returns in the same currency: you can't mix returns of different currencies in the same composite. You can always convert a return from one currency into another, so this isn't an issue. In this case, the entire fund is the account. Third, since the hedge was placed at the client's direction, the firm could, through their documented policies and procedures, consider this a non-discretionary action.
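For what it's worth, converting the return is simple compounding; a quick sketch with invented numbers:

```python
# Sketch: converting a USD return to a EUR return (invented numbers).
# A EUR-based value is just the USD value times the EUR-per-USD rate, so the
# EUR return compounds the USD return with the currency return.
r_usd = 0.05                 # fund return in US dollars
s_begin, s_end = 0.80, 0.76  # EUR per USD at the start and end of the period
r_fx = s_end / s_begin - 1   # the dollar weakened against the euro: -5%

r_eur = (1 + r_usd) * (1 + r_fx) - 1
print(f"{r_eur:.2%}")  # -0.25%: the dollar's decline nearly wiped out the gain
```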

By the way, in yesterday's Wall Street Journal there was an interesting article on currencies, especially in relation to mutual funds. We'll discuss this further in a future posting. In the meantime, I'm sure you'll enjoy the article.

Monday, October 4, 2010

Benchmarks: manage or compare against?

I often get into discussions with clients and students about how benchmarks should be treated. For example, a few years ago I spoke with a hedge fund manager and suggested that he didn't manage against any benchmarks; he quickly corrected me saying he did. But then I asked if he managed against them or compared his results against them, and he acknowledged it was the latter.

Many hedge funds pick several benchmarks to compare their performance with; for example, the DJIA, CPI, Barclay's Gov't Agg, S&P 500; but do they manage against any of these? No! Hedge fund managers are typically absolute managers who don't manage against any benchmark.

But even when you do manage against a benchmark, it's important that the benchmark align with your strategy. A growth manager, for example, shouldn't manage against the S&P 500, since the index's entire value half falls outside their strategy.

Benchmarks are a challenging topic, which we'll explore further.

Looking for the 2010 edition of the GIPS Handbook?

Well, you'll have a bit longer of a wait than I had anticipated. I thought the book would be available by year-end, but it now appears that it won't be ready until late 2011 and perhaps not until early 2012.

No doubt the massive changes to the GIPS(R) standards (Global Investment Performance Standards) have required a very detailed and time-consuming review of this important and valuable text. Anyone who is moving to GIPS 2010 will want a copy, but will have to rely upon the 2006 version of the guide, the new standards, and the updated guidance statements as they're released.

Friday, October 1, 2010

Are the days of remote verifications soon to be a thing of the past?

At yesterday's GIPS(R) (Global Investment Performance Standards) conference in San Francisco, someone asked a question about the legitimacy or appropriateness of GIPS verifiers who don't bother to show up at their clients' offices to conduct verifications. I wasn't in the room at the time and didn't ask the question (though it was a great one), but understand that the response, though not as forceful perhaps as I would have liked, did basically state that verifiers should be going to their clients' offices.

I equate remote verifications to the idea of getting a physical remotely. Can you imagine a doctor who told you that you didn't have to come in, but could simply respond to his or her questions over the phone? While the cost might be lower and the procedure more convenient, do you really think that the results would be beneficial?

The GIPS Verifiers/Practitioners subcommittee needs to memorialize this position so that we can put a stop to this highly inappropriate practice.