Friday, March 30, 2012

John Simpson speaks at First Rate annual conference

This week, my friend and colleague, John D. Simpson, CIPM, spoke at First Rate's annual user conference.

As I understand it, this event had a record attendance. And while I'd like to think that it's because John was there, I know there was much more to it (for instance, Steve Campisi, CFA spoke, too!). John's topic was "Trends in Performance Measurement."

We were pleased to have had the opportunity to once again participate in this program. We congratulate our friends at First Rate, and thank them for allowing us to join in.

John and I have spoken at numerous vendor events. Anyone wishing for John, Jed Schneider, CIPM, FRM, or me to speak should contact Jaime Puerschner at 732-873-5700.

Thursday, March 29, 2012

Code of Conduct for GIPS Service Providers

At the recent GIPS(R) (Global Investment Performance Standards) Executive Committee meeting in Brussels, Belgium, the EC discussed the possibility of creating a "Code of Conduct" for GIPS service providers. I think this is an excellent idea, though at this point I have zero knowledge of what is actually being considered.

In Tuesday's post, I mentioned how The Spaulding Group will not take on a verification client who we believe has come by their historical performance records through some improper means, and recommended that other verifiers adopt this policy, too. I would think that this is an example of the code of conduct one would expect from service providers.

When the Performance Measurement Forum set out to develop a certification program for performance measurement professionals several years ago (which contributed to the creation of the CIPM program), ethics wasn't a section we considered. The CIPM program wisely has included it, and more and more we can see how important a topic ethics is for our industry. It seems that almost daily we learn of infractions. Granted, the political world may be outpacing our industry in this regard, but it seems as if some would like to overtake them.

Performance measurement professionals can serve as key gatekeepers against the delivery of fraudulent information. And while we haven't yet heard of any PMPs who have allowed the presentation of information they knew was wrong, these individuals can still find themselves being pressured to do something they know would be wrong.

Verifiers can serve as yet another group to help halt the spread of fictitious information, by holding firm when we learn of something unethical and simply refusing to be a part of it. My feeling is that if a firm is willing to act improperly to obtain historical records, they will likely do the same in other situations that arise. It's better to avoid even going down the path with someone who may not behave as we think they should.

Wednesday, March 28, 2012

Turning lemons into lemonade


We just discovered that we had a printing error with the second edition of my Handbook of Investment Performance: we left off part of the bibliography.

Now, you might think this isn't a big deal, but for a firm that prides itself on the highest quality in all we do, we are quite upset, disappointed, and embarrassed. And so, if you purchased a copy, even though you may not have noticed the error yourself, just send us a note and we'll send you a corrected one (yes, we've incurred the expense of another printing to correct the mistake). You can keep the copy you have; we'll send you a replacement. All you need to do is ask for it.

As you might imagine, we're now stuck with a lot of books with this mistake. And so, we offer you this opportunity: you can get a copy of the version with the missing bibliography pages for a ridiculously low $20 (plus the cost of shipping), or you can purchase the now corrected and complete edition for $75. Note that the discounted version is complete except for the missing bibliography pages, which we'll include separately, so you'll still know what these references are. And so, our mistake can be an opportunity for you to obtain a copy of my book at a HUGE discount. Just let us know if you'd like a copy (or more; perhaps you'll want to get copies for your entire team at this price), and we'll send it (them) out!


Tuesday, March 27, 2012

Learning from USMA

I served in the U.S. Army (Field Artillery branch) for nearly five years, and spent 39 months with the 25th Infantry Division in Hawaii (tough duty, but someone had to do it). During that time I worked with several West Point graduates (I obtained my commission through ROTC), and recall learning the "cadet honor code":

A cadet will not lie, cheat, steal,
or tolerate those who do

There's a great lesson here, is there not? And a great example for us all to follow.

An article in yesterday's WSJ ("Weitz Firm Got Rival's Database, Suit Says," by Dionne Searcey) spoke of a lawsuit filed by a former employee of Weitz & Luxenberg, Joseph C. Maher, who claimed Weitz had "a cache of files from a competitor [Waters & Kraus] that allegedly could be used to earn millions of dollars." These records were supposedly brought to the firm by a former Waters & Kraus employee who had joined Weitz. We have no way to know at this time where the truth lies, but if Maher's allegations are found to be true, why would a law firm hire someone who stole records from their prior firm? (Please, no lawyer jokes.)

We had a conversation recently with someone about the GIPS® (Global Investment Performance Standards) portability rules, which require:
  1. Substantially all of the investment decision makers to be employed by the new or acquiring firm (e.g., research department staff, portfolio managers, and other relevant staff);
  2. The decision-making process to remain substantially intact and independent within the new or acquiring firm; and
  3. The new or acquiring firm to have records that document and support the past performance.
 (See ¶ I.5.A.8, Global Investment Performance Standards, 2010.)

In most cases, meeting the first two requirements is a lot easier than meeting the third. And so, what is a person to do to get the records, especially if they are leaving in a less than ideal way?

Well, if they are a CFA charterholder, stealing the records would be considered an ethics violation; but what if they aren't a charterholder? Can they steal them? Of course they can; who's to stop them (unless they get caught or sued, of course)? But would that not still constitute an ethics problem?

Last year The Spaulding Group adopted Standards of Practice, based on the CFA Institute's, and appointed both a Chief Ethics Officer (John Simpson, CIPM) and Assistant (Jed Schneider, CIPM, FRM). And we made the decision that we will not accept a verification client if we suspect they obtained their historical records through some improper means. Yes, it is tempting to copy records in order to achieve compliance; but such action says something about the character of the individual(s), and we would prefer not to include them among our clients. We encourage all GIPS verifiers to adopt a similar rule. As USMA (United States Military Academy) proclaims, "...or tolerate those who do." And we won't.

Friday, March 23, 2012

Have you been a victim of "planning fallacy"?

In his recent book, Thinking, Fast and Slow, Economics Nobel laureate Daniel Kahneman speaks of the "planning fallacy," a term he and his former collaborator, Amos Tversky, coined "to describe plans and forecasts that are unrealistically close to best-case scenarios [and] could be improved by consulting the statistics of similar cases." In "forecasting the outcomes of risky projects, executives too easily fall victim to the planning fallacy. In its grip, they make decisions based on delusional optimism rather than on a rational weighting of gains, losses, and probabilities. They overestimate benefits and underestimate costs. They spin scenarios of success while overlooking the potential for mistakes and miscalculations. As a result, they pursue initiatives that are unlikely to come in on budget or on time or to deliver the expected returns—or even to be completed." Sound familiar?

I first encountered this situation roughly 35 years ago, when designing a system for a client. My estimates said it would take two years, but the client said they needed it done in less than one. I said it couldn't be. Shortly thereafter I left [on my own, I might add] before the project commenced, and never learned of its outcome, though I am confident that even two years was an optimistic guess.

A few years back, a client asked us for an estimate to conduct a GIPS(R) (Global Investment Performance Standards) verification for them. We knew the client well, and what their status was vis-a-vis the Standards, and were surprised they were asking for a proposal, given the huge gap they had before they'd be ready to even consider compliance. We lost to a firm that gave them a very low price, and confidence that they'd be ready to be verified in three months. This firm won the assignment partly because of pricing, but perhaps more because of their confidence in getting the client into compliance in but a few months' time [since that time, bringing a client into compliance and then verifying them has been clearly deemed a violation of verifier independence]. Their optimism was impressive, but failed to materialize. Too often scenarios like this play out: the consultant promises to complete a project earlier than is realistic and wins the assignment. This is not to suggest that the competing firm necessarily knows they're providing an unrealistic estimate, but their optimism still results in a win, while their competitor offered a more realistic plan.

I have, on occasion, been a victim of the planning fallacy [who hasn't?]. I estimate what is needed to complete a project, and believe [with confidence] I've done an effective job in identifying risks, only to fail to see the multitude of events that might arise that introduce delays.

More than 40 years ago, Fred Brooks coined the term "mythical man-month" (I guess today, an enlightened and politically correct person would make it "person-month"), which addresses the false belief that adding resources will speed up a project. The idea that placing nine women on the job of having a baby should result in one in only one month (since it takes one woman nine months to produce one) serves as a good metaphor for the concept.

Project estimates are routinely missed: deadlines exceeded, costs surpassed. Referring to a "base case" of past performance as a guide is helpful in developing more realistic estimates, be they for the time to achieve compliance or to build a new system. Simply being aware of the "planning fallacy" can be helpful when beginning projects, improving both the estimates and the likelihood of success.

Thursday, March 22, 2012

Performance examinations: when should you have them done (and when absolutely not)?

By now, if you're a regular (or even infrequent) reader of this blog and/or The Spaulding Group's newsletter, you know of my dislike for GIPS(R) (Global Investment Performance Standards) performance examinations. I have commented at length as to how compliance with the Standards and having annual verifications done are investments, but that in most cases, examinations are an expense or cost that should be avoided. But are there times when they should be done?

Yes, of course!
  • If the firm believes they have value! To put it simply, if the firm disagrees with me and feels that this exercise provides them with benefits, then by all means, have them conducted.
  • If a prospect virtually mandates that the composite(s) that align with their strategy have them done, and you feel that by having them conducted you'll stand a better chance of winning the business.
  • If you find that for your primary composites the market fairly often inquires into whether or not examinations are done.
We've told our verification clients that we'll come in immediately, even over a weekend, if they require an examination to be performed (no one has yet taken us up on this offer). Until that time, most of our clients avoid the expense.

Are there times when they should absolutely NOT be done? Well, one particular case comes to mind:
  • For non-marketed composites.
Note that the GIPS standards do not speak of "marketed" and "non-marketed" composites, but the industry surely understands the concept. We see absolutely no need to have examinations performed for non-marketed composites. By sheer virtue of their status, any possible benefits are nonexistent, are they not?

We know that some firms do have them done, but don't understand why. If you do, please let me know the reason(s) why. If you're a verifier and conduct them, chime in, too! And, if you have them done but don't know why, ask your verifier and tell me what they report, as I am curious as to the benefits they provide you for the costs involved. Thanks!

Tuesday, March 20, 2012

The value of subjective judgment

I must confess that I am enjoying Daniel Kahneman's Thinking, Fast and Slow quite a bit; so much so that not only am I listening to it (via my Audible.com account) but am referencing it, too (via my Kindle download!). And so, this affords me the opportunity to capture passages that I find interesting. Here's one example:

“subjective confidence
is a poor index
of the accuracy of a judgment.”

This relates to the idea that we too often think that our decisions, based on our expertise, should rule the day, without bothering with any objective analysis. Too often pundits, be they of the performance measurement, political, sports, or some other variety, tend to speak as if their opinions, which is all that they really are, are somehow factual. I am no doubt guilty of this myself, thinking that my judgment is sufficient to know what is true and factual. And while I'd like to think that more often than not I am correct, there are also times when I err.

When hearing someone pontificate about a subject, it is wise to discern whether you are being presented with opinion or with fact derived from objective analysis. This holds true in all walks of life, including performance and risk measurement.

Friday, March 16, 2012

Simple Question: What is a Cumulative Return?

I'm conducting a software certification for a client, and reviewing their documentation, which includes a statement that begins, "If you have a cumulative return..." However, they fail to define this term. And so, I will offer my thoughts. But first, I decided to check out how others define it:
  • Investopedia: The aggregate amount that an investment has gained or lost over time, independent of the period of time involved.
  • Russell: A compounded rate of return covering more than one year.
  • eHow.com: how much money [investors] are making on the principal amount they invested
  • Center for Research in Security Prices (CRSP): a compounded return from a fixed starting point
I don't particularly like any of these definitions.
  • "Aggregate amount"? We're talking percentages!
  • Why limit it to "more than one year"? Can't we have a six-month cumulative return?
  • "How much money"? We're talking returns!
  • Sounds very technical ("from a fixed starting point"; as opposed to a nonfixed starting point?)
"Cumulative" has the same root as "accumulate." If we turn to Dictionary.com we find the following for "cumulative":
  1. Increasing or growing by accumulation or successive additions: the cumulative effect of one rejection after another.
  2. Formed by or resulting from accumulation or the addition of successive parts or elements.
We generally contrast cumulative and annualized returns. And so, I would say that "a cumulative return is the nonannualized return for any given period." Of course, we don't annualize for periods less than a year, but that doesn't prohibit us from having a six-month cumulative return, does it?
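To make the definition concrete, here's a quick Python sketch (the monthly returns and variable names are mine, purely for illustration) that geometrically links sub-period returns into a cumulative, non-annualized return:

    # A cumulative return is the non-annualized, geometrically linked
    # return over any period; here, six hypothetical months.
    monthly_returns = [0.012, -0.004, 0.021, 0.008, -0.015, 0.010]

    growth = 1.0
    for r in monthly_returns:
        growth *= (1 + r)        # link each sub-period's growth factor
    cumulative = growth - 1      # back out the starting value

    print(f"Six-month cumulative return: {cumulative:.2%}")

Annualizing would mean raising the linked growth factor to the power of 12 over the number of months; the whole point of "cumulative" is that we skip that step.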

Thoughts? Chime in!

Thursday, March 15, 2012

Simplifying a data problem

We have a GIPS(R) (Global Investment Performance Standards) verification client who uses Advent's Axys portfolio accounting system. Most of their clients are at Schwab, and they have a direct feed from Schwab to Advent. However, they had a couple of accounts elsewhere, and hadn't included them in their composites, because they hadn't added them to Advent. This was a problem that had to be addressed.

They reached out to Advent, and were apparently told that they would have to add everything for each account for each time period, meaning market values and transactions. This would be a monumental task for our client. But, life doesn't have to be so challenging. Before you continue to read, reflect on how you would handle this. [pause]

For GIPS, we don't care about subportfolio activity; just market values and external cash flows: that's it! But how can we get this onto Advent?

SIMPLE!!!

For each account, assign a unique dummy (fictitious) security, with the account owning just one share. The security's starting value is ...

[drum roll]

...the starting value of the portfolio! For example, if the portfolio begins with $513,078.22, then the security is worth $513,078.22, and they have one share, meaning their market value is $513,078.22.

What happens when a cash flow occurs? Enter the flow on the date it occurs.

Subsequent months, whatever the broker/custodian tells you is the market value becomes ...

...the price for the security! And so, if the next month the portfolio is worth $538,135.78, then this is the price of the security. And since the portfolio owns only one share, that's what they're worth. The only caveat: since they may have brought cash in, the price of the security has to be the market value minus the cash amount. Likewise, if there is a cash outflow, they will have to adjust the security's price so that cash is handled properly.

[i.e., the market value from the statement must equal the price of the fictitious stock plus the value of cash, meaning (algebraically derived) the share price equals the statement's market value minus the cash value!]
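If it helps, here's a tiny Python sketch of that algebra (the function name and the $10,000 cash figure are mine, purely for illustration):

    # The portfolio holds exactly one share of the fictitious security,
    # plus any cash, so:
    #   statement market value = share price + cash
    #   => share price = statement market value - cash

    def dummy_security_price(statement_value, cash_balance=0.0):
        """Month-end price for the one-share fictitious security."""
        return statement_value - cash_balance

    # Inception: the portfolio is worth $513,078.22, all in the security.
    print(dummy_security_price(513_078.22))             # 513078.22

    # Next month: the statement shows $538,135.78; if $10,000 of that is
    # a contribution sitting in cash (the flow itself is entered on the
    # date it occurred), the security's price is reduced accordingly.
    print(dummy_security_price(538_135.78, 10_000.00))  # 528135.78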

Two issues remain!

(1) As of January 1, 2010, GIPS-compliant firms must revalue their portfolios for large external cash flows, even those firms who use the aggregate method to derive composite returns (which Advent uses), even though this method doesn't use the underlying portfolio returns. So what must they do? IF they discover that large flows occurred, then they would have to revalue the portfolio on those days, and consequently set the fictitious security's price to this value (plus or minus the cash flow amount).

(2) ALSO, the "large cash flow rule" applies to the composite, too, meaning that if the composite has a large flow, the entire composite is revalued. HOWEVER, given the size of their composite, the likelihood of it (the composite) experiencing a large flow is infinitesimal.

Make sense?
Can you think of a better way or a flaw in my method? Let me know!

Wednesday, March 14, 2012

It's pi day!

I thank my wife for reminding me that today is "pi day." It should not go without notice.

"Pi day"???

Yes, you remember pi, right?

Recall that pi is a constant: the ratio of any Euclidean circle's circumference to its diameter. Its value is approximately 3.14.

Thus, today, March 14, is pi day! (3/14). Get it? Celebrate! Eat some pie!

p.s., I had a geometry teacher in high school whom I absolutely adored; he really loved math. He referred to a girls' sorority: "eta bita pi."
Okay, maybe it's not that funny, but it was back then! 

Justifying claims and beliefs

I began listening to Thinking, Fast and Slow by Daniel Kahneman this week, and am finding it to be both interesting and motivating. Daniel Kahneman is a Nobel Prize winner. In this book he discusses various studies that he and his colleagues conducted.

It occurred to me that in our field of performance measurement, unlike many other disciplines, it is quite common for folks to make statements as if they're facts, without any supporting research to back them up. What this amounts to is just opinions, which may or may not hold any truth. The speaker or writer should therefore qualify their statements with something like "in my opinion" or "it's my belief," so as not to mislead the listeners/readers. Don't get me wrong: I have no problem with people having opinions. Heck, I've even been known to have them once in a while. But we must distinguish between opinion and fact; between conjecture and something that's been proven.

Take transaction- versus holdings-based attribution. Until the research I began a year ago, I was unaware of any objective analysis that had been done to justify the claim that one method is superior to the other. (By the way, I will provide additional insights on my research at this year's PMAR conferences.)

Someone recently sent me an email that included the following statement:

"You often criticize without substance behind your arguments and
without offering and constructive alternatives."

I must confess that I was neither offended nor upset by this statement; rather, I was perplexed, surprised, and a bit befuddled. The statement is both ironic and invalid. First, the irony is that the fellow who sent it made a claim himself without a single example to back it up! Second, I can cite numerous examples to the contrary. A couple of recent ones: (1) in my criticism of the aggregate method, I went to great lengths to provide several examples of how it fails to produce a valid return, pointed out that it conflicts with the definition of "composite return," and offered preferred methods; (2) in the case of my objection to asset-weighted returns, I provided examples and showed the clear benefit of equal weighting.

Nevertheless, the statement raises a point that all of us who make claims or voice strong views should be mindful of: the need for objective analysis and a recommended alternative.

Can you imagine a mathematician stating that "there are an infinite number of twin primes" without offering a proof? If they put "I believe" in front of the claim, that's fine, but to make the statement as if it's fact would be laughable.

Introducing some discipline into our field would, I believe, be an improvement. Back to Kahneman's book: he's motivated me to begin some new research, which I hope to undertake shortly.

Tuesday, March 13, 2012

It pays to subscribe to Dictionary.com's daily word

One of my favorite websites is www.Dictionary.com. I reference it frequently, when I come upon a word I'm unfamiliar with, or want to verify the meaning of a word I plan to use. I also subscribe to their "Word of the Day," which is usually a word I've never seen. A few weeks back the word was filiopietistic (fil-ee-oh-pahy-i-TIS-tik). It's an adjective "pertaining to reverence of forebears or tradition, especially if carried to excess." You may already be seeing where I'm going here.

Excessive reverence for tradition.

As in ...
  1. Our devotion to, and love affair with, time-weighting
  2. Our need to use the (flawed) aggregate method to derive asset-weighted composite returns
  3. Our fixation on asset-weighted returns, rather than the much more meaningful equal-weighted variety.
Oh, well.

There's a saying you may be familiar with:

"you can't teach an old dog new tricks."

Well, I'm 61, and I'm open to change and new tricks. Guys close to half my age refuse to budge.

And, to quote another saying, go figure.

Monday, March 12, 2012

What is it about numbers that end with 0 or 5?

I've commented in the past about anniversaries, and how we tend to give greater emphasis to ones that end with the number five or zero. My wife and I will celebrate our 40th wedding anniversary this November (we were very young when we wed, as I can't be THAT old!), and this will therefore be a special occasion. The Journal of Performance Measurement was 15 years old last year, and we highlighted that publishing year with a specially designed logo for each issue. We held the 50th meeting of The Performance Measurement Forum last year, and celebrated with a commemorative photo album. And so on.

This May will see the 10th annual Performance Measurement, Attribution & Risk conference. And so, we are trying to give this occasion greater attention, too! Our event logos have been kept pretty static since we first began the program (other than altering the dates), but we wanted something extra special. And why? Well, we, like just about everyone else, feel that a 10th anniversary counts more than a 9th (or, for that matter, an 11th!).

You have to credit Disney: they seem to have introduced their theme parks in such a way that every year brings a similar celebration for at least one of the parks. We are all guilty of giving greater emphasis to the 5th, 10th, 15th, 20th, 25th, etc., anniversary. Of course, this all stems from our use of the decimal number system; if we used a hexadecimal system, then the 8th, 16th, 32nd, etc., anniversaries would probably garner more attention.

I love numbers (thus my love of mathematics in general), and find much of this fascinating. As for PMAR X, we hope you can join us; we expect record attendance. And although PMAR Europe hasn't yet hit such a notable anniversary (the third annual is this June in London), it, too, is expected to be a grand event. PMAR is a place where remarkable things happen, and we're sure you'll want to be a part of it. Please contact Patrick Fowler with any questions you may have about these events.

Friday, March 9, 2012

Writing reports

I think Susan Weiner's blog is a must-visit for anyone who writes (and who doesn't?).

In a recent post she discusses a new book, Reader Friendly Reports, by Carter A. Daniel. I have been writing reports on a regular basis for over 40 years. Some are short (a few pages) while others can be quite long (in excess of 100 pages). And after doing all this writing, I suspect that I'm pretty good at it. But, in reality, I can probably do a better job. And so, I just ordered the book, though I ordered the Kindle version, and so saved a bit on the price. But even the printed version price is quite reasonable ($20).

I'll let you know what I think, but suspect it's pretty good.

Thursday, March 8, 2012

Is performance attribution incomplete?

On a recent drive to and from a GIPS(R) (Global Investment Performance Standards) verification client, I began to listen to a book on Einstein and some of his friends, one of whom was Kurt Gödel. I don't recall hearing much about Gödel before, though I do recollect his "incompleteness theorems."

According to Wikipedia, "The theorems, proven by Kurt Gödel in 1931, are important both in mathematical logic and in the philosophy of mathematics." It further reports that "The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an 'effective procedure' (e.g., a computer program, but it could be any sort of algorithm) is capable of proving all truths about the relations of the natural numbers (arithmetic)."

Does this not hold for performance attribution? Recall that I recently touched on the question as to whether or not attribution answers the questions we wish it to. Perhaps even the best model will leave something out. Might Gödel's theorem hold here, too?

You know the saying, "a little knowledge is dangerous," and it definitely applies here, as I've only lightly scratched the surface of this topic, and would need to devote several hours to have any real understanding of it. But the brief statement above seems to hold some truth.

Perhaps this might be a good topic for Jose Menchero, PhD to address, given that his PhD is in physics, and he is no doubt familiar with Gödel. Interesting subject, I think.

By the way, Gödel is an interesting subject, himself.

Wednesday, March 7, 2012

What were they thinking?

Those who were around "at the creation" recall the debates regarding whether composite returns should be equal- or asset-weighted. Two groups in particular, the ICAA (Investment Counsel Association of America; now the IAA) and IMCA (Investment Management Consultants Association), lobbied AIMR (the Association for Investment Management and Research; now the CFA Institute) for equal weighting. I'll confess that at the time, I didn't pay this a whole lot of attention, and didn't formulate an opinion.

AIMR wanted the composite return to represent the experience of a "single account"; that is, what the return would be if the composite were an account itself. IMCA and the ICAA felt that asset weighting might influence some managers to favor larger accounts, knowing that their returns would skew the results. And I suspect that they also thought that equal weighting made more sense, as it shows the average return of actual accounts. But AIMR was steadfast ("resolute," in "W" speak) in its position, and refused to budge. IMCA was so determined that they created their own standard, which went into effect at the same time the AIMR-PPS(R) did; it never caught on, however.

The AIMR-PPS did, of course, catch on, and motivated other countries to develop standards, which led to the creation of the Global Investment Performance Standards (GIPS(R)). And as with the AIMR-PPS, asset weighting became the required way to derive composite returns.

But why? What is the benefit of the composite looking like an account, when it isn't one? The composite is composed of one or more real accounts that were managed individually; no one "managed" the composite. Would it not be better to see the average experience of real accounts?

When I conduct GIPS verifications I occasionally run across cases that SHOUT OUT to me that this is all wrong. Here's one recent example:

[Table: two accounts of vastly different sizes, with returns on either side of the composite's]

Because of the huge size difference, account A's return IS the composite's: account B doesn't even have to show up. What's the point of worrying about B? It has zero influence on the return. And yet, the manager's ACTUAL performance in this discipline lies between these two accounts: actually RIGHT IN THE MIDDLE of them (what mathematicians and statisticians call the average)!
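The client's actual figures can't be shared, but a made-up pair of accounts with the same flavor shows the effect (a quick Python sketch; the dollar amounts and returns are hypothetical):

    # Hypothetical: account A dwarfs account B, so asset weighting lets
    # A's return stand in for the composite's.
    accounts = [
        {"name": "A", "begin_value": 950_000_000, "ret": 0.05},
        {"name": "B", "begin_value": 1_000_000, "ret": 0.03},
    ]

    total = sum(a["begin_value"] for a in accounts)
    asset_weighted = sum(a["begin_value"] / total * a["ret"] for a in accounts)
    equal_weighted = sum(a["ret"] for a in accounts) / len(accounts)

    print(f"Asset-weighted: {asset_weighted:.3%}")  # ~4.998%: essentially A alone
    print(f"Equal-weighted: {equal_weighted:.3%}")  # 4.000%: the average account

Under asset weighting, account B might as well not exist; equal weighting lands right in the middle, where the manager's actual experience lies.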

Okay, so the Standards recommend that firms show the equal-weighted composite return. Great! How many firms do? The number is approximately zero. And why not? Perhaps it's because they would prefer not to hand out their presentations on legal-size (i.e., 8 1/2" x 14") paper, or resort to a 9- or 10-point font size to fit everything that's required on the page.

I know that this commentary is about as welcome to some as ants at a picnic. But seriously, what were they thinking when they advocated asset-weighting? NO ONE MANAGES COMPOSITES! Firms don't get paid TO MANAGE A COMPOSITE! Would it really be so bad to say, "okay, maybe equal-weighting makes more sense, so effective 1 January 2015, equal-weighting will be mandatory, asset-weighting is optional, and the change goes into effect on this date, but firms are encouraged to restate history"? And what's the likelihood of this occurring? Again, approximately zero. Oh, well.

p.s., Yes, the figures in the table come from a client, though they've been altered slightly, out of respect for our client's confidentiality.

Tuesday, March 6, 2012

Did the WSJ jinx the DJIA?

In today's WSJ, on page C1 there's an article titled "You Hear That? It's Quiet...Too Quiet," that mentions that it's been 45 trading days without a 100-point decline in the Dow, which is apparently the longest stretch since 2006.

And what happens?  Well, as of this post the market is down 170 points.

And so, we know who to blame!

Overlays and GIPS

Many firms avail themselves of currency overlays, as well as other overlay strategies, which often involve forwards or other derivatives, where there technically are no assets "under management." Does this mean that these firms cannot claim compliance with the Global Investment Performance Standards (GIPS(R))? Well, let's consider this for a moment.

If you look on pages 17-18 of my comment letter to the GIPS Exposure Draft you'll find the following:

Overlays: The standards don’t speak to overlays. Overlays deal with exposure, not real market values. Many overlay managers believe they can’t comply with GIPS because of this difference. I would like to see the standards speak specifically to overlays and have exposure take the place of market value.

The GIPS Executive Committee did not take me up on my suggestion, but this is probably understandable, given that this would mean introducing brand new material into the Standards, which arguably would have warranted yet another round of comments. Hopefully GIPS 2015 will reference this topic. But, that leaves open the question, "what to do in the meantime?"

It is my position that for overlay managers, the "exposure" is equivalent to "market value," and that these firms therefore should be able to claim compliance. There is only one Q&A on the GIPS website that speaks to the topic of "overlays," and it does not rule out the ability of a firm to include overlays, so I believe this gives credibility to my position.

I recommend that overlay managers who wish to comply use their "exposure" in an equivalent way to "assets under management," because that is essentially what it is. They should include appropriate disclosures explaining what the information means. I think this is reasonable and in the spirit of the Standards. Have a different view? Let us know.
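To be concrete, here's a minimal sketch of what I mean (the figures and the function are mine, for illustration only; nothing here is prescribed by the Standards):

    # For an overlay manager there is no conventional market value, so
    # compute the period return as the overlay's profit/loss divided by
    # the notional exposure managed (exposure standing in for AUM).
    def overlay_return(pnl, notional_exposure):
        return pnl / notional_exposure

    # E.g., a currency overlay managing $200 million of notional exposure
    # that gains $1.5 million over the period:
    print(f"{overlay_return(1_500_000, 200_000_000):.2%}")  # 0.75%

Paired with the disclosures mentioned above, I believe this treats prospects fairly.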

Monday, March 5, 2012

Do the numbers truly represent what they're supposed to?

Last week I had a post about holdings-based attribution, in which I laid the groundwork for future commentary on analysis I've been doing on this subject, looking at the results using both holdings- and transaction-based methods. Well, it resulted in comments from my friend and colleague Andre Mirabelli, who questioned the fundamental model's ability to properly evaluate the attributes that produce the excess return. While not wishing to debate him on this topic, per se, it does bring up a broader question that is worthy of consideration.

When portfolio managers, prospects, and clients look at the numbers on a performance report, they draw various conclusions; and the folks who produced the reports no doubt hope that these conclusions are consistent with the ones they intended. However, is everything working as intended? Just because a computer has run a particular model, which causes numbers to be produced, which are then assembled in a nice format on a piece of paper or a computer screen, does that mean that everything is correct?

The following graphic will be the basis for what we'll discuss today:

[Graphic: data sources → model/formulas → information, presented in reports]
The center set of figures represents the process that we typically employ: we gather data from a variety of sources; this data is fed into a model, or a series of formulas, and out comes the information, which is presented in reports or on computer screens, iPads, smartphones, etc.

The data issues are decades old: the acronym GIGO still lives on (garbage in, garbage out). One must take steps to ensure that the data is accurate and, of course, appropriate.

The real issue that Andre addressed is model appropriateness. This is a fundamental issue that doesn't get enough attention. Although I've become less and less a fan of Warren Buffett, I will nevertheless quote him here: "beware of geeks bearing formulas." And yes, one must be cautious about the models (and formulas) one employs. Do we understand how they work? Do we understand what assumptions they make? What are the results intended to convey?

We recently completed our attribution survey, in which we address a variety of issues on this important topic. It has amazed me how, over the years, we've seen a shift from folks using the Brinson-Hood-Beebower model to the Brinson-Fachler model. BHB was published a year after BF and, in fact, is preferred by Gary Brinson. I believe we deserve much of the credit for identifying and communicating the huge difference between the models (through our training classes and various articles, not to mention our books). The late Damien Laker challenged me on this, writing articles, which he posted on the Internet, claiming that there was no difference; but there is a HUGE difference. If you're not already familiar with it, I'll briefly state that it lies in how the allocation effect is derived.
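For those who haven't seen the difference spelled out, here's a small Python sketch (the weights and returns are hypothetical): BHB measures each sector's allocation effect against the sector's benchmark return, while BF measures it against the sector's benchmark return minus the overall benchmark return.

    # Allocation effect for sector i:
    #   BHB: (wp_i - wb_i) * Rb_i
    #   BF:  (wp_i - wb_i) * (Rb_i - Rb_total)
    sectors = [
        # name, portfolio weight, benchmark weight, benchmark sector return
        ("Stocks", 0.60, 0.50, 0.02),
        ("Bonds",  0.40, 0.50, 0.05),
    ]
    rb_total = sum(wb * rb for _, _, wb, rb in sectors)  # 3.5%

    for name, wp, wb, rb in sectors:
        bhb = (wp - wb) * rb
        bf = (wp - wb) * (rb - rb_total)
        print(f"{name}: BHB {bhb:+.4f}  BF {bf:+.4f}")
    # Stocks: BHB +0.0020  BF -0.0015
    # Bonds:  BHB -0.0050  BF -0.0015

The totals agree (which is why some have argued there's no difference), but the sector-level stories conflict: BHB applauds the overweighting of stocks simply because that sector's return was positive, while BF flags that the overweighted sector underperformed the overall benchmark.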

Well, folks for years used the BHB model with complete satisfaction; but did they really understand how it worked? Did they understand that there was an alternative, which they might prefer? In most cases the answers are "no." Years ago, when I was first composing my attribution book (which is long overdue for a rewrite), I did a fair amount of research and discovered that many folks simply said "we use the Brinson model." "THE" Brinson model. "The" means that there's only one, as in "THE" president of the United States. It should have been "A" Brinson model, as there are two. But many developers weren't even aware of this. While the BHB model was published in the Financial Analysts Journal, which meant it was available to tens of thousands of individuals, BF was published in The Journal of Portfolio Management, which has a much smaller subscriber base and, consequently, less opportunity to be read by the masses.

But even the employment of these models should call into question their appropriateness, given the basic rule that models should align with the investment process. Should a quantitative manager use a Brinson model, which only looks at allocation and selection effects (and, for the more enlightened, the interaction of these effects)? Most likely, no; instead, a multifactor model that looks at the factors they employ in their investment process would make more sense.

I (and many of my esteemed colleagues, such as Stefan Illmer and Steve Campisi) have been on our respective soapboxes for the past few years championing the merits of money-weighting; again, a hugely fundamental issue that is too rarely considered in model development and report production. I often challenge individuals who want me to review reports as to what they're trying to convey. What questions are they trying to answer? If you use the wrong formula, you're producing the wrong result, which can be misleading and fail to meet your reporting objectives.
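A toy example (my own made-up numbers) shows why the choice of formula matters so much. Suppose a portfolio returns +10% on $1 million, receives a $9 million contribution, then loses 5%:

    # Time-weighted return links the sub-period returns, ignoring flow size:
    twr = (1 + 0.10) * (1 - 0.05) - 1                  # +4.5%

    # Money-weighted return: ending value is 1.0*1.10 + 9.0 = 10.10,
    # then * 0.95 = 9.595 ($ millions). Solve for r in:
    #   1.0*(1+r) + 9.0*(1+r)**0.5 = 9.595   (flow at mid-period)
    def ending_value(r):
        return 1.0 * (1 + r) + 9.0 * (1 + r) ** 0.5

    lo, hi = -0.99, 1.0
    for _ in range(100):                               # simple bisection
        mid = (lo + hi) / 2
        if ending_value(mid) > 9.595:
            hi = mid
        else:
            lo = mid
    print(f"TWR: {twr:+.2%}  MWR: {mid:+.2%}")         # TWR +4.50%, MWR about -7.3%

Time-weighting says "great job"; money-weighting says the client lost money, because most of the money was only there for the loss. Which answer is right depends entirely on the question being asked, which is precisely the point.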

This topic isn't a simple one, and covering it briefly in a blog post is impossible (as is obvious from today's attempt, which is only just scratching the surface). Perhaps I'll address it further in this month's newsletter.

By the way, the BF and BHB articles can both be found in Classics in Investment Performance Measurement.

Thursday, March 1, 2012

BREAKING NEWS!!! GIPS help has gotten easier!

The Spaulding Group, Inc. has just announced the creation of a new website service that provides answers to GIPS(R) (Global Investment Performance Standards) related questions.

GIPSHelp.com.

"We are often contacted by clients and colleagues with questions dealing with the Global Investment Performance Standards," said Christopher Spaulding, a Senior Vice President at The Spaulding Group. "We felt GIPSHelp.com would be a tremendous resource for compliant firms and firms looking to become compliant. In addition to serving as a valuable time saving compliance resource for our industry, there will also be a private, members-only section dedicated to The Spaulding Group's verification clients."

With the tremendous growth in our verification practice (both GIPS and non-GIPS), we see many situations on a regular basis that require interpretation. And, we regularly receive questions from clients and colleagues. It just seemed to us that this would be a beneficial service for the industry.

And, it's free! All you need to do is register to use it.

We look forward to your feedback.

What's wrong with holdings-based attribution?

The Spaulding Group recently completed its latest survey, which deals with performance attribution. Jed Schneider, CIPM, FRM discussed many of the results at a luncheon held in NYC, where we learned that, as with the prior three editions of this research, most folks prefer transaction-based attribution, though roughly half use holdings-based. The probable reasons for this contradiction are interesting, but not part of this post.

For several years, I have attempted to encourage a proper and unbiased evaluation of the two methods, to determine the true differences, and whether there is a point when one cannot justify using the holdings-based approach; a point where the use of transaction-based attribution is a "must." But no one chose to do the research, so I did. I discussed some of my preliminary findings at last year's PMAR (Performance Measurement, Attribution & Risk) conferences, and will discuss further findings at this year's events.

Today, I merely want to discuss the three problems with the holdings-based approach. And, in "David Letterman" style, I will do it in reverse order of significance.

Number 3: we will usually have a residual. For many readers, it will seem odd that this is placed third, because surely the residual is the main problem with holdings-based attribution; but I'd say it is not. A "residual" is a non-zero amount that reflects the inability to fully reconcile to the excess return. Recall that with relative attribution, our goal is to completely account for the excess return. However, with holdings-based models we often (actually, more like usually) can't do this; we have a "residual," meaning we don't fully account for the excess return.
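To see where the residual comes from, here's a deliberately simple, hypothetical Python illustration: holdings-based models implicitly assume you held the beginning-of-period positions throughout, so any intra-period trading opens a gap between the return the model explains and the return actually earned.

    # One stock, one month, hypothetical prices.
    begin_price, end_price = 100.0, 104.0
    buy_and_hold = end_price / begin_price - 1        # 4.00%: all the model sees

    # But the manager sold at 108 mid-month and bought back at 106:
    actual = (108.0 / 100.0) * (104.0 / 106.0) - 1    # about +5.96%

    residual = actual - buy_and_hold
    print(f"Model explains {buy_and_hold:.2%}; actual {actual:.2%}; "
          f"residual {residual:.2%}")                 # residual about 1.96%

In reality, though, it's worse than that.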

Number 2: getting the proportions wrong. Here I mean that the amount that is assigned to the different effects may, in reality, be incorrect. We may have too much or too little assigned to allocation, for example. Thus, its contribution to the excess return is over- or understated. This goes beyond merely not reconciling to the excess return; rather, the numbers that are produced can be allocated in a manner that doesn't properly align with reality; their proportions are incorrect.

Number 1: having the wrong signs. To me, this is the major problem. What do I mean? Well, with holdings-based attribution, we may be showing a positive selection effect when, in reality, it's negative! And so, instead of saying "great job!" we should be saying "you've got to do better!" And the problem is, you won't know this. You'll see a positive selection effect and conclude that those decisions were good ones that contributed to the excess return, when in reality the decisions hurt your performance!

This third problem (#1, actually) means that the results will be misleading, spurious, invalid. What's the risk of this happening? I think you'll be surprised.

An article will be forthcoming which will provide additional details on my research. Suffice it to say, the results are fairly startling, and should encourage most users of holdings-based models to seriously consider switching.

p.s., for more details on the attribution survey, contact Patrick Fowler.

p.p.s., our next survey deals with the GIPS(R) Standards (Global Investment Performance Standards). Please join in! It will begin this summer.