Saturday, April 28, 2012

Anonymous need not apply

Just a reminder: If you wish to submit a comment, do not do so anonymously. Identify yourself.

And to clarify, anonymously means "Anonymous" in any form, as well as fictitious names. If you have a view or opinion, feel free to share it, but true identity is necessary.

Don't be afraid to disagree with something I've written: I welcome your thoughts, comments, disagreements, insights, ideas, etc. Thanks!

Friday, April 27, 2012

When compounding of returns doesn't make sense

For many, it seems only logical that when you deduct fees monthly or quarterly, the difference between your annual gross-of-fee and net-of-fee returns should equal the annual fee. How can it not?

It's quite simple to derive the monthly amount to deduct: just take your annual fee (e.g., 1.50%), add one, raise this value to the 1/12th power, and subtract one (in the case of a 1.50% annual fee, we get 0.12414877 percent). And, we can validate that this is the right number by linking it for 12 months (I'll let you do the validation). Okay, so we know we can link the monthly fee and get our annual fee; so why, when we link our monthly returns net of this fee, don't we get the annual fee as the difference between annual gross and net?
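The derivation above can be sketched in a few lines of Python (using the post's 1.50% annual fee):

```python
# Derive a monthly fee geometrically equivalent to a 1.50% annual fee,
# then validate it by linking (compounding) over 12 months.
annual_fee = 0.015

# (1 + annual)^(1/12) - 1 gives the equivalent monthly fee
monthly_fee = (1 + annual_fee) ** (1 / 12) - 1
print(f"monthly fee: {monthly_fee:.8%}")       # ~0.12414877%

# link the monthly fee for 12 months; we recover the annual fee
linked = (1 + monthly_fee) ** 12 - 1
print(f"linked for 12 months: {linked:.4%}")   # 1.5000%
```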

Alone, the monthly fee, when linked, does yield the annual fee. However, when combined with a return, it will compound at a very different rate. The example below shows how this can happen.

"But it doesn't make any sense!" you might exclaim. And yet, it does! Let's consider a situation where we begin with $100,000 and are able to earn a consistent 2% return each month of the year. Our annual fee is 1.50 percent. Let's calculate our gross-of-fee (GOF) and net-of-fee (NOF) returns, by (a) linking the monthly returns and (b) comparing our ending market value with our beginning.

What do we see? Both approaches give the same GOF (26.82%) and NOF (24.98%) returns. In addition, the difference between them is not 1.50 percent, but roughly 1.84 percent. Like it or not, this is the way compounding works.
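The dollar demonstration can be sketched in Python. One assumption here: the monthly fee is charged on the beginning-of-month value (i.e., subtracted from each month's 2% return), which reproduces the figures above.

```python
# $100,000 starting value, 2% gross return each month, 1.50% annual fee
# deducted monthly (assumed charged on the beginning-of-month value).
annual_fee = 0.015
monthly_fee = (1 + annual_fee) ** (1 / 12) - 1   # ~0.12414877%
monthly_return = 0.02

gross_value = net_value = 100_000.0
for month in range(12):
    gross_value *= 1 + monthly_return
    # net portfolio: earn 2% on the balance, pay the fee on the
    # beginning-of-month balance
    net_value *= 1 + monthly_return - monthly_fee

gof = gross_value / 100_000 - 1   # ~26.82%
nof = net_value / 100_000 - 1     # ~24.98%
print(f"GOF: {gof:.2%}, NOF: {nof:.2%}, spread: {gof - nof:.2%}")
# the spread is ~1.84%, not the 1.50% annual fee
```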

p.s., I thank my colleague, Jed Schneider, for suggesting that I demonstrate this using dollars.

Thursday, April 26, 2012

Attribution survey findings, in brief

Jed Schneider, CIPM, FRM is a guest blogger, offering some of his impressions from The Spaulding Group's recent performance attribution survey.


Every year, The Spaulding Group puts out a survey to the investment industry focusing on a performance-related topic. Last summer, our survey focused on attribution. This was our fourth survey on attribution; it was also our annual survey topic in 2002, 2004, and 2007.

Last year was the first year we put any of our surveys online, and it made a great difference with respect to the number of participants. We had 103 responses to the last attribution survey in 2007; we doubled that in 2011, with 207 participants! We were pleased to have so much participation, as a larger sample size provides a better reflection of how the industry uses and calculates attribution. Participation was global, with one-third of responses coming from outside North America. While investment advisors were the most common type of firm responding, we had participation from mutual funds, banks and trust companies, insurance companies, custodians, software vendors, and consultants. Our survey focused primarily on equity and fixed income attribution, the methods used, and the reporting audience and frequency.

Although the participant size doubled, some of the results did not change, at least not significantly. For instance, the percentages of firms calculating equity (89%) and fixed income (63%) attribution remained about the same, as did the percentage that calculate hedge fund attribution (44%).

There were some results that did stand out, however. For example, while portfolio managers and clients are the most common recipients of attribution reports (both equity and fixed income), investment consultants and marketing teams are also receiving them more now than in the past. We saw this jump in 2007, and it has continued to rise. Two-thirds of responding firms say they provide equity attribution reports to investment consultants, and 61% state that their marketing group receives them. In 2004, both of these percentages were in the 40s!

We were glad to see that the percentage of firms using an equity model for fixed income attribution has dropped from 2007. About 41% used an equity model to calculate fixed income attribution in 2007; that number dropped to 18% in 2011.

For those who calculate attribution using the arithmetic approach (more common than geometric, based on our survey), the Cariño logarithmic method was the most frequently used linking method. We saw something interesting in the responses to this question overall. Every method increased from 2007 to 2011 except for the "don’t know" response. We can only surmise that more people who responded this time around knew how smoothing was being performed.

The full survey is available for purchase on our website.

Our next survey, coming out this summer, will be on the GIPS(R) standards, and it, too, will be online.

Tuesday, April 24, 2012

WHAT are you looking for in your performance measurement software search?

I recently reviewed a client's software search materials. In our preliminary discussion they explained that they were looking for a performance attribution system. However, it became fairly clear during my review that they were actually looking for more, as they referenced the internal rate of return. I later confirmed that they indeed also wanted their performance measurement and reporting needs addressed, and had assumed that their attribution system would be able to handle these requirements, too. And while this might be the case, failing to completely flesh out the needs in this area could result in the selection of a system that doesn't fully meet their requirements.

It is often the case that when a firm begins a search, they focus on one particular area, when they are really looking for more. It is important to know what you're looking for, so that you perform the proper needs analysis and your review is complete. And while it's true that the "big four" performance areas have some overlap, each still typically provides less than a system devoted to that area would.

The Spaulding Group's research has found that, for example, most asset managers rely on their portfolio accounting system to handle their rate of return needs, but use specialized software for their other requirements. Yes, attribution systems calculate rates of return, but primarily in the context of attribution.

Performance systems today are analogous to the medical profession; let me explain. A century or so ago, if you were in need of medical attention (because you were pregnant, had a cold, or broke your leg), you went to one doctor, who took care of all your problems; there were no specialists. But, as we know, today it's very different: you go to a cardiologist for heart issues, a dermatologist for skin problems, a pulmonologist for lung issues, etc. Likewise, it used to be that your portfolio accounting system handled all of your performance measurement needs. But today, we find specialist vendors and products that serve each area as a distinct function, with its own set of needs. In fact, attribution should be broken into at least two areas, fixed income and equity, as some vendors or systems serve only one of these. Likewise, other functional areas can be further subdivided.

By understanding what you're looking for, you'll improve the chances of finding it.

Monday, April 23, 2012

Are Performance Measurement Professionals Creative?

I am adapting the title of a post William McKibbin published last week, "Are Mathematicians Creative?" Given that my undergrad degree (from Temple University) is in math, and that math is clearly a huge part of performance and risk measurement, I guess I see myself to some extent as being a "mathematician."

What does "creative" mean? My favorite online source for words offers the following:
  1. having the quality or power of creating
  2. resulting from originality of thought
Nothing surprising here. And so, I would say "yes" to McKibbin's question.

Now to my question, I would also say "yes." Folks like Jose Menchero, David Cariño, Bill Sharpe, etc. have definitely demonstrated creativity, have they not?

Saturday, April 21, 2012

Reminder: Free Webinar This Monday!

Reminder! This coming Monday, April 23rd at 11:00 am (EST), Jed Schneider, CIPM, FRM will discuss highlights from The Spaulding Group's recent performance attribution survey. He will present findings, and compare the results with previous years' surveys. You will also have the opportunity to submit questions.

If attribution is an important topic for you and your firm, you will not want to miss this special free webcast.

We thank our survey cosponsors, whose funding made this research effort possible: VPD, DST Global Solutions, BI-SAM, First Rate, StatPro, and Morningstar.

This is a free webinar, and only a few slots remain.

To register contact Patrick Fowler (732-873-5700).

Friday, April 20, 2012

When NOT to use "N/A"

In doing GIPS(R) (Global Investment Performance Standards) verifications under the new version, I've witnessed a few firms that insert "N/A" for years prior to 2011, when they chose not to report the 3-year annualized standard deviation. Not showing this figure is fine; "N/A," I believe, is misleading.

N/A can mean:
  • Not Applicable: but it CAN apply if there are at least 36 months of returns to run the statistic against
  • Not Available: but it CAN be available; you just need to run the math.
You should simply leave it blank, just as in the example in the standards.
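As a sketch of the point, the statistic can be run as soon as 36 monthly returns exist. The returns below are randomly generated for illustration, and whether to use a sample or population standard deviation is a methodological choice firms should document (a sample version is shown here):

```python
# The 3-year annualized ex-post standard deviation can be computed
# whenever 36 monthly returns are available, so "N/A" is rarely accurate.
import random
import statistics

random.seed(42)
# illustrative returns: mean 0.5%/month, 2% monthly volatility
monthly_returns = [random.gauss(0.005, 0.02) for _ in range(36)]

# sample standard deviation of the 36 monthly returns, annualized
monthly_std = statistics.stdev(monthly_returns)
annualized_std = monthly_std * 12 ** 0.5
print(f"3-yr annualized standard deviation: {annualized_std:.2%}")
```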

Thursday, April 19, 2012

GIPS Guide Almost Here!

Several years ago, The Spaulding Group published the first guide on the presentation standards. At that time we still had the AIMR-PPS(R), and GIPS(R) was just getting started. Well, the book sold out pretty quickly, and we had plans to revise it, but hadn't made much progress, until last year, when we committed to get the job done!

I wrote the earlier version; this time I was joined by my colleagues, John Simpson and Jed Schneider. Douglas Spaulding, who is the editor of The Journal of Performance Measurement(R), was charged with getting the draft edited and laid out, something he (with help from our proofreader, Mary Meagher, and production assistant, Jessica Laffey) has done in record time! We plan to have the materials to the printer within the next week or so, with an expected book delivery back to us by mid-May.

We are quite excited about the book, as it is a HUGE expansion on the prior version, and is part of our GIPS Orientation Kit™. The book lists for only $75, and we're having a "pre-release sale" at just $45 (i.e., a $30 savings!). If you're interested, please place your order by April 30.

I also want to acknowledge and thank the sponsors of this book project.

Wednesday, April 18, 2012

Don't forget about the benchmark

One thing I will discuss at this year's Spaulding Group PMAR (Performance Measurement, Attribution & Risk) conferences is the impact of benchmark changes on the attribution residual. While we recognize that the holdings-based model suffers from residuals when trades occur, one can easily overlook the contribution that can arise from benchmark turnover.

While doing my research, I initially set out to avoid any months that had any turnover in the index (I'm using, yes, you guessed it, the S&P 500). But finding such months can be a challenge, so I decided to ignore this rule. I had temporarily forgotten the basis for my initial plan (to not have to worry about turnover), but was quickly reminded when I saw a residual with both the holdings- and transaction-based approaches, when there was no activity in the period. How could this occur? Only one answer: turnover in the benchmark.

This is an important topic to address with vendors, if you're engaged in an attribution software search, and are looking for a transaction-based model: make sure the system is sensitive to benchmark turnover! Otherwise, you'll occasionally see a residual appear.

Tuesday, April 17, 2012

Concerns with holdings-based attribution

On my "to do" list is the task of writing an article for The Journal of Performance Measurement(R), detailing some of my findings from research I'm doing on the impact of trading on the accuracy of the holdings-based approach to performance attribution. Recall that firms can either (a) use a "buy and hold" approach that uses only the starting position weights, ignoring any intraperiod activity (holdings-based), or (b) begin with the starting position weights and then adjust them for trades, income, and corporate actions that occur across the period (transaction-based).

We have known that the use of holdings-based models can result in a "residual." Let's briefly speak about this term. A "residual" is a non-zero difference between the sum of the attribution effects and the associated excess return. It can occur in two ways: across periods, as the result of linking arithmetic attribution subperiod effects (geometric attribution doesn't have these residuals) or within a period, by the use of a holdings-based model.

And so, we recognize that there's a flaw in using just the starting holdings and ignoring the activity. But, as with the "across periods" approach, there is often the assumption that the error is proportionate to the results, meaning that one could smooth the residual across the effects without encountering much of an error. My research has shown that this isn't the case. In fact, there's a second, more significant problem: the misassignment of effects. That is, we can have, for example, the allocation effect reflecting a totally incorrect value (e.g., negative when it should be positive).
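To make the within-period residual concrete, here is a sketch with hypothetical numbers: two assets, one mid-month trade, and a comparison of the buy-and-hold return (which is all a holdings-based model can explain) against the actual, transaction-aware return.

```python
# Hypothetical two-asset portfolio, $100 starting value, one mid-month
# trade: sell half of asset A and buy B with the proceeds.
# First-half returns:  A +1%, B +1%
# Second-half returns: A +1%, B +3%

# --- actual (transaction-aware) return ---
a, b = 50.0, 50.0              # starting market values
a, b = a * 1.01, b * 1.01      # first half of the month
a, b = a / 2, b + a / 2        # mid-month trade (RHS uses pre-trade values)
a, b = a * 1.01, b * 1.03      # second half of the month
actual_return = (a + b) / 100 - 1          # ~3.53%

# --- holdings-based (buy-and-hold) return ---
# full-period asset returns applied to the starting weights only
ret_a = 1.01 * 1.01 - 1                    # ~2.01%
ret_b = 1.01 * 1.03 - 1                    # ~4.03%
bah_return = 0.5 * ret_a + 0.5 * ret_b     # ~3.02%

# A holdings-based attribution explains bah_return, not actual_return;
# the gap surfaces as a within-period residual.
residual = actual_return - bah_return
print(f"actual {actual_return:.4%}, buy-and-hold {bah_return:.4%}, "
      f"residual {residual:.4%}")
```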

My research will continue for the next several months, and I hope to have something that speaks to this in much greater depth later this year. In the mean time, I will provide an update on my most recent findings at this year's Spaulding Group PMAR conferences.

Friday, April 13, 2012

ATTENTION! Performance attribution survey results to be shared in a FREE webinar!

On Monday, April 23rd at 11:00 am (EST), Jed Schneider, CIPM, FRM will discuss highlights from The Spaulding Group's recent performance attribution survey. Jed will present findings, and compare the results with previous years' surveys. You will have the opportunity to submit questions. If attribution is an important topic for you, you will not want to miss this special free webcast. 

We thank our survey cosponsors, whose funding made this research effort possible: VPD, DST Global Solutions, BI-SAM, First Rate, StatPro, and Morningstar.  
Don't delay; email Patrick Fowler today to reserve your space. Only 100 free spaces are available (and they're going fast!).

Thursday, April 12, 2012

Single vs. Joint Evaluations

In Thinking, Fast and Slow, Daniel Kahneman discusses the notion of evaluating items separately (single) versus in comparison with others (joint). You are no doubt familiar with the importance of having the returns and risk measures of a portfolio, for example, shown alongside similar statistics for a benchmark, in order to gain greater insight into what occurred. To learn that Manager A's performance in 2011 was 4.58% means nothing in isolation; it is only when we have something to compare it to that we are able to judge whether this is a good or not-so-good result.

This section of Kahneman's book reminded me of the difficulty presented with GIPS(R) (Global Investment Performance Standards) composite returns, as they are currently derived. Today, only asset-weighting is required; and although equal-weighting is recommended, it is rare to see it shown. But if one really understands what the details are that comprise a manager's results, might they opt to see the other metric?

For example, if you show a prospect your composite, and its return for 2011 is 4.58% vs. the benchmark's 4.18%, you have demonstrated, at least for last year, superior skill. But suppose it turned out that the composite held five accounts: one huge mutual fund, which had a return last year of 4.59%, and four smaller separate accounts, whose returns fell below 4.18%; because of the fund's size, the composite return was skewed. Might these facts prove helpful? If the equal-weighted average is below the benchmark, we draw a completely different conclusion, do we not?
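A sketch of the composite arithmetic, with invented account values and returns (they do not reproduce the post's exact figures):

```python
# Hypothetical composite: one large mutual fund plus four small
# separate accounts. Numbers are invented for illustration.
benchmark = 0.0418
accounts = [            # (beginning assets in $M, 2011 return)
    (1_000, 0.0459),    # the huge mutual fund
    (10, 0.0350),
    (10, 0.0360),
    (10, 0.0370),
    (10, 0.0380),
]

total_assets = sum(assets for assets, _ in accounts)
asset_weighted = sum(assets * r for assets, r in accounts) / total_assets
equal_weighted = sum(r for _, r in accounts) / len(accounts)

print(f"asset-weighted: {asset_weighted:.2%}")  # dominated by the fund
print(f"equal-weighted: {equal_weighted:.2%}")
print(f"benchmark:      {benchmark:.2%}")
# asset-weighted beats the benchmark; equal-weighted trails it
```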

As I have stated before, I have become a non-fan of asset-weighted returns, and don't see any value in them. Equal-weighting should reign; but I am perfectly content with seeing both required. And why not? Might the mere insights provided by such information be worth the additional column?

Wednesday, April 11, 2012

Celebrating the 10th with gusto!

Almost 30 years ago, when my wife and I celebrated our 10th wedding anniversary, we marked the occasion with special gifts for one another (she gave me a gold watch, which I still use (I'd better!)). For our 25th, we went to Disney World (her choice) and our 35th saw us back in Hawaii (we lived there for 39 months when we were first married (I was in the Army at the time)). And so, we are guilty (as most people are) of doing something significant for anniversaries that end in a "5" or "0."

The Spaulding Group's industry-leading performance conference, PMAR (Performance Measurement, Attribution & Risk), will have its 10th anniversary this May. And we have some surprises for our attendees, which we hope will make it an extra-special event. We're expecting record attendance. If you haven't signed up yet, do so soon! And, if London is more to your liking, then please plan to join us at PMAR Europe III in June.

Tuesday, April 10, 2012

Why do we refuse to see the flaws in our beliefs?

Yesterday's Wall Street Journal had an interview with Nobel Prize winner and author Daniel Kahneman regarding his new book, Thinking, Fast and Slow ("So Much for Snap Decisions," by Diane Cole). I am still making my way through the book, and continue to find much of what he shares both interesting and relevant.

He discusses Swiss scientist Daniel Bernoulli's contributions to utility theory, and how it has a major flaw: it lacks a reference point. For example, when asking someone to make a decision involving possible gains or losses (e.g., whether one would prefer $100 or a 50/50 chance of winning $200), it ignores where the person stands at that point: what their current wealth level is.

After offering some examples of why this oversight is quite relevant, he writes "All this is rather obvious, isn’t it? One could easily imagine Bernoulli himself constructing similar examples and developing a more complex theory to accommodate them; for some reason, he did not. One could also imagine colleagues of his time disagreeing with him, or later scholars objecting as they read his essay; for some reason, they did not either. The mystery is how a conception of the utility of outcomes that is vulnerable to such obvious counterexamples survived for so long."

Kahneman further acknowledges that "I can explain it only by a weakness of the scholarly mind that I have often observed in myself." He names this condition "theory-induced blindness," which means that "once you have accepted a theory and used it as a tool in your thinking, it is extraordinarily difficult to notice its flaws. If you come upon an observation that does not seem to fit the model, you assume that there must be a perfectly good explanation that you are somehow missing. You give the theory the benefit of the doubt, trusting the community of experts who have accepted it."

"Many scholars have surely thought at one time or another of stories ... [that] did not jibe with utility theory. But they did not pursue the idea to the point of saying, 'This theory is seriously wrong because it ignores the fact that utility depends on the history of one’s wealth, not only on present wealth.' As the psychologist Daniel Gilbert observed, disbelieving is hard work."

When I read this part of the book it resonated so much with me; the notion of "theory-induced blindness," and of the various things that we do in our part of the industry, which we accept as gospel, when they have serious flaws.

I have shared some of them here, as well as in The Spaulding Group's monthly newsletter. For example, the fact that the Global Investment Performance Standards (GIPS(R)) continues to require asset-weighted returns, and that some actually champion the use of the obviously and seriously flawed aggregate method. The fact that time-weighting is seen by too many as the only way to derive portfolio rates of return.

I think it's something like the story of the economics professor and the student, who were walking together on campus. The student noticed what he thought was a $20 bill on the ground, but the professor said that it couldn't be a $20 bill, because if it was, someone would have picked it up. Later, the student went back, retrieved it, and bought some beer.

If these methods were flawed, surely someone would have done something about it long ago, right? Those who made these decisions were bright and admired people, and they had to have sound reasons for what they did, right?

Isn't it time we stopped ignoring the flaws, and take the blinders off? We should recognize when we are victims of theory-induced blindness, shouldn't we?

p.s., It occurred to me that an example might help. While driving my wife to her office this morning (we're going to see Mana in concert at Madison Square Garden tonight!), I spoke to her about this post, and used this example, taken from the book.

Mary and Bob both have $2 million. And so, from utility theory they should feel equally happy. However, if you learn that last week Mary had $1 million (meaning her wealth doubled) and Bob had $4 million (meaning it dropped by one half), would you think they feel the same? The absence of a reference point weakens utility theory's assessment.

Wednesday, April 4, 2012

A word about discretion

One of the problems with our industry is that we often use the same word to mean multiple things (e.g., "alpha" can mean excess return (portfolio return minus benchmark return) and Jensen's alpha, which takes beta into consideration), and multiple words (e.g., excess return, active return) to mean the same thing.

The word "discretion" serves two different roles in investing and performance measurement:
  1. It describes a client relationship, whereby the client has granted the manager (or firm) the authority to trade on their behalf
  2. For the GIPS(R) standards (Global Investment Performance Standards), it is used to indicate cases where the client has not imposed restrictions such that the account would not be representative of the manager's strategy (in this context, a nondiscretionary account is one where the client has restrictions that cause the portfolio to not be representative of the firm's strategy). In the expression "all actual, fee paying, discretionary accounts must be included in at least one composite," we're speaking of THIS form of the word.
When I teach classes on the GIPS standards or meet with verification clients, this is often an area they find confusing, since they are used to using the word solely in the context of the first definition; the second is new to them. In reviewing firms' policies and procedures, it is quite common to find their wording on composite inclusion addressing only the legal aspects of the relationship. So, what's the solution?

For a while I've advocated qualifying the term when it's used in the context of the GIPS standards, since WHENEVER the standards use it, it means the second definition noted above. And so we'd see "GIPS discretion." However, I've come upon an even better solution!

Use the word "unencumbered."

To "encumber" means to "impede or hinder." Isn't that what we mean when we say "nondiscretionary" in "GIPS speak"? And so, "unencumbered" would indicate cases where there is nothing impeding or hindering the firm's management. GIPS would then revise the wording to read "all actual, fee paying, unencumbered accounts must be included in at least one composite."

By adopting this change, we'd eliminate one of those huge confusing aspects of the Standards. Just a thought.

Tuesday, April 3, 2012

Math Mystery

In the March 27, 2012 issue of the WSJ, Dan Fitzpatrick and Victoria McGrane discussed how large banks that released "stress test" results are being questioned by the Federal Reserve on the approaches they took to derive those results ("Banks Stress Over Fed Test Methods"). It seems that there can be multiple ways to calculate the results, which can then be materially different.

In a recent software verification assignment, I discovered that our client employed an unusual way to calculate some of their statistics, including the Sharpe and information ratios. In both cases, they annualized the numerator and denominator separately, and then did the division. For example:
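As a sketch of the two calculation orders (the monthly returns and risk-free rate below are hypothetical):

```python
# "Traditional": compute the monthly Sharpe ratio, then annualize it by
# multiplying by sqrt(12). "Client" variant: annualize the numerator
# (compound the mean excess return) and the denominator (std * sqrt(12))
# separately, then divide.
import statistics

monthly_returns = [0.021, -0.004, 0.013, 0.008, -0.011, 0.017,
                   0.005, 0.012, -0.006, 0.015, 0.009, 0.003]
monthly_rf = 0.001                       # hypothetical risk-free rate

excess = [r - monthly_rf for r in monthly_returns]
mean_excess = statistics.mean(excess)
std_returns = statistics.stdev(monthly_returns)

# traditional: monthly ratio scaled by sqrt(12)
sharpe_traditional = (mean_excess / std_returns) * 12 ** 0.5

# client variant: annualize numerator and denominator separately first
annualized_excess = (1 + mean_excess) ** 12 - 1
annualized_std = std_returns * 12 ** 0.5
sharpe_client = annualized_excess / annualized_std

print(f"traditional: {sharpe_traditional:.3f}")
print(f"client:      {sharpe_client:.3f}")
```

With the risk-free rate held constant each month, the standard deviation of the returns equals that of the excess returns, so the choice of denominator series doesn't matter here; the difference comes entirely from the order of annualization.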

As you might expect, we get different results. And while the differences are often not material, who's to say that they won't be?

Question: is there anything wrong with the way our client does their math?
Answer: No! It's "non-traditional," but there is no prohibition against a firm calculating a risk statistic differently, provided they disclose their method.

Even what I show as the "traditional" way isn't what Nobel Laureate Bill Sharpe advocates today. In a 1994 Journal of Portfolio Management article (aptly named "The Sharpe Ratio"), Sharpe altered his earlier formula, so that the denominator isn't the standard deviation of the portfolio returns, but rather the standard deviation of the equity risk premium (which we find in the numerator; i.e., the difference between the portfolio return and the risk-free rate). In spite of this revision, it appears that most firms still use the formula that Sharpe introduced in his 1966 Journal of Business article ("Mutual Fund Performance").

Certain formulas are sacrosanct, and shouldn't be altered (though there may be some inherent options available within them, which need to be specified), such as standard deviation, Modified Dietz, and the IRR. But many of the risk measures have been implemented in different fashions, which can make cross-comparisons a challenge. Thus the need to document how you do the math.

Monday, April 2, 2012

Three words of advice if you're planning a software search

I recently reviewed a client's plans and documentation for their software search, and came away with three observations which might prove helpful for you, too, should you be planning a search any time soon.
  1. Be realistic about your timing. In a recent blog post I wrote about the "planning fallacy," and how it's quite easy to be overly optimistic in setting schedules. Yes, you'd like your new system implemented quickly, but the process is a time consuming one. And it's best to allow for adequate time for proper due diligence. Rushing to meet an unrealistic schedule can lead to the selection of the wrong system.
  2. Get your scope set properly. This particular client is looking for a performance attribution system; however, their requirements clearly indicated that there were needs that went beyond attribution. Often we have clients who say they want one system, when in reality they're looking for more. Clarify what your needs are, so that you select the right type(s) of system(s).
  3. Do a thorough needs analysis. Firms often fail to do a complete needs analysis, overlooking important requirements. I recall one client whose documentation (prior to our involvement) was a small fraction of what it ended up becoming, because of the lack of a thorough and complete review. Make sure all of the critical areas and key contacts are included in this analysis.
Software searches are time consuming, and require skills that many firms don't have in house. It also means taking key staff away from their primary jobs, which can have unattractive consequences. This is one reason firms like The Spaulding Group are often asked to participate. While there is no requirement to include a consultant, you may find that even a minimal amount of support from an industry specialist can help you ensure a smooth and successful search.