Tuesday, June 30, 2009

Madoff or GIPS 2010, Madoff or GIPS 2010, ...

Okay, I'll comment on both:

Madoff - 150 years is the right sentence. The max is what this man deserves given what he did to many faithful and trusting friends and investors. Even his wife and sons have (rightly) turned their backs on this man. It's difficult to fathom how someone could have done what he did, but then again there are many crimes for which there is simply no answer. Man's inhumanity to man continues.

GIPS 2010 - okay, there are only a few hours more to get your comments in. Think enough folks have spoken? Not really ... unless you're one of them. Don't want to comment on everything? Fine, just comment on the item(s) you feel most passionate about. The GIPS EC wants to hear from you. In looking at the list of commenters (is that a word?) there are many friends and associates who are missing. So please, let the EC know what you think!

Performance measurement and the 90 percentile man

When I was in grad school in the mid '70s I had a few courses that dealt with ergonomics (the man-machine interface or connection). We learned how much of what we see and use, from chairs, to doors, to bathtubs, and even toilets, is designed for the "90 percentile man." That is, to accommodate the dimensions (height, width, weight, etc.) that fit within a 90 percent range of sizes. This is why some of the Seton Hall University basketball players, who happened to be on the same flight with me not long ago, could barely squeeze into their seats (I gave up my bulkhead seat to allow at least one of them some comfort; as a shrimp at 6', I at least fit into the design range). To attempt to cover 100% of the population would be virtually impossible (and extremely costly), so designers aim for 90 percent.

It occurred to me recently that much of what we do in performance measurement is also geared to the 90 percentile man, or more correctly, organization. Perhaps not consciously, but the effect is the same, nevertheless. GIPS(R), for example, doesn't address every facet of investing. And while it could, it would expand the standards considerably and require a great deal more work. This makes it difficult, at times, for those firms (like the 7' basketball players) to properly fit in with what we do have.

Take overlays, for example. The standards are virtually silent on this topic. And even in publishing we rarely see much written on this subject. And yet it is a fascinating one that many firms are involved in. Could it be that it's outside the 90 percentile range? If it isn't, it's hugging the boundary.

I would argue that software vendors, too, aim for the 90 percentile firm. If they invested their time and effort to satisfy 100% of the market, they'd find few takers for some of the features they'd deliver and might not even recoup their investment.

When facing situations like this, firms near or outside the 90% range are sometimes frustrated to find few answers or little satisfaction. This doesn't mean that the extremes can't be accommodated. Just as airlines provide seat belt extenders for those individuals who are a tad larger than what the standard seats are meant to handle, standards and software can be expanded, too. Sometimes it requires some effort on the part of the firm or individuals in the industry to champion their causes.

Staying within the 90 percentile range is generally a good business decision for software vendors, designers, and standards setters, although there are times when some expansion is warranted.

There's more to be said on the issue of overlays, so stay tuned!

Sunday, June 28, 2009

After after-tax

The GIPS EC (Executive Committee) has proposed to eliminate after-tax standards from GIPS(R). This is understandable given that (a) there are two country-specific versions (the U.S. and Italy), (b) GIPS is a global standard and there is a desire not to have any country-specific rules, and (c) the prospect of having a generic version of after-tax that would apply to any country seems like a monumental challenge. That being said, I voiced opposition to the elimination of these rules because exceptions should be made. I expect, however, that these rules will disappear.

And so, this raises a question: what happens when they're gone? Managers of taxable accounts should ideally provide after-tax results to clients and prospects. What are they to do when there are no rules? No guidance has yet been offered.

My recommendation: firms that wish to provide after-tax returns to clients and prospects should include a brief description of how they were arrived at and offer to provide more details upon request. Ideally, U.S. firms should use the rules that exist today. However, they will most likely be free to adopt whatever approach they wish. They should be consistent in their application of such rules and, again, provide information on their approach. I expect the EC will publish guidance as we move forward. I'll keep you posted.

Friday, June 26, 2009

Dietz & the Modiglianis

I gave a talk on Wednesday for a software vendor on the topic of risk-adjusted performance measures and began, as I often do, by mentioning Peter Dietz's statements on this subject in his 1966 thesis. He pointed out that risk and return should be linked, but offered a fairly naive approach: when two portfolios have the same return, it's easy to see which did better by comparing risk (the one with the lower risk); likewise, if they have the same risk, then by comparing returns we can also draw conclusions as to who did better (in this case, the one with the higher return). However, the likelihood of encountering these situations is quite remote.

Oddly, perhaps, when the late Nobel prize winner Franco Modigliani and his granddaughter, Leah, developed their risk-adjusted performance measure (M-squared), they offered a general approach to determine the risk-adjusted value (which for them is measured in basis points, just like return): we equalize the portfolio's risk with the benchmark's. This trickery is accomplished through their model, which results in a corresponding adjustment to the portfolio's return (downward, when risk is lowered; upward, when risk is increased). The result is an adjusted portfolio return with the same risk as the benchmark, which allows easy (and intuitive) comparison.
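The mechanics can be sketched in a few lines of Python. The function name and the numbers below are my own illustrations, not anything taken from the Modiglianis' paper:

```python
# Sketch of the Modigliani M-squared measure: scale the portfolio's
# excess return so that its risk matches the benchmark's, yielding a
# risk-adjusted return stated in the same units as ordinary returns.
# All inputs are made-up example figures (annual percentages).

def m_squared(port_return, port_stdev, bench_stdev, risk_free):
    """Portfolio return adjusted to the benchmark's level of risk."""
    sharpe = (port_return - risk_free) / port_stdev   # excess return per unit of risk
    return risk_free + sharpe * bench_stdev           # re-risked at the benchmark's stdev

# A portfolio that earned 12% but with higher volatility (20%) than its
# benchmark (15%); de-levering its risk lowers its return accordingly:
adj = m_squared(port_return=12.0, port_stdev=20.0, bench_stdev=15.0, risk_free=3.0)
print(round(adj, 2))  # 9.75 -- directly comparable to the benchmark's return
```

Because the adjusted return now carries the benchmark's risk, it can be compared head-to-head with the benchmark's return, which is exactly the easy, intuitive comparison described above.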

Whether or not Franco and Leah were aware of Dietz's earlier statements is unknown, but it's interesting that what appeared to be rather naive is essentially what they've done! And, M-squared happens to be one of my favorite measures, which sadly hasn't been adopted by enough firms, yet.

(Note: I've written an article on this topic ("M-squared: A Double-take on Three Approaches to a Primary Risk Measure." The Journal of Performance Measurement. Summer 2007) in case you'd like to explore this matter further).

Wednesday, June 24, 2009

In all material respects

The GIPS(R) standards don't have any wiggle room. While there may be other rules or standards where materiality can come into play, it isn't the case with GIPS. Firms state that they comply in their presentation materials and don't qualify, waffle or hedge their statement by saying that they "comply in all material respects."

Some verifiers apparently allow their clients to use such wording in their "rep" letter. But why? Perhaps because the firm is uncomfortable making such a bold statement that they actually comply! Well, then why would they feel comfortable making a claim of compliance in their presentation materials without such additional qualifying wording? Technically, a firm shouldn't engage a verifier unless it believes it is compliant.

If the firm makes (what I would consider) a weaker claim (i.e., by using "in all material respects"), what happens if, after the verification, a determination is made that the firm wasn't in fact compliant? Is the verifier now "on the hook" for not catching something? The verifier might say "well, you did say in all material respects, so I understood that you might not actually comply." Or might the firm challenge the verifier by saying "I didn't say I was completely compliant, but expected you to discover those items where I had problems"? Or perhaps the verifier should ask the client "in what non-material respects don't you comply?"

This level of qualifying a claim is somewhat new to me and it's unclear that such additional language is appropriate, warranted or a good idea. I'd love to hear the thoughts of others on this matter.

Tuesday, June 23, 2009

Webinar topics

We've been hosting monthly webinars for a while now, and will continue to do so. We've already identified the topics for July (Fixed Income Attribution), August (Risk adjusted performance), and September (the performance measurement professional). We're open to ideas for other topics, so if you'd like to suggest a topic or two, please let me know (DSpaulding@SpauldingGrp.com). Thanks!

The U.S. Open & Performance Measurement

I was at the U.S. Open on Sunday and enjoyed the event quite a bit. As we were returning home, I thought that there had to be something I could take away that relates to performance measurement. Well, it didn't take long to come up with it: the rules.

The U.S. Golf Association (USGA) is the governing body for golf in the United States and has established rules that cover just about everything imaginable, so as not to leave the decision as to how to respond up to the players. For example, if your ball goes into the water, lands out of bounds, gets plugged into the ground, gets dirt on it, or hits your opponent's ball, there are rules as to what is to be done. Even some odd situations, such as hitting the ball twice on the same swing (which might seem impossible, but since it's happened to me I know that it does happen), are covered. These rules are highly prescriptive. And while we might disagree with some of them, if we play the game properly, we adhere to them.

When it comes to the rules for performance measurement (e.g., GIPS(R)), the decision was made to NOT make them overly prescriptive and to leave much open to interpretation. One reason for this is perhaps because it would be impossible to identify every possible event that might occur. In addition, since the standards are ethically based, there's a presumption that the individuals involved with them will attempt to act in a manner that they feel would be best for the industry.

And while some might wish the rules were a bit more prescriptive, the "balance" that's been struck is generally acknowledged as a reasonable approach.

Monday, June 22, 2009

You say toe-may-toe, I say toe-mah-toe

I just got a note from a colleague in response to this month's newsletter (http://www.spauldinggrp.com/images/stories/PDF/newsletters/jun09nl.pdf) suggesting that my use of the term "volatility" for standard deviation is incorrect, and that "variability" is more appropriate.

I'm not sure if this is a semantic issue or not. Bill Sharpe referred to standard deviation as variability in his 1966 paper, where his famed risk-adjusted-return measure was introduced: he called it "reward-to-variability," and his risk measure is standard deviation. He went further and referred to Jack Treynor's measure as "reward-to-volatility," where the risk measure is beta. Did he therefore make a judgment decision feeling that one term applies more to one measure than the other? Or, was he simply trying to find a way to distinguish between the two terms, and chose synonyms, one for one term and one for the other (without resorting to the eponymous approach which history has taken care of for us)?
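The distinction is easy to see side by side. Here's a minimal sketch (with made-up numbers; the function names are my own) of the two ratios as Sharpe labeled them in 1966:

```python
# Sharpe's "reward-to-variability" ratio divides excess return by
# standard deviation; Treynor's "reward-to-volatility" ratio divides
# it by beta. Only the risk measure in the denominator differs.
# Inputs are illustrative annual percentages.

def reward_to_variability(port_return, risk_free, stdev):
    """Sharpe ratio: excess return per unit of standard deviation."""
    return (port_return - risk_free) / stdev

def reward_to_volatility(port_return, risk_free, beta):
    """Treynor ratio: excess return per unit of beta."""
    return (port_return - risk_free) / beta

print(round(reward_to_variability(10.0, 3.0, 14.0), 6))  # 0.5
print(round(reward_to_volatility(10.0, 3.0, 1.4), 6))    # 5.0
```

Same numerator, different denominator: whichever terms we prefer, the math makes plain that the two measures answer subtly different questions about risk.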

I would argue that most individuals in our industry see standard deviation as a measure of volatility - to show how VOLATILE the market is, for example. Not to see how values vary from day to day. Is there really much of a difference? It's unclear to me.

Friday, June 19, 2009

Moneyball & performance measurement

As often happens when I read, I stumble upon quotes which I will want to employ in my speaking and writing. Here are just a few from Michael Lewis' Moneyball, along with commentary:

- The meetings, from their point of view, are all about minimizing risk (p. 27): Here, the author is speaking about draft meetings. And, the notion of minimizing risk is quite a standard aspect of our world, yes?

- Teach him perspective-that baseball matters but it doesn’t matter too much. Teach him that what matters isn’t whether I strike out. What matters is that I behave impeccably when I compete. The guy believes in his talent. (53): Actually, this quote has nothing to do with our field, but is much broader...one of perspective...that we should avoid taking things too seriously. Being reminded of this isn't so bad, right?

- When we state it that way, it becomes, or should become, crystal clear that the most important isolated (one-dimensional) offensive statistic is the on-base percentage. (58): Here we learn one of the key "ah-ha's" of the analysis of baseball statistics: that the wrong ones are often being used. It should make us reflect on what we do and whether we're doing it right (such as the overuse of time-weighting and standard deviation).

- I didn’t care about the statistics in anything else. I didn’t, and don’t pay attention to statistics on the stock market, the weather, the crime rate, the gross national product, the circulation of magazines, the ebb and flow of literacy among football fans and how many people are going to starve to death before the year 2050 if I don’t start adopting them for $3.69 a month; just baseball. Now why is that? It is because baseball statistics, unlike the statistics in any other area, have acquired the powers of language. (Bill James, 1985; Baseball Abstract) (64): being focused isn't such a bad thing.

- What he writes may be good, but why he writes is something you particularly want to hear more about (64): Something I can relate to, from a personal standpoint. Can't say why it is, but I do enjoy writing.

- The statistics were not merely inadequate; they lied. (67): This one makes me think primarily of the use of standard deviation, which, given the non-normal state of returns, will result in inadequate and false information.

- The meaning of these performances depended on the clarity of the statistics that measured them (68)

I'll share more in the future. But, for now I hope that I'm motivating you to pick up a copy.

Thursday, June 18, 2009

Effective reporting and perspective

One area that often comes up in discussions is client reporting. There are no standards on this topic, although some guidance has been offered.

Reporting is made complex for a few reasons. First, at times the client dictates what they want, in which case the manager's job is made a bit "easier" because they don't have to design the reports, they just make sure they report what the client has asked for ("easier" is obviously used cautiously here, as such reporting can be quite complex, especially when supporting many firms with many unique needs).

As for those clients that don't specify their requirements, firms often wish to tailor reports to each type of client. For example, many retail investors would have no interest in seeing attribution reports, as they would only be confused by them.

We have been involved in a few report design projects and advocate heightened sensitivity to the way information is provided, as there are various perspectives from which the information can be viewed.

We met with a custodian some time ago, for example, who discussed their reporting facility. Their reports, however, are from a single perspective: the manager's. That is, they tell their clients (e.g., pension funds) how the fund's managers have done. This is quite typical. The returns are time-weighted and the information is shown on a portfolio-by-portfolio basis. But what about reporting from the client's perspective? That is, telling the client how they are doing? You are no doubt aware that one can have a positive time-weighted return but lose money. Clients want to know if they are doing well or not, which is where money-weighting comes in.
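A quick hypothetical makes the point that a positive time-weighted return can coexist with a dollar loss. The numbers below are invented purely for illustration:

```python
# A portfolio starts small, earns +50%, receives a large client deposit,
# then loses 20%. The time-weighted return (which by design neutralizes
# cash-flow timing) is positive, yet the client ends with less money
# than was put in.

def time_weighted_return(subperiod_returns):
    """Geometrically link subperiod returns (the TWR convention)."""
    growth = 1.0
    for r in subperiod_returns:
        growth *= 1.0 + r
    return growth - 1.0

start, deposit = 100.0, 1000.0
r1, r2 = 0.50, -0.20                      # +50% before the deposit, -20% after
ending = (start * (1 + r1) + deposit) * (1 + r2)

twr = time_weighted_return([r1, r2])
dollar_gain = ending - (start + deposit)

print(round(twr, 4))          # 0.2 -> a +20% time-weighted return...
print(round(dollar_gain, 2))  # -180.0 -> ...yet the client is down $180
```

A money-weighted return (e.g., an internal rate of return) over the same history would be negative, which is why it better answers the client's "how am I doing?" question.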

If the reporting is coming from a consultant, then an added twist might be to show how the consultant has done in their recommendations (allocations and manager selections).

Perspective is too often overlooked when developing reports. Since multiple parties can influence the ultimate investment results (custodian, client, manager, fund-of-fund manager), reports should be sensitive to which questions they're answering and should present the answers in the best way possible.

Wednesday, June 17, 2009

In memoriam


I just learned that Peter Bernstein passed away earlier this month. As you probably know, Peter was very well regarded in our industry. Not only was he a highly successful money manager, he was an author and long-term editor of the Journal of Portfolio Management. I read his book on risk several years ago and had the opportunity to hear him speak at an NYSSA event, where I got him to sign my copy (I read other books he wrote, but this was the only one I got signed by him). Peter never retired, which arguably is a great example as I happen to believe that God didn't put us here to sit around on our butt but rather to contribute, and contribute Peter did.

I'm working on a paper for a class I'm taking, and one of the articles I'm referencing is one Peter wrote with Rob Arnott. Peter was an excellent writer. And a great speaker. He lived a full life (he was 90) and will no doubt be sorely missed by his wife, his family, his friends, and our industry. Our industry is definitely much better off to have had the likes of Peter grace us for so many productive and rewarding years.

Independence and why it's important

The GIPS(R) standards include a provision that deals exclusively with the notion of "verifier independence." The standards recognize that in order to perform an effective, independent, and objective verification, the verifier must be independent of the firm it examines. But "independence," at times, seems a bit like "pornography," from a Potter Stewart perspective (to paraphrase the former U.S. Supreme Court jurist's opinion: I may not be able to define it, but I know it when I see it).

In reading Michael Lewis' Moneyball I came across an example of a violation of independence: it occurred when the Major League Baseball Commissioner (and, at the time, owner of the Milwaukee Brewers) wanted to demonstrate how "rich teams" made it impossible for "poor teams" (such as, coincidentally, the Brewers) to compete effectively. (The fact that the Oakland A's, the poor team the book is about, managed to do so wasn't of much interest to the commish.)

The GIPS guidance statement on independence attempts to clarify this topic, but sometimes not to the degree we might hope for. We recently ran into a couple of cases which, to us, seemed to cross the line. The first has to do with verifiers providing "templates" to their clients: templates for policies and procedures. As one colleague put it, it's a "very slippery slope." Essentially, at what point will the verifier be verifying their own work? (Verifying one's own work, as you might guess, is prohibited and a sign of a conflict.) While this service seems to be one that many firms wishing to be GIPS compliant would find highly desirable, it is problematic. We understand that many verifiers offer such a service, but we are concerned with its impact on independence and objectivity. (In our verification practice we don't provide templates but, to be completely truthful, have considered doing so because of the competitive disadvantage we find ourselves at, at times. If we do provide them, they will have to be basic enough to avoid such an independence conflict, meaning they might not have much value. This slippery slope is one that causes us concern.)

The second deals with verification firms that provide software to their clients that assist the client in placing accounts into composites. This, to us, is even more egregious. And while (again) such a service might be desirable on the part of the client, how can the verifier truly claim independence when the firm used software from the verifier to assist them? To offer such a "value added service" has appeal, which may in fact put the verifier at a strategic advantage over competitors that don't provide such software; but it smacks of a violation which should cause both parties to avoid such a practice.

Independence is to be determined jointly by the verifier and their client (i.e., without any oversight). Both should remember that the spirit of the standards should be kept in mind and that this is an ethical issue; they should try to be as objective as possible when considering these matters.

Tuesday, June 16, 2009

Mea culpa penalty

I just learned that David Letterman made what has come to be thought of as an offensive joke a week ago, directed at Alaska Governor Sarah Palin's daughters. Since I don't watch late night talk shows, I was unaware of this event until this morning. Letterman recognized his error and offered an apology to Gov. Palin and her family (http://www.foxnews.com/story/0,2933,526525,00.html).

This is a bit of a coincidence since I have just come to realize that I may have offended some of the attendees at last month's Performance Measurement, Attribution & Risk (PMAR) conference when I referred to Canada as the "51st state." This sarcastic remark was more of an acknowledgement of ignorant Americans (not that any were in attendance) who unfortunately don't (a) know geography or (b) appreciate our friend and neighbor to the north. As with Letterman's efforts, mine was intended to be funny, not offensive. I respect our long-term ally and largest trading partner a great deal, and in no way would ever knowingly wish to offend any of its citizens. And therefore I will send out e-mails later today to the several Canadians who attended our event, offering an apology.

Well, all of this, in turn, made me think of the change which goes into effect this coming January, of which I recently spoke, which mandates that firms that make corrections to GIPS(r) presentations because of material errors must confess their sins for an entire year on their presentations, even though they would have been expected to communicate their error to any party who had previously received the erroneous material (talk about a run-on sentence!). (This requirement is analogous to saying "I know that you probably didn't know this, but during the past year I made an error, and even though it didn't affect you and it's been corrected, I wanted to let you know that I made a mistake.")

Imagine if the same rule applied to talk show hosts and conference speakers, in which case at the beginning of every show for a year Letterman would have to apologize for his errors. And I guess I'd have to seek a mea culpa at least every time I met a Canadian. A bit overboard, yes?

Monday, June 15, 2009

January 2010...are you ready?

If you have a copy of the current version of the GIPS(r) standards, you can fairly easily identify most of the planned changes for this coming January. They include the requirement to revalue portfolios whenever large cash flows occur and the need to manage cash separately for any carved-out portion of a portfolio. What you won't find is the requirement to disclose any material changes to a presentation for a minimum of one year.

This requirement was introduced in the revised Error & Correction Guidance Statement, which was approved by the GIPS Executive Committee last year. I have commented on this in our newsletter (http://www.spauldinggrp.com/services/resource-center/91-newsletters-pamphlets-a-white-papers.html), but wanted to do so here, too, because of its importance and how easily it can be overlooked.

Putting aside my disagreements with (a) requirements being introduced in guidance statements and (b) this requirement being introduced without public comment, firms must be aware of it in order to implement it. Essentially, the requirement is that if you discover a "material" error which requires you to amend a presentation, you must disclose this in the presentation (i.e., when you make a correction to a presentation, you must disclose that this was done).

I heartily object to this requirement and am hoping it will be removed (if you agree, you need to communicate this to the EC in your response to the GIPS 2010 disclosure draft). But in case it isn't removed, then you need to be prepared. Your P&P (policies & procedures) should reflect this requirement.

Friday, June 12, 2009

Understanding the intent

Patrick Fowler and I are still in Stockholm, having just finished our semi-annual European Performance Measurement Forum meeting, which was deemed quite a success, with much interaction and sharing.

As a result of some discussion I have come to realize that perhaps there's a difference between what is written and what is meant. Unfortunately, a road map might be needed to figure this all out.

Take, for example, the suggested change to not permit a firm to give a GIPS(R) presentation to any prospect who is below the minimum. Someone suggested that you can give it to the prospect if they ask for it. What? Where is that stated? The rule is clearly written that you CANNOT give a presentation to someone below your firm's minimum. If the intent was that this applies unless it's asked for, that might be different (although I'd still object to this change, which is misleading as written in the disclosure draft, because it suggests that it's currently a recommendation when it isn't!), but that isn't what is written.

Also, what about proprietary assets? The definition includes funds of the firm, owners, and senior management, suggesting that if a senior manager is invested in a mutual fund, they're (a) proprietary and (b) need to be disclosed. We hear that the intent was to focus on seeded assets. Well, if that was what was meant, why wasn't that written? Again, are we to discern this from what we read? Again, in my opinion I'd STILL object to this proposed change but at least the intent would have been clearer.

My criticism shouldn't be misunderstood: the standards are quite complex, and I understand that sometimes our intent isn't clear from what we write (sometimes even my writing is misinterpreted, so I can be accused of being the pot calling the kettle black, so to speak). But I would hope that further clarification will be given so that the reader understands what is intended. Again, even if these interpretations are valid, my opinion wouldn't be altered, as I object to both of these proposed changes.

If you haven't yet read the disclosure document but plan to comment, you're running out of time. It's important: please comment!

Thursday, June 11, 2009

What happens if you use the wrong stats?

Continuing our discussion of Michael Lewis' Moneyball, I think there's a HUGE parallel between baseball statistics and what we do in investment performance measurement. Both deal with measuring performance: the performance of baseball players / the performance of money managers.

Lewis points out that for the first 150 years of baseball, the wrong statistics were used to evaluate the performance of players. As a result, the wrong players were often chosen and rewarded, and teams didn't do as well as they had expected, given the "talent" they selected.

While investment performance's history is much shorter than baseball's (roughly 40 years), we have had the same challenges because we quickly adopted certain measures (e.g., standard deviation for risk, time-weighting for performance) which arguably are WRONG much of the time! And so, what's the consequence? Misleading information, misinterpretation of results, mis-allocation of resources.

Not everyone in baseball has "signed on" to the new statistics, although those who haven't will continue to suffer. The same can be said in our industry, where most firms have yet to see the wisdom of alternative measures. We should be glad that it hasn't taken us 150 years to figure out the error of our ways.

Wednesday, June 10, 2009

Moneyball & performance measurement

This morning Patrick Fowler and I arrived in beautiful (though wet) Stockholm, Sweden for the Spring meeting of the European chapter of the Performance Measurement Forum. This site marks the fourth Scandinavian country we've held meetings in (previous ones were in Norway, Denmark, and Finland). I brought along a variety of books to read including Michael Lewis' bestseller, Moneyball. I had read Liar's Poker, which I enjoyed immensely, as well as his most recent offering, Panic: The Story of Modern Financial Insanity. Moneyball had been recommended by several colleagues, and I finally decided it was time to read it. Little did I know that it would provide me with references to exploit in my writing and teaching.

Early in the book Lewis describes the baseball player draft and explains that planning meetings are "all about minimizing risk," (page 27) a topic near and dear to many of us in the investment world. But there's much more to borrow from this text.

What the book is essentially about is a totally new way to evaluate the performance of baseball players, from hitters to fielders to pitchers. Rather than rely on the traditional statistics (such as batting average), alternative ones are proposed (such as on-base percentage). He suggests that you shouldn't "believe a thing is true just because some famous baseball player says it's true." (page 98) This made me think of our long-standing affinity for calculations such as time-weighted return and standard deviation. Those of us who have recognized the superiority of money-weighted returns have had, at times, a difficult time convincing others, because of the universal acceptance of time-weighting, though we are making progress. He cites one baseball statistician who opined that "the world needs another offensive rating system like Custer needed more Indians." (page 80) I'm sure that this is the reaction many have to some of the new ideas that have been proposed. But at the end of the day one should ask "is the new measure better?"

Not surprisingly, Lewis provides some history on the origin of baseball statistics and credits Henry Chadwick, who first developed some in the mid 19th century. "Chadwick was better at popularizing baseball statistics than he was at thinking through their meaning." (page 70) I will allow you to reflect on individuals in the world of performance measurement who might also be worthy of such a characterization. And while Lewis' suggestion that Chadwick "created the greatest accounting scandal in professional sports" (page 71) might seem a bit strong, I suspect that this statement, too, might apply to some in our industry. (If you're unconvinced about this, I suggest you read Taleb's The Black Swan.)

Lewis points out that "there had been fitful efforts to rethink old prejudices" (page 71), which clearly applies to our industry as well. The old ways die hard. But, those of us who have seen the light will continue to press on, even if we have to turn to baseball for help!

Hopefully THIS forum will further foster the advancement of new ideas and approaches.

A new author & a new book!

Congratulations to our friend, David Cariño of Russell Investments, who (along with coauthors Jon Christopherson & Wayne Ferson) just released a new book: Portfolio Performance Measurement and Benchmarking (McGraw-Hill).

David is a member of the Journal of Performance Measurement's advisory board and is well known for the model he developed to link subperiod attribution results.

I obtained my copy last week and have just begun to review it. It covers a great deal of material and arguably should be on any performance measurement professional's bookshelf! We wish David and his coauthors much success with this new addition to the growing list of performance measurement books.

Tuesday, June 9, 2009

Normality of returns...are they?

As I make progress with an upcoming article I'm writing on standard deviation, I will occasionally share some of the information I discover. For my data I'm using the monthly returns of the S&P 500 for the 36-year period ending December 2008. I am first calculating the results for the 36-month period ending that date, to conform with what is proposed for GIPS 2010. I will also evaluate the longer period and probably a few other 36-month periods within this time frame.
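For reference, the proposed statistic can be sketched in a few lines (my own sketch, with made-up returns): GIPS 2010 proposes a 3-year annualized ex-post standard deviation, which under the usual convention means the sample standard deviation of the trailing 36 monthly returns scaled by the square root of 12.

```python
import math

# Hedged sketch of the proposed GIPS 2010 risk statistic: the sample
# standard deviation of the most recent 36 monthly returns, annualized
# by the conventional sqrt(12) scaling. The return series is made up.

def annualized_std_36(monthly_returns):
    """Annualized standard deviation of the trailing 36 monthly returns."""
    window = monthly_returns[-36:]
    n = len(window)
    if n < 36:
        raise ValueError("need at least 36 monthly returns")
    mean = sum(window) / n
    variance = sum((r - mean) ** 2 for r in window) / (n - 1)  # sample variance
    return math.sqrt(variance) * math.sqrt(12)                  # monthly -> annual

# Example with synthetic data: alternating +2% / -1% months.
returns = [0.02, -0.01] * 18
print(f"{annualized_std_36(returns):.2%}")
```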

I haven't tested completely for normality yet, but I did discover something of interest. For the 36-year period (432 monthly returns), a normal distribution would place about 2.5% of observations, roughly 11 returns, beyond 1.96 standard deviations in each tail (about 22 outside the band in total). I found 25 outside the band, with most (15) at the low end, well above the 11 a normal distribution would predict there. Other authors (e.g., Anson, Mark (2002), The Handbook of Alternative Assets (Wiley)) have found similar results.
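To make the check concrete, here's a minimal sketch (my own, not the article's code) that counts tail exceedances in any return series and compares them with what normality implies.

```python
import math

# A minimal check: count how many returns fall outside +/- 1.96 sample
# standard deviations and compare with the roughly 5% (2.5% per tail)
# that a normal distribution would imply.

def tail_exceedances(returns, z=1.96):
    n = len(returns)
    mean = sum(returns) / n
    sd = math.sqrt(sum((r - mean) ** 2 for r in returns) / (n - 1))
    low = sum(1 for r in returns if r < mean - z * sd)
    high = sum(1 for r in returns if r > mean + z * sd)
    expected = 0.05 * n  # normal distribution: ~5% beyond +/- 1.96 sd
    return low, high, expected

# With a made-up series of 100 returns, two of them extreme:
low, high, expected = tail_exceedances([0.0] * 98 + [0.10, -0.10])
print(low, high, expected)
```

Run against an actual return series (e.g., the S&P 500 monthly data), the comparison of the observed counts against `expected` is exactly the test described above.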

One reasonable question that might arise is "does it matter?" Well, if your analysis is based on a flawed assumption, one would suspect that your conclusions might also be flawed, yes? There have been studies showing that in some circumstances the absence of a normal distribution isn't a problem. But I like to hearken back to a standard line IBM used when someone did something other than what was specifically called for: "unpredictable results may occur." Bottom line: we just don't know. In some cases there may not be a problem, but in others there will be. Over the 36-year period I'm reviewing, the low tail alone holds more outliers than a normal distribution predicts, so shouldn't we expect our normality assumption to lead to "unpredictable results," which might in turn call into question our conclusions?

Standard deviation remains the most criticized risk measure, and probably with much justification. We like things that are easily understood: standard deviation fits this bill. However, we also like things that work properly...unfortunately, it's unclear that we can say this holds.

Monday, June 8, 2009

If you don't vote, don't complain

One thing we've learned is that those who don't take the time to vote shouldn't complain about the results. The same applies to the proposed changes to the GIPS® standards. With less than a month before the opportunity disappears, only a few have submitted their comments. Since I delayed mine until last week, and know of others who are putting the finishing touches on their letters, we can expect this list to grow. But, in case you haven't yet taken any action, you're running out of time!

And, if you haven't read through what's proposed, perhaps a good use of your time would be to participate in our upcoming GIPS 2010 Webinar. On June 16, we will hold our second briefing on the major changes that have been proposed. This session will be immediately followed by a review of the results of our recent GIPS survey. The following day we will hold a session on GIPS verification. Both webinars are scheduled for 11:00 AM - 1:00 PM, Eastern Time (U.S.). Both also provide time for Q&A. [Oh, and in case you can't make this time slot, you can always obtain a copy of the audio from the call!]

The fee for these events is quite low in order to encourage participation. The cost is per dial-in line, not per participant. This way, you can assemble a group of folks in your conference room to listen together for a very low price, and perhaps discuss your thoughts among yourselves after the call. And, we've made the cost for this month's webinars even lower because we're offering a "twofer" (two, for the price of one). PLUS, if you're a CIPM certificate holder, the price is even lower! To learn more and to sign up for this event, please contact Patrick Fowler (PFowler@SpauldingGrp.com); 732-873-5700.

Saturday, June 6, 2009

Derivatives ... are they really so horrible?

You've probably noticed the continued complaints coming out of Washington, DC regarding derivatives and how they obviously contributed to our current market condition. I recall reading how one elected official said that he didn't even know what a derivative was! Wow, how complex ARE they? While I might understand him saying he didn't understand how a credit default swap worked or what some of the more esoteric derivatives were used for, to make the statement at the most general level ("derivatives") suggests that the man is either ignorant or simply playing to the cameras (or perhaps bashing Wall Street, a popular game of late).

This morning I began reading a book on stochastic calculus (don't be too impressed ... I am reading the book 'cause I really don't understand the subject at all and need to; it's not for pleasure) and came across some comments regarding risk. One should realize that in many (if not most cases), derivatives are used to offset risk; granted, they can be used for speculation and other purposes, but risk control is often a chief reason for their development and use. This would seem to be a good idea, yes?

"A principal function of a nation's financial institutions is to act as a risk-reducing intermediary among customers engaged in production. For example, the insurance industry pools premiums of many customers and must pay off only the few who actually incur losses. But risk arises in situations for which pooled-premium insurance is unavailable." (Shreve, Steven E. Stochastic Calculus for Finance I. 2000) Okay, and so what do we do in these cases?

Take, for example, a hedge against higher fuel costs. Airlines, for example, would want a security whose value would rise when oil prices rise. But who would want to sell such a security? "The role of financial institutions is to design such a security, determine a 'fair' price for it, and sell it to airlines." (Shreve, 2000)
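Shreve's airline example can be sketched with a toy payoff calculation (all numbers are hypothetical, and the option premium is ignored for simplicity): a security that pays off when oil rises above a strike, in effect a call option, caps the airline's fuel bill.

```python
# A toy illustration of the airline example (hypothetical numbers, premium
# ignored): a call-like hedge offsets the airline's higher fuel costs once
# oil rises above the strike.

def call_payoff(price, strike):
    """Payoff of a call-like hedge: gains when the price exceeds the strike."""
    return max(price - strike, 0.0)

def fuel_cost(price, barrels):
    return price * barrels

strike = 70.0    # hypothetical strike, $/barrel
barrels = 1000   # hypothetical fuel need

for oil in (60.0, 70.0, 85.0):
    unhedged = fuel_cost(oil, barrels)
    hedged = unhedged - barrels * call_payoff(oil, strike)
    print(f"oil ${oil:>5.2f}: unhedged ${unhedged:,.0f}, hedged ${hedged:,.0f}")
```

When oil stays at or below $70, the hedge pays nothing and costs are unchanged; above $70, the payoff exactly offsets the extra fuel expense, capping the bill at $70,000. Risk reduction, not speculation.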

Let us not become fearful of things we don't understand.

Friday, June 5, 2009

Standard deviation...not so quick

Earlier this week I mentioned that I have finally sent my comments in re. GIPS 2010. Well, I find myself slightly modifying my view about the planned requirement for a 3-year annualized standard deviation ... okay, maybe more than slightly.

I'm working on a research article regarding this very topic (annualized standard deviation), as I contend that it is a misleading number which is difficult (impossible?) to interpret. But, lacking empirical evidence to support this view, I was at a loss to criticize the proposal. Well, one of the articles I am using for my paper is very much opposed to standard deviation: Brett Wander & Ron D'Vari, "The Limitations of Standard Deviation as a Measure of Bond Portfolio Risk." The Journal of Wealth Management. Winter 2003.

Don't be misled by the title: the authors' criticism of standard deviation goes beyond merely its use with bond portfolios, although the specific issues with bonds provide additional concerns about the measure. The breadth of criticisms about standard deviation should cause us to wonder if this measure deserves to be "the chosen one" for the very important GIPS® standards. As a colleague recently explained, by requiring the use of standard deviation the standards are implying that it's the best risk measure, a suggestion many would challenge.

You're no doubt familiar with many of the criticisms of standard deviation: it requires a significant number of data points, assumes a normal distribution, treats above-average returns the same as below-average ones, and so on. These authors bring up other issues, such as the questionable statistical significance of the measure, stating that even “five years of monthly data will only provide marginal statistical significance.”
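The "marginal statistical significance" point can be illustrated with the standard normal-theory approximation for the standard error of a sample standard deviation, roughly s divided by the square root of 2(n-1). This is my own sketch, using a hypothetical 15% annualized volatility, not the authors' calculation.

```python
import math

# A rough illustration (normal-theory approximation): the standard error
# of a sample standard deviation s estimated from n points is about
# s / sqrt(2(n-1)), so even 60 monthly observations leave a wide band.

def std_dev_std_error(s, n):
    """Approximate standard error of a sample standard deviation s from n points."""
    return s / math.sqrt(2 * (n - 1))

s = 0.15  # a hypothetical 15% annualized standard deviation
for n in (36, 60, 120):
    se = std_dev_std_error(s, n)
    lo, hi = s - 1.96 * se, s + 1.96 * se
    print(f"n={n:4d}: 15.0% +/- {1.96 * se:.1%} (95% CI roughly {lo:.1%} to {hi:.1%})")
```

With only 36 monthly observations (the GIPS 2010 proposal), the 95% band on a "15%" standard deviation spans roughly 11.5% to 18.5%, wide enough to question how comparable two managers' reported figures really are.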

In spite of the measure's shortcomings, given that it's commonly used (as per our firm’s research), easily obtained, and generally understood, there probably was no harm in employing it. Well, my research will challenge the last point and the authors question its overall validity as a measure.

So, what are we to do? Perhaps the standards should simply require a measure of risk, just as they require a measure of dispersion. The likely response would be “well, how can a consumer compare two managers if they use two different risk measures?” The simple answer could be “that’s not our problem”; the same challenge exists with dispersion (one manager might use high-low while another uses standard deviation). Prospects can, of course, require the managers to report risk using the same measure, but that would be between the prospect and the contenders for its business.

Bottom line: finding an appropriate risk measure is a difficult issue, and there is no simple solution. While standard deviation may appear to be the answer, it’s fraught with inherent challenges that should cause us to pause before rushing forward. I remain somewhat ambivalent on this matter, so expect more to follow.

By the way, if you visit the GIPS website (www.gipsstandards.org/news/releases/2009/view_comments.html) you’ll find only 14 comment letters so far. Don’t wait too long to get yours in...it WILL count!

Thursday, June 4, 2009

Attribution of balanced portfolios

I was recently sent a note about the issue of calculating attribution on balanced portfolios. There seems to be a lot of confusion about something that really isn't that difficult.

A balanced portfolio consists of two or more asset classes. For our example we'll take the simple case of a portfolio with equities and fixed income. And so, how do we calculate attribution? We should begin by thinking of what questions we want to answer.

At the highest level, we want to know how the allocation decision worked; that is, how the allocation of the portfolio's assets across the two asset classes performed. We would expect to have a strategy or benchmark against which to compare our performance. If, for example, our strategy calls for 60% equities and 40% bonds, but we decided to underweight equities (at 50%) and overweight bonds (at 50%), we want to determine whether these decisions were good ones. How do we accomplish this? We can easily employ one of the "Brinson" models (either Brinson, Hood, Beebower (BHB) or Brinson, Fachler (BF)): the allocation effect tells us whether these weight adjustments paid off. If we selected the managers of the equity and fixed income portions ourselves (e.g., in a fund-of-funds situation), then the selection effect will tell us whether our choices were good ones.
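Here's a minimal sketch of the Brinson-Fachler allocation effect using the 60/40-versus-50/50 example above. The segment benchmark returns are made up; only the weights come from the example.

```python
# A minimal Brinson-Fachler sketch. Allocation effect per segment:
# (portfolio weight - benchmark weight) * (segment benchmark return
#  - total benchmark return). Benchmark returns are hypothetical.

def bf_allocation(pw, bw, br):
    """Brinson-Fachler allocation effects: (wp - wb) * (rb_segment - rb_total)."""
    total_bench = sum(w * r for w, r in zip(bw, br))
    return [(wp - wbi) * (r - total_bench) for wp, wbi, r in zip(pw, bw, br)]

portfolio_weights = [0.50, 0.50]   # actual: underweight equities, overweight bonds
benchmark_weights = [0.60, 0.40]   # strategy: 60% equities / 40% bonds
benchmark_returns = [0.08, 0.03]   # hypothetical segment benchmark returns

effects = bf_allocation(portfolio_weights, benchmark_weights, benchmark_returns)
for name, e in zip(("equities", "bonds"), effects):
    print(f"allocation effect, {name}: {e:+.2%}")
```

In this hypothetical, both effects are negative (-0.20% and -0.30%): shifting away from the 60/40 strategy moved assets out of the better-performing class, so the allocation decision hurt.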

We now move down to the asset class level, where we calculate attribution using models appropriate for the respective asset classes (e.g., for equities a Brinson-type model will be fine; for fixed income, we would use a fixed income model).

I think a lot of people believe that attribution for balanced portfolios is somehow more complex, perhaps because they think one must assess everything simultaneously, using a single model. That isn't the case; such an approach would only generate misleading information. One must always be sensitive to what the questions are and which asset classes we're dealing with.

Wednesday, June 3, 2009

GIPS 2010 ... my comments have FINALLY been sent

I've been commenting about the proposed changes to the GIPS standards for several months through our newsletter (http://www.spauldinggrp.com/services/resource-center/91-newsletters-pamphlets-a-white-papers.html). But finding the time to review the entire document has been a challenge. Well, I got motivated this week and got it done.

If you're suffering from insomnia and need a sleep aid, consider reading what I had to write. You can find the letter in our firm's resource center: http://www.spauldinggrp.com/services/resource-center/93-tips.html.

Oh, and please take the time to comment yourself. You have less than a month to let the GIPS Executive Committee know what you think. To obtain a copy of the exposure draft, see what others have written, and forward your copy onto the EC, visit http://www.gipsstandards.org/news/releases/2009/gips_2010_exposure_draft_open_for_public_comment.html.

Performance Analysis & Major League Baseball

I like the ESPN website (www.ESPN.com) as it provides great information about a variety of sports. Visiting it earlier today I was directed to a story about a player for the Philadelphia Phillies:

BP Daily: What's wrong with Rollins?
The Phillies star can pull out of his statistical slide. Here's how.

By Marc Normandin and John Perrotto
Baseball Prospectus

Many people thought Jimmy Rollins' 2008 season was disappointing. Little did they know he was capable of falling even further from the production levels he had set during his peak years. Now Rollins is hitting all of .232/.276/.345 this year, a far cry from last season's "disappointing" .277/.349/.437. What exactly has caused the 30-year-old Rollins to experience such a massive dip in production two years in a row?

Performance Analysis

As you can see, it has a button for "Performance Analysis." In our Introduction to Performance Measurement classes I often comment how much of what we do can be found in other industries. This was the first time I saw such a clear connection from the world of sports, however.

Performance analysis is important anywhere performance counts. Granted, our industry may have taken it to levels that other industries can't possibly dream of, but it's still not unique.

By the way, the site address for the story is http://insider.espn.go.com/mlb/insider/news/story?id=4221330&action=login&appRedirect=http%3a%2f%2finsider.espn.go.com%2fmlb%2finsider%2fnews%2fstory%3fid%3d4221330

January 1, 2010 isn't far away (large cash flows)

While many GIPS-compliant firms are preparing for January 2011, they also need to be aware of what's in store for them in just seven months.

Most firms are aware of the new requirement to revalue portfolios for large cash flows (Para. 2.A.2.b.). But, have they looked at the broader picture? Let's take the case of a wrap fee manager who relies upon their program sponsors for the returns they use for GIPS purposes. Have they validated that these entities have adopted a calculation methodology that will comply with these new requirements? If not, there's not much time to get this addressed.
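What the revaluation requirement entails can be sketched as follows (my own illustration, with hypothetical numbers): value the portfolio at each large external flow, compute each subperiod's return, and link the subperiods geometrically.

```python
# A sketch (hypothetical numbers) of revaluing for large cash flows:
# value the portfolio at the time of each large external flow, compute
# each subperiod's return, and link them geometrically.

def linked_return(valuations, flows):
    """True time-weighted return from valuations taken at each large flow.

    valuations: portfolio values V0, V1, ..., Vn, where Vi is measured
    just before flow i arrives; flows[i] is the external flow received
    right after valuation i (flows[0] and flows[-1] are typically 0).
    """
    r = 1.0
    for i in range(1, len(valuations)):
        start = valuations[i - 1] + flows[i - 1]  # value right after the flow
        r *= valuations[i] / start
    return r - 1.0

# One month: start at 1,000; a large 500 contribution arrives mid-month
# when the portfolio is worth 1,050; month-end value is 1,600.
twr = linked_return([1000.0, 1050.0, 1600.0], [0.0, 500.0, 0.0])
print(f"{twr:.2%}")  # 8.39%
```

Without the mid-month revaluation, an approximation such as Modified Dietz would blend the two subperiods and misstate the result; revaluing at the flow isolates each subperiod cleanly, which is the point of the requirement.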

The wrap fee provisions of the standards allow managers either to use their own returns or to rely on the sponsor's, provided the sponsor adheres to the standards. The large cash flow requirement is one case where managers should verify this is actually being done.

If you're involved with wrap fee programs and rely on your sponsors, confirm that the returns will meet these requirements...the sooner, the better!

Tuesday, June 2, 2009

Getting started

Greetings!

We've been discussing the idea of having a blog for some time. Granted, I already have a monthly newsletter (Performance Perspectives: http://www.spauldinggrp.com/services/resource-center/91-newsletters-pamphlets-a-white-papers.html), but given the dynamics of our industry we wanted a facility for me to comment on a more frequent basis...thus this blog!

Since blogging is new to me, you'll have to bear with me as we move forward. I'm sure we'll have some "kinks" to work out along the way. But be assured you will always receive my candid views, opinions, ideas, and insights, which we hope you'll find of value. And, as always, we appreciate your input.

I will comment on what's going on with performance measurement in general, the GIPS Standards, risk measurement, attribution, and lots more. I hope you find what I have to say of value.

Best wishes,
Dave