Friday, June 28, 2013

GIPS standards changes via the Handbook

We are discovering new changes to GIPS(R) (Global Investment Performance Standards) that have been introduced in the most unusual ways: not as part of the Standards themselves, but simply as Q&As or notes. How are we to learn of these? I guess by reading the entire book (sorry, but I fall asleep when I try) and monitoring the Q&As on a regular basis.

And so, what we will do is make you aware, most likely through The Spaulding Group's Newsletter, and occasionally through this blog, of changes that we encounter or that are reported to us. We'll begin with the July edition of the Newsletter, since the June edition will be published shortly. If you find any changes, please alert us, and we'll make sure they get posted.

The latest change we found (specifically, Jed Schneider discovered it) was the introduction of "sunset rules."  You may recall that I suggested that these be incorporated into GIPS 2015 (which won't happen, of course). But, someone decided to slip them in. Who that may be is unknown; all we know is they're there. For example:

While I am pleased to see them being introduced, why wasn't the public given the opportunity to review and comment? That was the old way: changes to the Standards are put out for public comment. But apparently that's no longer the case.

What is the basis for this? Is this just one person's idea, the work of a committee, or what? This is CLEARLY a change; but instead of being IN the Standards, it's in a note, which you have to go looking for.

And so, if you discover items like this, please just send me an email, and I'll make sure they get published so folks can learn about them.

p.s., a colleague who is "in the know" pointed out that there was actually an earlier Q&A that addressed this subject:

How long does a composite name change disclosure have to be presented?

The disclosure must be included in the composite’s compliant presentation for as long as it is relevant to support the performance presentation. The disclosure must be included for a minimum of one year and potentially for more than one year if the firm determines the disclosure is still relevant and meaningful. The firm must consider the underlying principles of the Standards of full and fair disclosure when determining a course of action.

  • Categories: Disclosures
  • Date Added: January 2008
  • Source: The GIPS Standards 2005 edition

I was unaware of this item and would have reacted in a similar way: where in the Standards is it permitted for a firm to determine "relevance"? That's a totally new concept to me: relevance! I know we speak of "materiality," but "relevance"? And so, can we extend this to all items? That is, an item is required ONLY IF IT'S RELEVANT! The U.S. Supreme Court (as well as courts from many nations) uses precedents to determine what action to take or allow. To me, THIS set a precedent; don't you agree? It clearly gives firms the option to determine "relevance" when it comes to provisions.

Also, since this was published prior to the 2010 edition, why wasn't this wording brought into the Standards themselves? It's clearly a new rule; shouldn't it be more visible?

I'll have more to say on this.

Thursday, June 27, 2013

Get ready for a bump in 5-year return numbers

When I was mayor of the Township of North Brunswick (NJ: 2000-2003), I commented how I could envision the huge potential benefits that would accrue in 2010, when the town's debt obligations would drop significantly (the idea of offering a sizable tax decrease to the residents was exciting to me). Sadly, that didn't occur, as new debt was raised to replace the old (plus, I was long out of office, and had no influence on the budgeting).

Next year many asset managers will, as they've done for years, report their five-year cumulative and annualized rates of return. The good news: 2008 won't be included! For many managers, 2008 was a disaster. But come 2014, the five-year numbers will include 2009, 2010, 2011, 2012, and 2013. The 2008 negative returns will, of course, appear in the 7- and 10-year returns (which many show), but drop off the 5-year statistics.

Many will see HUGE changes to these 5-year numbers. For example, one of our clients had returns in one of their strategies that were approximately 19% (2012), -11% (2011), 30% (2010), 10% (2009), and -46% (2008). The 5-year cumulative return is -18.22%. If this year's return is 0.00%, their 5-year return (with 2013 and without 2008) will jump to 51.45%; a swing (from negative to positive) of almost 70 percentage points!
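To see where these figures come from, here is a minimal sketch of geometric linking in Python, using the approximate annual returns cited above (the 0% figure for 2013 is, as noted, an assumption):

```python
def cumulative(returns):
    """Link annual returns geometrically into a cumulative return."""
    growth = 1.0
    for r in returns:
        growth *= 1.0 + r
    return growth - 1.0

def annualized(returns):
    """Annualize the cumulative return over the number of years."""
    return (1.0 + cumulative(returns)) ** (1.0 / len(returns)) - 1.0

with_2008 = [0.19, -0.11, 0.30, 0.10, -0.46]    # 2012 back through 2008
without_2008 = [0.00, 0.19, -0.11, 0.30, 0.10]  # 2013 (assumed 0%) back through 2009

print(round(cumulative(with_2008) * 100, 2))     # -18.22
print(round(cumulative(without_2008) * 100, 2))  # 51.45
```

The entire swing comes from one year rolling out of the window; nothing about the manager's skill has changed.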

What impact will this have on investors? Will investments in riskier assets grow, as ex ante measures forget about that prior history, and only look at the most recent and more buoyant times?

We're already witnessing an increased appetite for risk; in some cases, it's because individuals and institutions want to regain some of the money they lost; in other cases, it's probably the sheer desire to see the return of sizable double-digit returns. Regardless, seeing the hugely revised 5-year returns come 2014 will probably increase this interest. It'll be interesting, no doubt.

Wednesday, June 26, 2013

Extending GIPS to asset owners

I wrote an article for Pension & Investments regarding the expansion of the GIPS(R) standards (Global Investment Performance Standards) to plan sponsors. It was recently published and appears in the June 10 edition.

I am very excited that it got published; not just because I enjoy writing and am glad they felt it worthy, but also because it adds further support to compliance, which I think is a good thing.

Who's on zero?

In this month's soon-to-be-published Spaulding Group newsletter, I comment a bit about the decision not to have a 2015 edition of the Global Investment Performance Standards (GIPS(R)), and provide the results (meager as they may be) from a mini (talk about an accurate term!) survey we conducted. I also cite comments from two colleagues, one who remained nameless and the other, who we'll call Carl, because, well, that's his name! (Carl Bacon)

In last month's issue, I commented how one change I'd make would be to renumber the sections; today we have a section zero. Carl acknowledged that this numbering idea was his (recall that he also favors geometric attribution, opposes money-weighting, and insists on driving on the wrong side of the road), and defended it by referencing the presence of "Floor 0" in some hotels (in London, of course; ne'er in the US of A) and the platform added at King's Cross station, which was numbered zero.

I decided that I would comment a bit here, and thought about Abbott & Costello's famous "Who's on First" skit or routine:

The number zero has a very clear meaning which is nothing. That is, it means nothing (now I'm sounding like Abbott!). Of course it has a different meaning when paired with other numbers (e.g., buying a car that sells for $50,000); clearly the zeroes here don't mean "nothing" (well, technically they do, of course, if we were to do the math this way: 0×1 + 0×10 + 0×100 + 0×1,000 + 5×10,000).

There's a simple reason why first base in baseball is called "first base." Because it's first.

If you visit Carl's home (which I understand is quite historic and almost as old as our friend Steve Campisi), I doubt that he would say, upon entering, "this is our floor zero." No; he'll tell you "this is our first floor."

I'm at the Marriott in Stamford, CT this week, conducting (coincidentally) a GIPS verification, and will admit that their first floor is not technically the first floor; they use this (as they do in England) to represent the first floor that rooms are on. BUT, do they call the first floor (that you enter when you walk into the hotel, which is not numbered "1") floor zero? No, it's the "lobby level."

When a baby is born, do we say that they've begun their 0-th year? No, it's the start of their first year. Do children begin school in grade zero? No, in the first grade; which, in many cases, is preceded by Kindergarten and pre-K.

In the U.S. we occasionally see exit numbers changed on highways, which can be confusing to those who have grown used to the old numbers (New Jersey residents are well known for identifying where they live by the nearest NJ Turnpike or Garden State Parkway exit). However, in time we get used to the new numbering, and may forget that anything was different previously.

When a new exit is introduced between two sequentially numbered exits (e.g., 8 and 9), it's common to append a letter to one of the adjacent numbers (e.g., 8A, which is the case on the New Jersey Turnpike). If New Jersey decided to add an exit between the start of the turnpike and exit one, I suspect they'd have to scramble to come up with a number (perhaps they'd make it 1A), but I doubt very strongly that we'd see Exit 0.

The differences between the USA and UK, at times, seem to grow. There are certain things we all know about: the side of the road we drive on, the fact that we have an accent and they don't, the way certain words are spelled (e.g., "color" vs. "colour"), the way certain words are pronounced (often we use a long vowel and they use a short, or vice versa), and the words we use for certain things (my car has a trunk, while Carl's has a boot). The use of the number zero in a multitude of ways is yet another.

The number zero has a fascinating history, and there are books that discuss it (in case you're interested and are looking for a "summer" book to read).

Tuesday, June 25, 2013

Risk-adjusted attribution

Ernie Ankrim's March-April '92 FAJ "Risk-Adjusted Performance Attribution" article is one that I came across in my doctoral research. So far it's the only one I've found that addresses this topic (I reached out to Ernie in the hopes he'll pen one for The Journal of Performance Measurement(R)).

The fundamental question: should attribution be measured against risk-adjusted returns?

Traditional attribution models (e.g., Brinson-Fachler) assume that the portfolio and benchmark have the same risk, and attribute the excess return to three effects (allocation, selection, and interaction). But what if the risks are different? Then the excess return may be partly attributable to the risk difference. Ankrim's idea is that by looking at risk-adjusted returns, we eliminate the risk differences and look solely at the contributions from the manager's decisions.
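For readers unfamiliar with the arithmetic, here is a minimal single-period Brinson-Fachler sketch in Python; the two-sector weights and returns are invented purely for illustration:

```python
def brinson_fachler(wp, wb, rp, rb):
    """Single-period Brinson-Fachler attribution.

    wp, wb: portfolio and benchmark sector weights
    rp, rb: portfolio and benchmark sector returns
    Returns the total allocation, selection, and interaction effects.
    """
    rb_total = sum(w * r for w, r in zip(wb, rb))  # overall benchmark return
    alloc = sum((wpi - wbi) * (rbi - rb_total)
                for wpi, wbi, rbi in zip(wp, wb, rb))
    select = sum(wbi * (rpi - rbi)
                for wbi, rpi, rbi in zip(wb, rp, rb))
    inter = sum((wpi - wbi) * (rpi - rbi)
                for wpi, wbi, rpi, rbi in zip(wp, wb, rp, rb))
    return alloc, select, inter

# Invented two-sector example:
wp, wb = [0.60, 0.40], [0.50, 0.50]
rp, rb = [0.08, 0.02], [0.06, 0.03]
a, s, i = brinson_fachler(wp, wb, rp, rb)

rp_total = sum(w * r for w, r in zip(wp, rp))
rb_total = sum(w * r for w, r in zip(wb, rb))
# The three effects sum to the arithmetic excess return:
assert abs((a + s + i) - (rp_total - rb_total)) < 1e-12
```

Note that nothing in the model says whether the portfolio took more or less risk than the benchmark to earn that excess return, which is precisely the gap Ankrim's approach addresses.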

This article is more than 21 years old. But who does risk-adjusted attribution? Should we rethink our approaches?

In The Spaulding Group's attribution class I sometimes make reference to this idea, and was pleased to see Ernie's article. Perhaps this is a topic worthy of more discussion; what do you think?

p.s., I was reminded that Andrew Kophamel wrote an article titled "Risk Adjusted Performance Attribution: A New Paradigm for Performance Analysis" for The Journal of Performance Measurement(R). I will have to re-read it!

Friday, June 21, 2013

More on "best practice"

The EMEA chapter of the Performance Measurement Forum held its Spring meeting in Brussels this week, and as usual, it was fun and informative. Because of the frequent use of the term "best practice," I asked the members to offer their definitions of this expression. Not surprisingly, we got a variety of responses.

In my view, the term SHOULD MEAN the best approach among the options available. The question is, WHO DECIDES what this is? Further, are they open to:
  • feedback
  • criticism
  • insights
  • ideas
  • other thoughts
  • opposition
  • objections
  • approval
  • etc.?
The CFA Institute's client reporting committee is promulgating "best practices," but is clearly not open to any of this, which is unfortunate. And while the GIPS(R) (Global Investment Performance Standards) Executive Committee (EC) is open to feedback, there is no requirement or expectation that they will adjust their ideas based upon what they learn.

Take for example the idea of sending clients the presentation(s) for the composite(s) they're in on an annual basis. When the idea was introduced, there was extensive opposition; but, it is included as a recommendation. And, since by definition "recommendations" are "best practice" (see the Standards' glossary), it is BEST PRACTICE for you to do this (if you're compliant, of course). Personally, I think the idea is ludicrous, as do many others. But, someone decided it's best practice.

When we use the term, just as when we use other terms that have multiple meanings or interpretations, we should be prepared to explain what we mean. In the case of GIPS and the reporting standards (principles, sorry), "best practice" means what a group of folks think is best. Groups, by the way, which we neither elected nor, in at least one case, know the complete identities of.

While you might grow tired of my occasional harping on this matter, given the constant and frequent use of the term (especially without qualification), I believe it's important that folks know and appreciate what's occurring and what's meant.

Most of our clients strive to adopt the best practices, which is, I believe, "best practice." But, knowing what these are (e.g., the above cited recommendation) allows the firm to decide whether they agree with all that are put forward. They, like me, may question someone else's judgment and beliefs. Given the paucity of firms that do send their clients their respective composite presentations annually, it's evident that most folks disagree with the GIPS EC, at least on this matter.

It would be interesting, would it not, to find out if the members of the EC (current and past) have adopted all of the recommendations themselves (including the annual report distribution) or if they're a verifier or consultant, strongly encourage their clients to do so.

Thursday, June 20, 2013

Hedge funds & GIPS compliance

I have written about this topic in the past: that is, how, in most cases, hedge funds should find compliance with the Global Investment Performance Standards (GIPS(R)) pretty simple. Recently, as a result of an email conversation with The Spaulding Group's first Africa-based verification client, a few additional items were identified that make compliance even more straightforward.

Our client was concerned about the justification for certain policies and procedures that simply won't apply; for example, the timing of accounts going in and out of the composite. Since the composites will consist of partnerships, and since GIPS isn't concerned with the partners, per se (it's the partnership that constitutes the portfolio, not the underlying owners, just as with mutual funds), this policy will be not applicable, other than for the timing of the partnership going into the composite.

What about significant cash flows? First, there is no need for a policy, since this is an option. But it's highly unlikely that the hedge fund would adopt such a policy, so it's simply "n/a."

How about rules for declaring accounts non-discretionary? Since the partnerships are constructed by the firm, they would all be discretionary. Granted, there may be cases when a hedge fund creates a separate portfolio for a large client who wishes to be managed in a manner similar to the partnership, but without being in the partnership; or the client may want a "twist" on the strategy, and so such rules may, at some point, apply. But initially it's fine for the firm to simply state that they have no restrictions.

Because of the inherent complexities of hedge funds, there often seems to be a belief that compliance with GIPS will be quite a challenge; we believe that it shouldn't be, and recently ran an advertisement addressing this point.

Tuesday, June 18, 2013


Last week was extremely hectic, with travel and The Spaulding Group's fourth annual PMAR (Performance Measurement, Attribution & Risk) Europe conference in London (which was a HUGE success, by the way (thanks for asking!)). The event itself provided me with material, but I found it difficult to put the time aside. Well, here's a start and resumption.

Stefan Illmer, PhD returned to PMAR once again, this time to do "double duty," as he spoke about holdings-based risk attribution and updated us on the new CFA Institute client reporting standards principles. Stefan is fully aware of my own "hang ups," concerns, and objections to what's going on, but was kind enough to visit us. We think it's important for our attendees to be kept apprised of what's occurring, and he does an excellent job for us. (Carl Hennessy, CIPM provided a similar update at PMAR North America last month.)

Just a brief word for now: the document speaks of transparency, which is a good thing (I think, for example, we're quite transparent regarding our thoughts and opinions, as well as the sharing of information).

Is it not therefore a bit ironic that
the committee's makeup is a secret?

Review their first report: nowhere are the names mentioned. Contrast this with the GIPS(R) (Global Investment Performance Standards) standards, where the Executive Committee members are always listed. Look on the website: can you find the list of names? We've asked for them, but for some reason they won't be published or made available. We're assured that the members are all highly qualified, and I have no doubt they are; but who are they? Where's the transparency into this group's makeup, plans, etc.?

How can you promote transparency without being transparent?

Seems kind of strange, doesn't it?

Saturday, June 8, 2013

Getting quoted

When I was in politics I was quoted almost weekly, about one thing or another. And I quickly learned that despite all that you may share with a reporter, you cannot and never will control what is written. Sometimes, what gets quoted is not what you had hoped, while at other times you're quite pleased with what's been shown. As my friend, Steve Campisi, put it, "As long as they spell your name right, and you have a strong and clearly worded opinion that is reasonable, then even if you come across too strong you're still the better for having been quoted." Reporters have limited space, and often seek input from multiple people. And so, while they may speak with you for 20-30 minutes, you may get a single line in the article.

I was quoted in this weekend's WSJ (page B9) about private equity valuations. Everything that was attributed to me is accurate. The reporter was kind enough to send me what he intended to include so I could have a look. Overall, I think it's an excellent piece and brings to the attention of investors some of the risks and issues of investing in less liquid assets. I will share a bit more here on this subject.

I referenced the GIPS(R) standards (Global Investment Performance Standards) as a guide for valuations. I mentioned the three levels for private equity and the broader valuation principles for other asset types. I believe these hierarchies are "best practice" for valuing securities, especially less liquid ones.

When dealing with illiquid assets, valuations can always be tricky; they call for good judgment. While there are guidelines and standard approaches available, the accuracy is always questionable. During the most recent financial crisis, many individuals recognized that the valuations on their mortgage-backed securities were high, but could do little about it. I recall that then-Congressman Barney Frank suggested that firms who question the accuracy of prices be permitted to show two: what's quoted and what they believe is true.

I suspect that most private equity investors do a good job of pricing their assets. If anything, they may be conservative and underprice them, knowing that when the asset eventually goes public, they would prefer a higher than lower return. I used a line made famous by President Obama's former pastor, "when the chickens come home to roost," as a metaphor for the eventual sale of an asset. If the manager had underpriced it, then when the sale occurs the reality will set in. This should be incentive to avoid intentional underpricing of assets. No doubt our industry will always have its share of fraudsters, thus the suggestion that investors understand the rationale behind valuations.

Thursday, June 6, 2013

Those who can, do; those who can't, teach

You are probably familiar with the phrase in today's post title. It's clearly a "shot" at teachers and professors, is it not?

Its presence was inspired by Steve Campisi's retort to yesterday's post. It was evident that he is, at least at times, uncomfortable with the ideas that come out of academia. I think there is some validity to his position, and perhaps it's worth some discussion.

Three individuals have been named to the inaugural class of The Performance and Risk Measurement Hall of Fame:
  • Gary Brinson
  • Peter Dietz
  • Bill Sharpe
Brinson is a practitioner who, for us in the world of performance measurement, is known for the attribution models he helped develop. Dietz worked for Frank Russell, and so can be described as a practitioner, though he did spend some time in academia, I believe. His legacy is the concept of time-weighting and the formulas he developed to measure performance. And Sharpe is known for CAPM and his risk-adjusted measures; he is clearly from the academic side.

In writing my doctoral dissertation (which will soon (hopefully) be defended), I have cited more than 100 articles. There is an expectation that most come from academic journals (e.g., the Journal of Finance). And while there are many that are included, the reality is that most come from practitioner publications (e.g., The Journal of Performance Measurement).

Many investment professionals regularly read academic journals, and probably get inspiration from them. As practitioners, should we generally dismiss their ideas or consider them? What degree of influence should they have on what we do?

If you've read Nassim Taleb's The Black Swan, you're familiar with his total disregard for the likes of Sharpe and Markowitz. Sharpe's CAPM has not been proven (and in fact is often criticized, even by academics), and yet it is still typically part of finance courses, MBA programs, and doctoral studies. Taleb finds great fault with Sharpe and Markowitz, and suggests that they should return their Nobel prizes. How valid are his arguments?

Where should the models and formulas the industry uses come from? You may recall that AIG reportedly paid a Yale academic quite a lot of money annually to develop and maintain a model for their credit default swap investments. As it apparently turned out, this model never met a CDS it didn't like. And we're aware of some of the problems that befell AIG. Long Term Capital Management (LTCM) employed several academics, including Nobel prize winners. Roger Lowenstein (in When Genius Failed) pointed out how these individuals did not help in making LTCM a long-term company.

While this may be an academic (pardon the pun) subject, it may be worthwhile to chat about it, nonetheless.

Wednesday, June 5, 2013

Learning to take the good with the bad

In the course of some research I came across a working paper by A. Basso and S. Funari (dated September 2001) titled "A generalized performance attribution technique for mutual funds."

Please don't let the title fool you: this isn't about performance attribution. Rather, it's about risk-adjusted performance. But putting that aside ...

In the article they describe measures such as the Sharpe and Treynor ratios as "numerical indexes ... that take into account an expected return indicator and a risk measure and synthesize them in a unique numerical value." I happen to think this description is excellent: the return and risk measures are synthesized into a single numerical value. Okay, so that's the good.

The "bad," in my view, is how the authors describe some of the results they obtained during their analysis. They used data from Italian mutual funds. During the period observed, "two bond funds ... exhibit a negative mean excess return; this entails that the values of the Sharpe, reward to half-variance and Treynor indexes for these funds are negative and, above all, meaningless. In fact, when the excess return is negative these indexes can be misleading, since in this case the index with the higher value is sometimes related to the worse return-to-risk ratio."

We've taken this matter up before; the realization that negative excess returns yield confusing Sharpe and Information ratios (as well as, apparently, other risk-adjusted measures). To refer to these results as "meaningless" and "misleading" is unfortunate, I believe. I'll confess my own struggles with this, but believe that my earlier explanation (see for example the January 2012 edition of The Spaulding Group's monthly newsletter) provides some insights into the value and interpretation of the results.
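A tiny, made-up illustration (two hypothetical funds, both 2% below the risk-free rate) shows the ranking reversal the authors and I are describing:

```python
def sharpe(excess_return, volatility):
    """Sharpe ratio: excess return over the risk-free rate, per unit of volatility."""
    return excess_return / volatility

# Both hypothetical funds underperformed the risk-free rate by 2%:
low_risk = sharpe(-0.02, 0.05)   # roughly -0.40
high_risk = sharpe(-0.02, 0.20)  # roughly -0.10

# With identical (negative) excess returns, the riskier fund earns the
# HIGHER Sharpe ratio, i.e., the ordering the authors call "misleading."
print(high_risk > low_risk)  # True
```

Whether that makes the statistic "meaningless," or simply in need of a more careful interpretation, is exactly the question at hand.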

When things don't make sense, sometimes additional time is needed to reflect upon them. Again, I'll confess my own typical impatience with such things. But time spent on these events can prove beneficial.