Thursday, April 25, 2013
The Spaulding Group's April newsletter was just published, and it includes my "wish list" for GIPS(R) (Global Investment Performance Standards). Anticipating that some may be thinking of ways to improve the Standards, I thought sharing some of my ideas might be helpful.
Please feel free to chime in!
Wednesday, April 24, 2013
Performance Measurement Forum Meeting in Boston This Week
The spring meeting of our North American "Chapter" of the Performance Measurement Forum takes place this week in Boston, the site of last week's horrific and shameless attack. The meeting is at the Marriott in the Back Bay, not far from where the explosions occurred.
A few years ago we held a meeting in New Orleans, after Katrina devastated the region, as a show of support for the local economy and residents. While there, several of us engaged in a community service project.
This week's session has been planned for months; it was not arranged in response to the attack.
No doubt there will be some discussion of what occurred, though performance and risk will remain the principal topics; ones that will surely offer fodder for this blog.
p.s., want to join the Forum? The North American chapter is essentially closed to new members, but the European chapter has a few openings. To learn more, please contact Patrick Fowler.
Friday, April 19, 2013
Dirty Harry as a role model for Type I and Type II errors
You're probably familiar with the statistical concepts of Type I and Type II errors. One way to look at them is that one addresses the case where you think you're right but you're actually wrong; the other, the case where you think you're wrong but you're actually right.
A thought occurred to me last night that the classic Clint Eastwood movie Dirty Harry shows this in two parts (warning, graphic content):
In one case, the criminal thought Harry had fired only five bullets, but he was wrong: Harry had fired six, and his gun was empty. In the other case, he thought Harry had fired six bullets, but he was wrong there, too, as Harry had one bullet left. But which was the worse of the two errors?
In evaluating Type I and II errors, it's helpful to investigate the impact of both errors, to determine which, if we had to, we'd prefer to make.
I'm finalizing a study on attribution, where, among other things, I've found cases where holdings-based attribution can cause effects to show the wrong sign. For example, we might see a positive allocation effect, when it should be negative, or a negative allocation effect when it should be positive. Both are errors, but they mean different things.
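To make the sign-flip concrete, here's a minimal sketch of the Brinson-Fachler allocation effect. All weights and returns below are made up for illustration, and `allocation_effect` is just an illustrative helper; the point is only that a stale beginning-of-period weight (as a holdings-based model might use) can flip the effect's sign relative to the true average weight:

```python
def allocation_effect(w_port, w_bench, r_sector_bench, r_total_bench):
    """Brinson-Fachler allocation effect for a single sector:
    (portfolio weight - benchmark weight) * (sector benchmark return - total benchmark return)."""
    return (w_port - w_bench) * (r_sector_bench - r_total_bench)

# Hypothetical numbers: a mid-period trade moved the true average sector
# weight to 0.18, but a beginning-of-period snapshot only shows 0.12.
w_bench = 0.15                 # benchmark weight for the sector
r_sector, r_total = 0.02, 0.05 # sector lags the overall benchmark

true_effect = allocation_effect(0.18, w_bench, r_sector, r_total)      # overweight in a lagging sector: negative
holdings_effect = allocation_effect(0.12, w_bench, r_sector, r_total)  # apparent underweight: positive

print(round(true_effect, 6), round(holdings_effect, 6))
```

Same sector, same benchmark, opposite signs; which of the two misstatements is worse is exactly the Type I / Type II question above.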
If we mistakenly report allocation that actually hurt performance as being positive (i.e., contributing to performance), then we're misrepresenting our skill to the client, telling them that we did something right, when we didn't.
If we report allocation as a negative, meaning it hurt performance, when in reality it was positive, contributing to the outcome, then we're hurting the manager. One might even suspect that a manager could be terminated for failing when they actually succeeded.
Again, both are errors, but which is worse? I guess it's a classic case of "it depends."
Thursday, April 18, 2013
How did Bernie get away with it? What's the role of the verifier in identifying fraud?
While meeting with a London-based verification client, the subjects of fraud as it relates to GIPS(R) (Global Investment Performance Standards) and Bernie Madoff came up. They are related, in a way.
How did Bernie get away with it? To put it simply, he (a) had a very solid reputation on Wall Street and was highly respected, so any suggestion that he, of all people, would do anything improper seemed ludicrous; and (b) ran a "closed system," meaning he did everything for his clients (brokerage, trading, portfolio management, custody), so there were no "checks and balances" on what he was reporting.
Over the past five years or so, we've learned of a few asset managers who claimed compliance with GIPS and had been verified, but who also committed fraud. The standard response is that "GIPS verification is not designed to detect fraud." But is that all we can say? I think not.
If a verifier fails to detect fraud, is that acceptable, given that verification isn't designed for that purpose? I guess it depends on whether what was going on was something they might reasonably have been expected to notice.
At one point in the movie "The Jerk," Steve Martin (as Navin) plays a gas station attendant. If you saw the movie, you'll recall that he quickly establishes that he's not terribly bright. A customer comes in who, as I recall, is Hispanic; he gives Navin a credit card that was obviously stolen (since it bore what appeared to be a Jewish woman's name). Well, even Navin managed to realize that a crime was occurring.
The television show "Hogan's Heroes" often had Sgt. Schultz proclaiming "I see nothing" when the prisoners were behaving in a manner beyond what would normally have been allowed.
I wouldn't favor verification being extended to the point where verifiers are expected to detect fraud. However, I think verifiers need to be "on guard" to the possibility that fraud might be committed.
Not all improper behavior is as obvious as Schultz and Navin were confronted with. "Would a reasonable person (or, perhaps more accurately, a reasonably qualified verifier) be expected to discover a problem in the course of their work?" That is what needs to be assessed, in the event fraud is later detected and reported.
Monday, April 15, 2013
Hooked on phonetics
Serving in the armed forces provides many benefits. For me, it meant spending 39 months in Hawaii (which my wife and I jokingly refer to as my "hardship tour") and another year-plus in Oklahoma. I was on active duty at the tail end of the Vietnam War (when you couldn't "buy a ticket" there), and there were no other armed conflicts, so for the most part I served during "peacetime." The military also paid for half my undergraduate degree (I had an ROTC scholarship) and most of my two master's degrees (through the GI Bill). It also provided me the opportunity to have pretty significant responsibilities as a young 20-something, fresh out of college.
Another benefit is that you learn the phonetic alphabet. What's that? It is a way to spell words using words for each letter. For example, to spell the word "cat," I'd say Charlie, Alpha, Tango. Why do we do this? Well, when you're speaking over a radio and want to communicate something clearly, many of our letters can sound alike (e.g., n and m; b, c, d, e, and t). You've no doubt heard people do this with words they just make up; e.g., to spell cat they may say Camera, Apple, Tomato. If you watch WWII movies, you'll hear a slightly different version (Able, Baker, Charlie, Dog ...).
I can usually tell someone who has been in the military, because they'll also spell "the military way." The full alphabet is: Alpha, Bravo, Charlie, Delta, Echo, Foxtrot, Golf, Hotel, India, Juliett, Kilo, Lima, Mike, November, Oscar, Papa, Quebec, Romeo, Sierra, Tango, Uniform, Victor, Whiskey, X-ray, Yankee, Zulu. And the number nine has a special pronunciation, too (niner)!
Being able to spell phonetically can come in handy, yes? When you're trying to spell your name to someone, for example. For me, it's simply Sierra, Papa, Alpha, Uniform, Lima, Delta, India, November, Golf.
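For the curious, the letter-to-word mapping is simple enough to sketch in a few lines. This uses the official NATO spellings (Juliett with two t's, X-ray, Zulu), which differ slightly from some common variants; `spell` is just an illustrative helper:

```python
# NATO phonetic alphabet, keyed by letter.
NATO = {
    'A': 'Alpha', 'B': 'Bravo', 'C': 'Charlie', 'D': 'Delta', 'E': 'Echo',
    'F': 'Foxtrot', 'G': 'Golf', 'H': 'Hotel', 'I': 'India', 'J': 'Juliett',
    'K': 'Kilo', 'L': 'Lima', 'M': 'Mike', 'N': 'November', 'O': 'Oscar',
    'P': 'Papa', 'Q': 'Quebec', 'R': 'Romeo', 'S': 'Sierra', 'T': 'Tango',
    'U': 'Uniform', 'V': 'Victor', 'W': 'Whiskey', 'X': 'X-ray',
    'Y': 'Yankee', 'Z': 'Zulu',
}

def spell(word):
    """Spell a word phonetically, skipping any non-letter characters."""
    return ', '.join(NATO[c] for c in word.upper() if c in NATO)

print(spell('cat'))  # Charlie, Alpha, Tango
```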
So, what's the point of this lesson on phonetics? It's a standard, one that everyone who needs it knows and understands. You will never hear someone in the military concocting their own version, or substituting other words for letters. It's a known and clearly defined standard, with words chosen to avoid confusion. Communicating is serious business; it's a subject that has had a lot said and written about it. When you go into the service, you're drilled on it, so that for the rest of your life you're able to spell phonetically; you don't forget this stuff.
Standards serve a purpose; if they're written well and clearly, they're able to be adhered to. Some things are more challenging to define, however. Letters are simple. But when it comes to the business of performance measurement, with all of its variations, it's not surprising that we run into occasions when the rules aren't overly clear; thus, we need some interpretation. There's nothing wrong with that, provided that the principles of the standards are adhered to.
Friday, April 12, 2013
What approach needs more data, holdings or transaction-based attribution? (Part II)
Earlier this week I mentioned that a colleague told me that transaction-based attribution requires a lot more data, which opens it up to the risk of errors being introduced. Two quick responses arise:
1) Is it true that transaction-based attribution requires more data than holdings?
2) How dirty is the data?
We discussed the first; now briefly the second.
Some have argued that, given the increased volume of data needed to support transaction-based attribution, you'd face a greater likelihood of data errors, which could corrupt your report; therefore, you should stick with holdings-based attribution, which, following this logic, needs less data and thus has a lower likelihood of dirty data.
As was pointed out on Tuesday, the first point (that transaction-based attribution requires more data) seems to be invalid. But let's still consider the data's integrity.
Most asset managers reconcile their portfolios regularly with their official books and records. We also know that if transactions are "dirty," they won't settle. Yes, we know errors can and do occur, but they're typically corrected. I see this as a spurious argument, too.
As will be pointed out in my dissertation, there is significant justification for transaction-based attribution; but you'll have to wait for the movie based on the dissertation (kidding, of course; an article) to get the details behind this claim.
Thursday, April 11, 2013
Caught between that proverbial "rock and a hard place."
Here's a bit of a challenge for you.
GIPS(R) (Global Investment Performance Standards) includes the following rule:
What if you're an SEC (U.S. Securities & Exchange Commission) registered firm with a "40 Act" mutual fund whose returns have had additional fees removed; do you disclose this?
GIPS also has the following rule:
How does this fit in?
Well, there's an SEC "No Action Letter" which, while allowing firms to "gross up" their mutual fund returns, limits, to some extent, the disclosure of mutual fund membership in composites. If you state something like "We can't disclose that additional fees were removed, because then we'd be letting people know there's a mutual fund in this composite," then you're still letting people know, by virtue of that disclosure, right?
Okay, perhaps I'm taking this to an extreme, but I have no problem if a firm prefers not to disclose that additional fund fees have been deducted, so as to avoid a potential conflict with "the regulator!"
Thoughts?
Wednesday, April 10, 2013
What happens when you don't follow the rules???
Often, firms and individuals who don't behave in accordance with rules, regulations, or standard practices somehow get away with it. There's no way to tell how often this happens, though we know it does.
But there are times when they get caught, and this can be a problem.
Last week, the U.S. Securities & Exchange Commission (SEC) filed an action against ZPR Investment Management, Inc., regarding their advertising practices. They specifically reference conflicts with the Global Investment Performance Standards' (GIPS(R)) advertising guidelines.
While there is no requirement for a verifier to look at a client's advertising materials, most verifiers, like The Spaulding Group, I'm sure, will be happy to do so.
Most firms have their CCO (Chief Compliance Officer) review and approve advertisements. But unless he/she is familiar with the GIPS rules, things can be left out. I think it would be a great practice to add your verifier to the process, whenever your ad references GIPS or includes performance figures. Oh, and in case you didn't know this, your website is an advertisement, too!
p.s., if you read the SEC document you'll see ZPR's former verifier referenced. I want to make it perfectly clear that there is no suggestion whatsoever that the verifier did anything wrong.
Tuesday, April 9, 2013
What approach needs more data, holdings or transaction-based attribution?
Yesterday I successfully defended my dissertation proposal (hurrah!). The topic is holdings and transaction-based attribution, and, as often happens for the student, I found myself providing insights into a topic my committee members generally know little about. In theory, the student is to be the "expert" on the subject, given the amount of research that's expected. Fortunately, this is a topic I've spent a lot of time on over many years.
We briefly touched on the issue of data. I mentioned that a colleague told me that transaction-based attribution requires a lot more data, which opens it up to the risk of errors being introduced. Two quick responses arise:
1) Is it true that transaction-based attribution requires more data than holdings?
2) How dirty is the data?
The first point I'll discuss today; the second, tomorrow.
I'm of the belief (though I haven't confirmed this yet) that monthly transaction-based attribution is as accurate as daily. If this holds, then the only additional data that is needed are transaction details.
Firms that use the holdings-based method are increasingly moving to daily, in an attempt to reduce the residual that is common with this approach. This means that every day their entire portfolio must be revalued and repopulated. Talk about a lot of data!
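As a rough back-of-the-envelope sketch of the comparison (all counts are hypothetical, chosen only to illustrate the argument), consider the records each approach implies over a month:

```python
# Hypothetical sizes for one portfolio over one month.
positions = 200          # distinct holdings in the portfolio
trading_days = 21        # business days in the month
trades_per_month = 150   # transactions executed during the month

# Daily holdings-based: every position is revalued and restated each day.
holdings_daily_records = positions * trading_days

# Monthly transaction-based: one set of beginning-of-month holdings,
# plus the transaction details for the month.
transaction_monthly_records = positions + trades_per_month

print(holdings_daily_records, transaction_monthly_records)  # 4200 350
```

Under these (made-up) assumptions, the daily holdings-based approach touches an order of magnitude more records; the real comparison, of course, depends on portfolio size and turnover.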
My bet is that we'll find that transaction-based models, in actuality, require less data. We'll see! If you have empirical evidence you wish to share, please let me know!
Thursday, April 4, 2013
PMAR Video Highlights
The Spaulding Group's annual Performance Measurement, Attribution & Risk (PMAR) conferences have become the performance conference to attend. Performance measurement professionals rave about their uniqueness, innovations, quality speakers and content, and awesomeness! Here's a video with some highlights we think you'll enjoy.
Wednesday, April 3, 2013
Reporting on consolidated accounts
This topic has come up a few times recently, so I thought it worth touching on briefly.
The situation: a client's account holds multiple portfolios, which are handled by different managers. Some have time-weighted returns, while others have money-weighted returns. How should a consolidated return be calculated?
First, we need to know what the perspective is. If we want to know how the client is doing overall, then we'd want to:
- aggregate the holdings
- calculate a money-weighted return (e.g., internal rate of return)
If, however, we want to come up with a return that tells the client how his/her managers are doing overall, we're faced with a bit of a dilemma. There are at least three options:
- Aggregate the account and calculate a time-weighted return
- Calculate time-weighted returns for all portfolios and asset-weight these results
- Asset-weight the portfolios' returns, even when they're a mix of time- and money-weighted.
The second approach makes some sense, except when we recall that for some portfolios, MWRR is the preferred (and in some cases, mandated) approach. To switch to TWRR would yield an incorrect value, and thus provide an overall return that is suspect (actually, invalid).
The third is my preferred approach. The challenge is that with some of the client's portfolios, valuations may occur infrequently; perhaps at most quarterly. The firm could asset-weight based on the shortest common period of valuation. Or, for periods when an asset isn't revalued, still calculate the return (which will probably be zero).
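The preferred approach can be sketched in a few lines. The numbers below are hypothetical, and `consolidated_return` is an illustrative helper (not a standard function): each manager's reported return, whether time- or money-weighted, is weighted by beginning-of-period market value:

```python
def consolidated_return(portfolios):
    """Asset-weighted consolidated return.

    portfolios: list of (beginning_market_value, period_return) pairs,
    where each return may be time- or money-weighted, as reported.
    """
    total_value = sum(value for value, _ in portfolios)
    return sum(value * ret for value, ret in portfolios) / total_value

managers = [
    (1_000_000, 0.04),  # manager A, time-weighted return
    (500_000, 0.02),    # manager B, money-weighted return
    (250_000, 0.00),    # manager C, not revalued this period (return taken as zero)
]
print(round(consolidated_return(managers), 6))
```

Note how the third manager's unvalued period enters as a zero return, per the suggestion above; the weighting period should be the shortest common valuation period available.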
More needs to be said and done about this, and I'll pursue this further, starting with this month's newsletter.