At a recent presentation on the new version of the Global Investment Performance Standards (GIPS(R) 2010), I was asked if the asset-weighted standard deviation would be permitted for the new requirement to report a three-year annualized standard deviation. Once again we see some confusion: the asset-weighted form of the standard deviation applies only when measuring dispersion, not volatility (a single-period versus a time-series evaluation).
That being said, just why do firms calculate the asset-weighted standard deviation? Nowhere do we see this measure in the GIPS standards. Granted, it was part of, and actually encouraged in, the AIMR-PPS(R), but it has somehow dropped off the radar, to which I say "hallelujah." I was never a fan of this measure.
How do you interpret it? Standard deviation is easy: assuming a normal distribution, the average, plus and minus the standard deviation, represents approximately two-thirds of the total distribution. Easy. But what do we do with the asset-weighted variety?
It was encouraged for use because, after all, our composite return is an asset-weighted average; therefore, the thinking went, we would of course want an asset-weighted measure of dispersion. And why, again? If you can't interpret it, what is its value?
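To see why the interpretation is so murky, here is a minimal sketch with invented numbers (my own; nothing here comes from the standards). The asset-weighted form follows the old AIMR-PPS construction: deviations are taken around the asset-weighted mean, with each squared deviation weighted by beginning-of-period assets. Only the equal-weighted figure supports the "mean plus or minus one standard deviation covers roughly two-thirds" reading.

```python
# A minimal sketch with invented numbers: equal-weighted standard deviation
# (the usual GIPS dispersion measure) vs. the asset-weighted form once
# encouraged under the AIMR-PPS.
import math

returns = [0.082, 0.075, 0.091, 0.068]   # annual returns of composite portfolios
weights = [0.40, 0.30, 0.20, 0.10]       # beginning-of-period asset weights

# Equal-weighted: ordinary standard deviation around the simple mean.
mean = sum(returns) / len(returns)
equal_wtd = math.sqrt(sum((r - mean) ** 2 for r in returns) / len(returns))

# Asset-weighted: each squared deviation from the asset-weighted mean is
# weighted by the portfolio's share of composite assets.
aw_mean = sum(w * r for w, r in zip(weights, returns))
asset_wtd = math.sqrt(sum(w * (r - aw_mean) ** 2 for w, r in zip(weights, returns)))

print(f"equal-weighted dispersion: {equal_wtd:.4%}")
print(f"asset-weighted dispersion: {asset_wtd:.4%}")
```

The asset-weighted figure is dominated by the largest portfolios, which is precisely why it resists the simple distributional interpretation.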
If you can provide some insights, please do. Otherwise, I would be happy to sign the petition to ban its use!
Tuesday, March 30, 2010
Monday, March 29, 2010
"Your results comply ..."
Just when you thought it was safe.
Years ago some folks thought that simply the use of certain software would automatically make them compliant with the AIMR-PPS(R) and/or GIPS(R) standards. By now, we would think that this myth would have gone away, but it hasn't. In a recent online article by Financial Advisor Magazine titled "Trading Spaces," we learn of a nifty way for advisors (and others) to make money, simply by letting people know of their trades. This seems like an extension of the variation on wrap business, where advisors inform others of their strategies and reap a fee for doing so.
One firm, Covestor Investment Management, provides such a service and uses Advent Software's tools to track performance. The article tells us that "your results comply with global investment performance standards [sic] administered by the CFA Institute." We can perhaps excuse the lowercased standards title in the sentence, but we should be concerned with the suggestion that simply having results on a system will make them comply with the standards: if life could only be so simple.
I have e-mailed the magazine's online editor to inform her of these errors and hope that corrections are issued. To those less sophisticated or knowledgeable about the standards, such a statement might be taken at face value, which can only result in problems.
Hedging ... too often misunderstood
I believe that many misunderstand the role of hedging: some think it is the ideal approach when managing portfolios which are subject to the movement of various market segments, such as currency. Why wouldn't you want to hedge your global portfolio? Some think it guarantees that you'll lock in the local return, which is untrue (read Karnosky & Singer's monograph to learn more).
As John C. Hull wrote, "the purpose of a hedge is to reduce risk. A hedge tends to make unfavorable outcomes less unfavorable but also to make favorable outcomes less favorable." In his very well regarded Options, Futures, and Other Derivatives (7th edition), he provides an excellent example of how a hedge can work against you:
- President: This is terrible. We've lost $10 million in the futures market in the space of three months. How could it happen? I want a full explanation.
- Treasurer: The purpose of the futures contracts was to hedge our exposure to the price of oil, not to make a profit...
- President: What's that got to do with it? ...
- Treasurer: If the price of oil had gone down...
- President: I don't care what would have happened if the price of oil had gone down. The fact is that it went up...
Hedging limits risk but it also limits profit.
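To make Hull's point concrete, here is a minimal sketch with invented numbers (mine, not Hull's): an oil producer shorts futures at $70 to hedge the price it will receive on a million barrels. Whether oil ends at $60 or $80, the hedged total is the same; the $10 million futures loss in the up scenario, the very loss the president rails about, is simply the cost of giving up the upside.

```python
# A minimal sketch (invented numbers) of the dialogue above: an oil producer
# shorts futures at $70 to hedge the price it will receive on 1M barrels.
barrels = 1_000_000
locked_price = 70.0  # futures price at which the hedge was placed

for spot in (60.0, 80.0):                          # oil down, oil up
    revenue = barrels * spot                       # unhedged sale proceeds
    futures_pnl = barrels * (locked_price - spot)  # short futures P&L
    hedged = revenue + futures_pnl                 # same total either way
    print(f"spot ${spot:.0f}: unhedged ${revenue / 1e6:.0f}M, "
          f"futures {futures_pnl / 1e6:+.0f}M, hedged ${hedged / 1e6:.0f}M")
```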
When it comes to the impact of hedging, it's important that your performance attribution system be conscious of its contribution. I've already cited Karnosky & Singer, but would recommend Carl Bacon's book as an excellent source to fully comprehend this model. Carl presents it relative to the Brinson-Fachler model, while Karnosky & Singer use Brinson, Hood, Beebower. Both should be in your library. And, if derivatives are something you're involved with, then Hull's book belongs there, too!
Thursday, March 25, 2010
Living large
I was at the Advent Users' Group Conference in Chicago today, delivering a talk on the upcoming changes to the Global Investment Performance Standards (GIPS(R)). While there were several questions posed, a few dealt with the subject of "large cash flows." Coincidentally, while at the event I got a call from a verification client who was trying to figure out how to calculate returns when large flows occur. And, I got an e-mail from a software vendor client who asked about netting flows to decide on "large." A few quick points:
- What is "large"? That's up to the firm, though I would think 10% should be the max
- Can you have more than one definition? Yes, you can define "large" by asset class or composite.
- Do bond managers who value their portfolios infrequently have to abide by this rule? (funny: this question came up in November during the GIPS conference, too). Yes! Sorry :-(
The final point has to do with netting flows: can this be done to determine "large"? I don't see why it wouldn't work, provided you net absolute values and do it consistently. For example (a quick sketch of the test follows the figures below):
- 12/31 BMV = $100,000
- 1/5 Cash flow = $25,000
- 1/25 Cash flow = -$25,000
- 1/31 EMV = $110,000
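Here is a minimal sketch of that test; the 10% threshold is my assumption (echoing the first point above), since "large" is firm-defined. Netting absolute values means the two $25,000 flows sum to $50,000, or 50% of the beginning value, rather than cancelling to zero.

```python
# A minimal sketch of the netting test, assuming a firm-defined 10% threshold.
# Netting ABSOLUTE values means the +25,000 and -25,000 flows do not cancel.
bmv = 100_000
flows = [25_000, -25_000]
threshold = 0.10                           # firm-defined; 10% assumed here

net_absolute = sum(abs(f) for f in flows)  # 50,000, not 0
is_large = net_absolute / bmv > threshold
print(f"net absolute flows: {net_absolute:,} "
      f"({net_absolute / bmv:.0%} of BMV) -> large? {is_large}")
```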
Tuesday, March 23, 2010
Materiality ... and what DOES it mean?
The term "material" or "materiality" appears several times within the Global Investment Performance Standards (GIPS(R)). Perhaps the most recent addition that requires attention is its use in the new Error Correction Guidance. It seems odd to me that so many firms don't have an error policy since errors occur quite a bit...who doesn't make a mistake? Without a policy, how do you decide or know what to do? Did we actually need to require firms to have a policy? Apparently we did, given the many cases where they don't exist.
What "is" materiality? From a definitional standpoint it occurred to me recently that it is a change that would cause someone to respond to something differently. President Obama won the U.S. Presidency with roughly 53% of the vote: what if an error was discovered and it turned out it was actually 54 percent, would that matter to most people? I think not. How about 55 percent? Probably not. But, as the number rises we reach a point where we're bordering on landslide territory, so there would clearly be a reaction. Well, what if it had actually been 52 percent? Chances are not much of a change in view; but, if it had been 51 percent, then it's close to 50/50, which would likely cause a response. And, if he had won but with less than 50% (i.e., a plurality but not a majority, as President Clinton won in '92 and '96), then no one could suggest that he "had a mandate."
We're of course dealing with materiality from a performance perspective, but we should still take into consideration those differences where we believe someone would respond differently if they saw the revised number. While the guidance has three levels devoted to dealing with errors which are deemed "not material," there's one that deals with errors that are "material." And when we talk about defining materiality for your error policy, we're not really speaking about a definition, because there are plenty of sources for definitions, but rather the objective numerical indicator that alerts you that you have a material error and must take certain action. I suggest you don't use former U.S. Supreme Court Justice Potter Stewart's statement that he knew pornography when he saw it as your model (i.e., "I know materiality when I see it" doesn't cut it).
I've been trying on LinkedIn to get a sense of where people stand with this topic and have gotten, as you'd expect, mixed results so far. I do believe that errors mean different things at different levels: that is, rather than an absolute level of materiality, a relative level is better. For example, while we might say that an error of 50 basis points (bps) is "material," is it always? If your return for 2009 was reported as 57.60% but it turned out it was 57.10%, do you think you'd get much of a reaction when you alert your clients of the error? But if your return was reported as 1.60% when it was really 1.10%, I suspect that you'd have a better chance of getting a response.
Defining relative levels of materiality can be tricky, however. One of our clients defined it in two steps: first, the number (e.g., 50 bps) and then a percent (e.g., 10%) of the originally reported return: the error is material only if it is 50 bps or more and is > 10% of the original return. And so, in our earlier case, while we meet the 50 bps threshold, the error isn't > 10% of 57.60% (which, of course, is 5.76%). (Please note that this example is provided solely for illustrative purposes and isn't intended as an endorsement of a method.)
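Here is a minimal sketch of that client's two-step test (the 50 bps and 10% figures are their choices, shown purely for illustration):

```python
# A minimal sketch of the two-step materiality test described above; the
# 50 bps and 10% thresholds are one client's choices, for illustration only.
def is_material(original: float, corrected: float,
                abs_threshold: float = 0.0050,   # 50 basis points
                rel_threshold: float = 0.10) -> bool:
    """Material only if the error passes BOTH the absolute and relative tests."""
    error = abs(corrected - original)
    return error >= abs_threshold and error > rel_threshold * abs(original)

print(is_material(0.5760, 0.5710))  # False: 50 bps, but under 1% of 57.60%
print(is_material(0.0160, 0.0110))  # True: 50 bps and over 31% of 1.60%
```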
To me, relative definitions (again, from a numeric perspective) of materiality have much greater value than absolute ones. And while the GIPS Interpretations Subcommittee provided us with this "GS" (guidance statement), it would be helpful if it included some examples. Perhaps over the next few weeks we'll get enough responses to the LinkedIn poll to warrant a presentation here. If you haven't joined in, please do! You can provide your thoughts directly to me, if you wish, anonymously. Thanks.
What "is" materiality? From a definitional standpoint it occurred to me recently that it is a change that would cause someone to respond to something differently. President Obama won the U.S. Presidency with roughly 53% of the vote: what if an error was discovered and it turned out it was actually 54 percent, would that matter to most people? I think not. How about 55 percent? Probably not. But, as the number rises we reach a point where we're bordering on landslide territory, so there would clearly be a reaction. Well, what if it had actually been 52 percent? Chances are not much of a change in view; but, if it had been 51 percent, then it's close to 50/50, which would likely cause a response. And, if he had won but with less than 50% (i.e., a plurality but not a majority, as President Clinton won in '92 and '96), then no one could suggest that he "had a mandate."
We're of course dealing with materiality from a performance perspective, but should still take into consideration those differences where we believe someone would respond differently if they saw the revised number. While the guidance has three levels devoted to dealing with errors which are deemed "not material," there's one that deals with errors that are "material." And when we talk about defining materiality for your error policy, we're not really speaking about a definition, because there are plenty of sources for definitions, but rather the objective numerical indicator that alerts you that you have a material error and must take certain action. I suggest you don't use former U.S. Supreme Court Justice Potter Stewart's statement that he knew pornography when he sees it as your model (i.e., "I know materiality when I see it" doesn't cut it).
I've been trying on Linkedin to get a sense of where people stand with this topic and have gotten, as you'd expect, mixed results so far. I do believe that errors mean different things at different levels: that is, rather than an absolute level of materiality, a relative level is better. For example, while we might say that an error of 50 basis points (bps) is "material," is it always? If your return for 2009 was reported as 57.60% but it turned out it was 57.10 percent, do you think you'd get much of a reaction when you alert your clients of the error? But, if your return was reported as 1.60% but it was really 1.10 percent, I suspect that you'd have a better chance of getting a response.
Defining relative levels of materiality can be tricky, however. One of our clients defined it in two steps: first, the number (e.g., 50 bps) and then as a percent (e.g., 10%) of the originally reported return. For example, if the error is 50 bps or more and is > 10% of the original return. And so, in our earlier case, while we meet the 50 bps threshold it isn't > 10% of 57.60% (which, of course, is 5.76%). (Please note that this example is provided solely for illustrative purposes and isn't intended as an endorsement of a method.)
To me, relative definitions (again, from a numeric perspective) of materiality have much greater value than absolute. And while the GIPS Interpretations Subcommittee provided us with this "GS," it would be helpful if it included some examples. Perhaps over the next few weeks we'll get enough responses to the Linkedin poll to warrant a presentation here. If you haven't joined in, please do! You can provide your thoughts directly to me, if you wish, anonymously. Thanks.
Monday, March 22, 2010
Don't make it harder than it needs to be...
We're consulting with a non-software vendor client who is enhancing their performance measurement system, which includes transaction-based attribution. During a recent discussion they mentioned that they calculate attribution daily and then link these results. Seems like a lot of work to me. Plus, a great deal of processing time and cost associated with it.
Intuitively one would expect that if you did daily transaction-based attribution your results would be better than if you did monthly: you'd be wrong. Transaction-based attribution can be done quite effectively monthly: you should get essentially the same results, save for perhaps some rounding issues.
The story is different with holdings-based attribution: if you want to increase its accuracy, daily is ideal.
By using Modified Dietz to capture the returns, and weighting the cash flows (buys and sells) the same way for securities/sectors as you would at the portfolio level, you've got all you need to ensure accuracy. A quick sketch follows.
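Here is a minimal sketch of the idea, with invented figures: the same Modified Dietz formula serves at the sector level (where buys and sells are the flows) as at the portfolio level (where external flows are weighted).

```python
# A minimal sketch of a Modified Dietz return; dates and amounts are invented.
def modified_dietz(bmv: float, emv: float, flows: list[tuple[int, float]],
                   days_in_period: int = 30) -> float:
    """flows: (day of flow, amount). Each flow is weighted by the fraction
    of the period for which it was available for investment."""
    total_flows = sum(amount for _, amount in flows)
    weighted = sum(amount * (days_in_period - day) / days_in_period
                   for day, amount in flows)
    return (emv - bmv - total_flows) / (bmv + weighted)

# e.g., a sector that starts the month at 100,000, has a 20,000 buy on
# day 10, and ends at 125,000:
print(f"{modified_dietz(100_000, 125_000, [(10, 20_000)]):.4%}")  # ~4.41%
```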
To add credibility to my claim I contacted my friend Carl Bacon, chairman of Statpro, who concurred (always nice when we agree!). And so, don't make this stuff harder than it needs to be!
Thursday, March 18, 2010
What a great question!
In today's webinar on GIPS 2010, one of the attendees (participants?) asked whether an asset-weighted version of standard deviation would satisfy the new GIPS(R) 2010 (Global Investment Performance Standards) requirement to report a 3-year annualized standard deviation. Interesting, yes?
This is yet another example of the confusion that exists because standard deviation has been used for years to report the required measure of dispersion, for which an asset-weighted version can be used (although that version doesn't even appear in GIPS; it was, at one time, recommended in the AIMR-PPS(R)).
One must separate the two ideas: risk vs. dispersion. Risk is across multiple time periods (e.g., the past 36 months) while dispersion is within a time period (e.g., for the year 2009). For risk there is no such thing as an asset-weighted version of standard deviation.
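For the risk figure, here is a minimal sketch of what I understand to be the common convention (the standards don't prescribe every computational detail): take the standard deviation of the trailing 36 monthly returns and scale by the square root of 12. The monthly returns below are random placeholders.

```python
# A minimal sketch of a 3-year annualized standard deviation, assuming the
# common convention: std dev of 36 monthly returns, scaled by sqrt(12).
import math
import random

random.seed(1)
monthly = [random.gauss(0.007, 0.03) for _ in range(36)]  # placeholder returns

mean = sum(monthly) / len(monthly)
monthly_sd = math.sqrt(sum((r - mean) ** 2 for r in monthly) / len(monthly))
annualized_sd = monthly_sd * math.sqrt(12)
print(f"3-year annualized standard deviation: {annualized_sd:.2%}")
```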
Tulipomania redux?
In our Fundamentals of Performance Measurement course I often mention the ailment known as "rootaphobia," or the "fear of roots" (as in square roots; the root symbol). I later mention "tulipomania." And while the former is contrived, the latter isn't, as it refers to the bubble in tulip bulb prices that occurred in 17th century Holland, as wonderfully documented by Mike Dash in "Tulipomania: The Story of the World's Most Coveted Flower & the Extraordinary Passions it Aroused." (If you haven't read it, I recommend it, as it is entertaining, well written, and comes with much intrigue.) That bubble, like all others, came crashing down.
In today's Wall Street Journal we learn of what might be an up-and-coming flower phenom, as many seek to obtain the various bulbs of the genus Galanthus, more commonly known as "snowdrops." A single bulb of some varieties is priced at $50, and this article will no doubt spur more interest in this flower.
And so you might ask, "who cares?" Well, if this becomes a bubble, will it be because financial quants developed fancy mathematical models to track the flower's prices? Most assuredly, no. And yet, we hear from folks like Pablo Triana and Nassim Taleb that the blame for the '87 crash, as well as the most recent downturn, lies at the feet of these model developers. Sorry, but I don't buy it. And while I can't ignore the potential contribution some of the models may have had, to suggest that they deserve all or most of the credit is an act of hyperbole. While it would be helpful to be able to identify a single cause, without adequate proof one runs the risk of coming up short. I'm a fan of Michael Lewis's, and in a recent book he acknowledged that the cause of the '87 "market adjustment" (aka "crash") is unknown.
There are academics who study market crashes, and they can perhaps shed some light on the causes of the most recent one, though many of us in the industry can identify several candidates. The human element, quite visible in tulipomania and perhaps in a future "snowdropsomania," cannot be ignored.
Tuesday, March 16, 2010
Reputational risk
I just posted a dozen tips to choose a verifier and mentioned a couple times the issue of "reputational risk."
If you don't think avoiding this risk is important, I have two words for you: Tiger Woods.
The photo announcing Tiger's return to golf couldn't avoid including a reference to his extramarital affairs. An individual's, as well as a firm's, reputation should be guarded: you don't want to negatively impact yours!
Note: The photo is from Reuters and was on the Fox News website
A dozen tips to picking a GIPS verifier
How does one go about choosing their GIPS(R) (Global Investment Performance Standards) verifier? There is little guidance available, and it's very easy to use criteria that are less than ideal. I've compiled this list and hope you find it of value (as always, your thoughts are welcome):
- Don't choose a verifier just because you already have a relationship with them. Occasionally we hear something like "oh, we picked XYZ Accounting Services," even though XYZ has never conducted a verification before. Yes, they're good auditors, but GIPS is a whole different animal and it requires a very different skill and knowledge set. Chances are, you'll be teaching the verifier all about the standards, meaning that you'll be teaching them what you understand, which may not be totally correct.
- Ask the verifier how they keep up with the standards. Our firm, like several other verifiers, regularly attends and sponsors the CFA Institute's annual GIPS conferences. This affords us the chance to not only hear from speakers but also interact with others. This helps improve our knowledge. In addition, we're tuned in on what's occurring and participate in the opportunities to respond to changes to the standards. Is the firm you're considering truly engaged in this segment of the industry?
- How does the verifier train their staff? The CFA Institute provides periodic training courses on GIPS (we've been teaching these classes since their inception, some 10+ years ago). While we see that some firms send their new hires here, many don't. Perhaps they offer their own training, which is fine, but you will want to know how they ensure that their staff is aware of the key aspects of the standards.
- What level of experience do the individuals who will conduct the verification have? If you're given new hires, chances are you'll be training them (see the first tip above). And chances are they won't be able to answer many of your questions. Ideally you want at least one senior, experienced individual who is engaged full time on the assignment, who will manage the project and be able to respond to your questions.
- Speaking of questions, will the verifier answer your questions? We respond quickly to client (and even non-client) inquiries. If we're unsure about the answer, we'll let them know our initial thoughts and tell them we will do additional checking before finalizing the answer.
- What kind of turnover can you expect in the verification staff? Ideally turnover should be kept to a minimum; hopefully you'll experience zero turnover.
- Will the verification be done at your site or remotely? We favor being on site, as we don't feel that we could do an adequate job from our offices. Most verifiers appear to feel the same.
- How frequently will the work be done? Most verifiers, like us, recommend annual verifications. Trust me, it's not that we don't like visiting our clients, but quarterly is, in our opinion, too disruptive for them (and our clients seem to agree). But, if a client wants us in more often, we'll be happy to oblige.
- How does the verifier keep their clients apprised of what's going on with the standards? Some firms, like us, provide periodic newsletters, host webinars, or provide letters detailing changes and other important information about the standards. While it's ultimately the client's responsibility to know what's expected, it's important to have a resource who will inform and provide counsel with the changes as well as other aspects of the standards.
- Is the verifier truly independent? The independence guidance statement places the responsibility to assure independence on the shoulders of both the verifier and the client.
- What do the verification firm's clients say? It's important that you conduct due diligence when selecting a verifier. Ideally, speak with clients who have experience and knowledge about other verification firms, so you get a broad perspective.
- Is the verifier easy or are they thorough in their analysis? While there might be some appeal to getting a "rubber stamped" verification, one that doesn't require a lot of effort, you have to realize that this may put you at risk should the regulators come in. In addition, you might be exposed to reputational risk if your verifier is known for doing shoddy work. Yes, we all liked the professors who were easy graders, but we also realize we probably didn't get much out of their classes or for that matter our money's worth; the same holds true with verifications.
Feel free to chime in with your thoughts and ideas on this list.
Monday, March 15, 2010
Risk management in name only?
We spend a great deal of time debating the value of various risk measures, arguing, for example, whether it's appropriate for the Global Investment Performance Standards (GIPS(R)) to require the disclosure of the 3-year annualized standard deviation or whether value at risk has any value. But perhaps more time needs to be spent on the management of risk, as this seems to be what has often led to the crises we've witnessed. In his exceptional treatise on Long-Term Capital Management, When Genius Failed, Roger Lowenstein discussed how LTCM regularly reviewed its risks but never put the brakes on any of its investing, in spite of the apparent risks it was facing. Seeing the risks, being aware of the risks, but not doing anything about them, shows an institution that is devoid of risk management.
In a more recent book, The Quants by Scott Patterson, we read that "risk management is about avoiding the mistake of betting so much you can lose it all." Patterson further states that this was "the mistake made by nearly every bank and hedge fund that ran into trouble in 2007 and 2008."
In this past weekend's Financial Times we are presented with a rather abridged version of the recently published 2,200-page exposé on Lehman's actions, which highlights quite a lot. For example, the firm's risk officer "resisted an increase in the limit [of risk] from $2.3bn to $3.3bn but was overruled," and "by the end of 2007, it was $4bn." Further, certain assets, such as a "$2.3bn bridge loan...was never included in the risk usage calculation, although that single transaction [for example] would have put Lehman over its already enlarged risk limit." What exactly was the role of their risk officer? Patterson's claim that "the banks and hedge funds blowing up didn't know how to manage risk" seems, at least in Lehman's case, to be accurate.
The use of derivatives, short sales, and complex models is often cited as a contributor to the market disaster that we're slowly making our way out of. However, risk management needs to be fully assessed, as its absence from many of the trading rooms and investment houses surely was a huge factor. Henry Paulson was no doubt unaware of Lehman's risk management issues when he penned On the Brink, though he does speak in a rather disparaging way about the (broad) management of AIG; so not only risk management but management in general needs to be reviewed by those charged with providing oversight.
Friday, March 12, 2010
Lehman's challenges and questionable actions
The Wall Street Journal reported today that a very detailed and costly-to-produce (> $30 million) report on Lehman has just been published. Is this a "must read"? Perhaps not, especially given its length (2,200 pages, with a table of contents 45 pages long), but it supposedly "reads like a best seller."
While the print version of the paper is always great to have, the online version of WSJ often provides a bit more, like a link to the report! While we may want to wait for the condensed version, this may prove to be an interesting "summer read."
I'm listening to Henry M. Paulson's On the Brink, which is quite good. It provides some insights into what went on at Lehman, but obviously not to the extent that we will find in this monster of a report. No doubt, those of us who had dealings with Lehman or simply were aware of their existence will find some interest in the intrigue that led to their demise.
p.s., if you enjoy reading but don't always have the time, I recommend listening! While I do a lot of actual reading, I also can go through a book within a week or two simply by listening while I drive (what sales guru Zig Ziglar refers to as "automobile university"). A client turned me on to Audible.com, where you can order unabridged versions of books at half or less than what they sell for at the major booksellers. Something to consider!
Wednesday, March 10, 2010
"The requirement to revalue for large flows has been dropped"...NOT!
I was at a client yesterday conducting a verification, and a colleague, who is quite well versed on the Global Investment Performance Standards (GIPS(R)), pointed out that the planned (1 January 2010) requirement to revalue portfolios for large external cash flows had been dropped from the final version of GIPS 2010. Well, I was a tad confused, because I would have thought that I would have heard of such a major change. And so, I went looking.
I believe my colleague focused on section 2, which deals with calculations. In the 2005 edition we find the following:
- 2.A.2.a. For periods beginning on or after 1 January 2005, firms must calculate portfolio returns that adjust for daily weighted external cash flows.
- 2.A.2.b. For periods beginning 1 January 2010, firms must value portfolios on the date of all large external cash flows.
And so, HAS the requirement to revalue for large flows been dropped? Well, no, it hasn't. The wording just shifted a bit. On the same page in the 2010 version we find:
- 1.A.3 Firms must value portfolios in accordance with the composite-specific valuation policy. Portfolios must be valued:
- b. For periods beginning on or after 1 January 2010, on the date of all large cash flows. Firms must define large cash flow for each composite to determine when portfolios in that composite must be valued.
But NO, the requirement was NOT dropped!
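What does the requirement mean in practice? Here is a minimal sketch with invented values: when a large flow arrives, value the portfolio on that date and geometrically link the flow-free subperiod returns, which yields a true time-weighted return for the month.

```python
# A minimal sketch (invented values) of revaluing at a large flow: value the
# portfolio on the flow date, then geometrically link the subperiod returns.
subperiods = [
    # (beginning value, value on the flow date just before the flow)
    (1_000_000, 1_020_000),  # 1,020,000 + 200,000 inflow starts the next leg
    (1_220_000, 1_250_000),  # remainder of the month
]

growth = 1.0
for bmv, emv in subperiods:
    growth *= emv / bmv      # each subperiod return is free of flows
print(f"true time-weighted return for the month: {growth - 1:.4%}")
```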
Tuesday, March 9, 2010
More than 1,000 visitors so far!
We're pleased to announce that more than 1,000 "unique visitors" have visited this blog! It may not sound like a lot when we consider some sites that have tens of thousands of visitors, but our industry isn't that big, so to get more than 1,000 in less than 10 months is, I think, pretty good! We're quite global as well, as we've had visitors from almost 50 countries in addition to the U.S.
So, thank you for visiting!
When more is too much
As a GIPS(R) (Global Investment Performance Standards) verifier, we get to see lots of presentations. Some firms are "lean and mean," and only show what's required; others toss in all kinds of verbiage which isn't necessary.
Example: including how you calculate your returns. Why? Does your prospect really want to know this? If they do, they'll ask for it (recall that you have to let them know that they have that right). So why take up space with this information?
Example: telling us what you don't do. For example, "we don't have carve outs," or "we don't invest in non-US securities, so aren't subject to withholding taxes." Negative statements aren't required.
One advantage to tossing everything in is that individuals are less likely to read the presentation (who, for example, ever reads a mutual fund prospectus in its entirety?). But, this isn't what you're supposed to do, right? You should want your prospects to read what you give them, so rather than confuse with lots of details which aren't needed, I say provide the minimum information that truly has value to the recipients.
Monday, March 8, 2010
Heavy emphasis on risk?
Money Management Letter offered the following regarding the revised Global Investment Performance Standards (GIPS(R)) which will go into effect next January:
"The CFA Institute has placed a heavy emphasis on risk disclosure in its revised Global Investment Performance Standards, which it announced late last month. For the first time, it is requiring that firms seeking to comply with the institute’s standards give investors a standard of comparison of risk in investment strategies."
First, I wouldn't say that it's the "CFA Institute" that is placing the emphasis; it's the GIPS Executive Committee. Granted, the CFA Institute technically owns the trademark for GIPS, but to attribute these requirements to them is a bit inaccurate. Second, "heavy emphasis"? Requiring a 3-year annualized standard deviation constitutes "heavy emphasis"? There are countless individuals who would argue that standard deviation isn't even a risk measure.
The standards should require the disclosure of risk, and standard deviation is arguably the most frequently used measure of risk, in spite of its detractors. But it's also a poor measure of risk from many perspectives. Perhaps a better requirement would be for firms to provide a measure of risk of their own choosing. And while some might say "then how can you compare managers, when one shows tracking error and another beta?" we could respond "then ask the managers to show additional measures!" Surely they would be willing to do this. Okay, so perhaps this isn't the best idea, but is there a "best" idea? Tough subject, no doubt. But I still stand by my earlier statement that we aren't seeing a "heavy" emphasis on risk. I guess hyperbole can still be used to get your attention, though: it got mine.
"The CFA Institute has placed a heavy emphasis on risk disclosure in its revised Global Investment Performance Standards, which it announced late last month. For the first time, it is requiring that firms seeking to comply with the institute’s standards give investors a standard of comparison of risk in investment strategies."
First, I wouldn't say that it's the "CFA Institute" that is placing the emphasis, it's the GIPS Executive Committee. Granted, the CFA Institute technically owns the trademark for GIPS, but to attribute these requirements to them is, I bit, inaccurate. Secondly, "heavy emphasis"? Requiring a 3-year annualized standard deviation constitutes "heavy emphasis"? There are countless individuals who would argue that standard deviation isn't even a risk measure.
The standards should require the disclosure of risk and standard deviation is arguably the most frequently used measure of risk, in spite of its detractors. But, it's also a poor measure of risk from many perspectives. Perhaps a better requirement would be for firms to provide a measure of risk but a measure of their choosing. And while some might say "then how can you compare managers, when one shows tracking error and another beta?" we could respond "then ask the managers to show additional measures!" Surely they would be willing to do this. Okay, so perhaps this isn't the best idea, but is there a "best" idea? Tough subject, no doubt. But, I still stand with my earlier statement that we aren't seeing a "heavy" emphasis on risk. I guess hyperbole still be used to get your attention, though: it got mine.
Thursday, March 4, 2010
Attribution and GIPS
I just got off the phone with a verification client who wanted to discuss attribution and the requirements of the Global Investment Performance Standards (GIPS(R)). Many firms want to include attribution in their marketing materials, and this is great! It provides the recipient with an idea as to the source(s) of the firm's excess return and (hopefully) validates their claims.
There are no rules when it comes to GIPS and attribution, other than that attribution should be labeled as "supplemental information." You have two choices:
- show attribution of the full composite, where you treat the composite as a single portfolio
- show attribution for a "representative portfolio" within the composite.
The second approach raises a couple of questions:
- Are they truly representative?
- What happens when they leave? You'll have to find another portfolio to use and link the results.
We were asked to provide some ideas on disclosure language:
- Supplemental Information: the attribution results are of a representative portfolio which is a member of the composite. We believe these effects are representative of the composite, in general, as well as the other accounts within it.
- Supplemental Information: these results are of a representative portfolio which is in this composite. As with all representative portfolios there's a risk that the one selected presents the results the manager wishes to display. All portfolios within this composite are managed in a very similar manner, and therefore we believe these results truly represent the composite and all the accounts within it.
- Supplemental Information: these results are of a representative portfolio which is in this composite. As with all representative portfolios there's a risk that the one selected presents the results the manager wishes to display. To avoid this we chose the portfolio with the longest history. All portfolios within this composite are managed in a very similar manner, and therefore we believe these results truly represent the composite and all the accounts within it.
- Supplemental Information: these results are of a representative portfolio which is in this composite. As with all representative portfolios there's a risk that the one selected presents the results the manager wishes to display. To avoid this we selected the portfolio at random. All portfolios within this composite are managed in a very similar manner, and therefore we believe these results truly represent the composite and all the accounts within it.
Wednesday, March 3, 2010
How come the math doesn't work out?
We recently received an inquiry from a client regarding netting of advisory fees:
If an annual fee schedule is 60 bps, after you link the returns for 1 year it looks like returns were reduced by 88 bps. I know it's because we are linking and not adding or subtracting, but for some reason it's not making sense to others.
GOF Return for 1 year ending 01/31/2010= 67.52%
NOF return for 1 year ending 01/31/2010= 66.64%
Annual fee schedule: 60bps
This doesn't make sense, right? And so, let's walk through the math. First, I want to find the monthly equivalent of the annual GOF return: I simply add one to the annual return (1.6752) and raise it to the 1/12 power, and then subtract one. My answer: 4.39% (there are some extra decimal places which you'll no doubt retain in your spreadsheet). I geometrically link these and tie to the 67.52% annual return as provided.
Next, I divide the annual fee (0.60%) by 12 (0.05%) and enter this for each month; I geometrically link these and the result is my 60 bps return (actually 0.6016527...%). It is important to point out that our arithmetically derived monthly return won't geometrically link to exactly the starting annual return; they're only this close when the numbers are quite small, as in this case. We'll discuss this further below.
For each month, I derive the net-of-fee return by simply subtracting the 5 bp fee from the monthly gross-of-fee return: 4.34 percent. I then geometrically link these and obtain an annual return of 66.56%; note that this doesn't match the NOF return our client has, but this can be for a variety of reasons, one being that their returns varied from month to month, while I kept them equal for simplicity.
We expect that our GOF minus NOF annual return should equal our annual fee of 0.60%; however, it doesn't! It's 0.96%. How come?
It has to do with compounding. Think of this problem as if we were dealing with excess returns, where our fee is the benchmark and the NOF return represents our excess return. We know, from numerous articles, that arithmetically derived excess returns don't link: this is why we have such tools as those developed by Menchero, Carino, and Frongello for multi-period attribution.
Our monthly net-of-fee returns will not geometrically link at the same rate as our gross-of-fee returns, because there's a size difference: compounding builds upon prior periods, and the larger the prior period's value, the greater the rate of compounding. I hope this explanation makes sense. The results are correct; they just don't tie out as we'd like them to or believe they should.
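For anyone who wants to replicate the walk-through, here is a minimal sketch (equal monthly returns assumed, as above):

```python
# A minimal sketch reproducing the walk-through above: equal monthly gross
# returns that compound to 67.52%, a 5 bp monthly fee, and a gross-minus-net
# difference of roughly 96 bps rather than the 60 bp annual fee.
annual_gof = 0.6752
monthly_gof = (1 + annual_gof) ** (1 / 12) - 1  # ~4.39% per month
monthly_fee = 0.0060 / 12                       # 5 bps per month
monthly_nof = monthly_gof - monthly_fee         # ~4.34% per month

annual_nof = (1 + monthly_nof) ** 12 - 1        # ~66.56%
print(f"GOF {annual_gof:.2%}  NOF {annual_nof:.2%}  "
      f"difference {annual_gof - annual_nof:.2%}")  # ~0.96%, not 0.60%
```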
Shorten your URLs
She's done it again! I enjoy reading Susan Weiner's blog because (a) I like to write and (b) she introduces some neat ideas. In yesterday's post she introduces us to a way to shorten our URLs, through a service called TinyURL.com. I just tried it and it works quite well. If you have one of those massively long URLs which you're trying to fit neatly into LinkedIn, a letter, or some other communication, take advantage of this free service.
Tuesday, March 2, 2010
Client reporting ... getting it right
I got a call earlier today from a California-based investment consultant who is wrestling with a problem and wanted to know if there were any rules available. Here's the scenario:
One of their client's managers farms out the actual investing for a particular strategy to subadvisors. Two different returns are produced. One might be called the manager's "marketed" return; this is the return that represents the manager's overall historical performance, through subadvisors. The manager has the right to hire and fire managers, and is entitled (and more correctly, obligated) to show this return to prospects. Presumably this return is derived in accordance with the Global Investment Performance Standards (GIPS(R)) and is the corresponding return for the composite this client is in. The second is the client's actual return. And so, which should this client be seeing?
As I've mentioned in the past, when faced with this kind of situation you should ask "what question are you trying to get answered?" If the client is interested in knowing how the manager performs in general for this strategy, then clearly the "marketed" return is what they should be seeing. However, if the client wants to know "how has this manager done for me?," then they want to see their actual return. Apparently the client's reporting agency is providing the "marketed return," but this consultant wants the client to see their actual return.
To me, this is not unlike the world of wrap fee or SMA accounts, where SMA (separately managed account) sponsors typically use a "marketed return" to peddle the various strategies they offer. Here, the sponsor serves as a subadvisor, identifying managers to invest in the different strategies. When it comes to reporting to their clients, however, they show them their individual actual return.
Back to the consultant's question: are there any rules? Well, no, there aren't, and I wouldn't expect to see any. But I think the answer is pretty clear. And if the reporting agency can't get the reporting right, perhaps it's time to look for a new one!
Consistency ... important characteristic
My friend, Carl Bacon, and I have discussed the topic of what side of the road one should drive on; if you've witnessed Carl and me together, you're not surprised to hear we disagree on this topic. The Brits, as you no doubt are aware, drive on the left side while Americans (and just about everyone else in the universe) drive on the right, which arguably IS the right side. Okay, so we agree that we disagree on this.
Americans are consistent about this notion of being on the right side in everything we do: walking down the street, we walk on the right; riding a bike, we ride on the right; pass someone while riding a horse, you pass on the right; going up a flight of stairs, we walk on the right. What about in the UK? Well, it doesn't take long to realize that they're a bit agnostic about this, and while folks generally walk on the right side of the sidewalk (not the left as you might expect), you'll see escalators which are sometimes going up on the left side while at other times they're on the right (just visit some of the underground stations to witness this inconsistency).
And so, what the heck does this have to do with performance? We need to be consistent. When you employ policies, you have to be consistent.
When we conduct GIPS(R) (Global Investment Performance Standards) verifications, we look for consistency. Firms get to make many of their own rules up; as long as they're reasonable, fine! But, they have to be consistent in executing them. Otherwise, they've got problems.
Monday, March 1, 2010
It's in the press release!
In our February newsletter I mentioned that the new edition of the Global Investment Performance Standards (GIPS(R)) (i.e., GIPS 2010) doesn't reference "early adoption" as the '05 version does. What I failed to do was to indicate that this wording is in the press release that announces the new version. There (if we take the time to read it, which clearly I didn't) we find: "Firms that claim compliance with the GIPS standards have until 1 January 2011 to adhere to the new requirements, and early adoption is recommended." (emphasis added)
I apologize to our readers and the GIPS Executive Committee (EC) for not taking the time to read the press release in its entirety. I will make note of this in our March issue, as well.