Calculating rates of return really isn't that difficult. And while we may debate whether returns should be calculated using money-weighting or time-weighting, such issues pale in comparison with the broader matter of ethics.
I must confess that I initially thought that the Certificate in Investment Performance Measurement (CIPM) Program's emphasis on ethics seemed a bit excessive. And perhaps the exam questions could be geared more to the issues and situations that performance analysts and performance heads are more likely to encounter. But the issue of ethics shouldn't be ignored. We seem to be reminded of this on a fairly regular basis.
Yesterday's Wall Street Journal carried an article on Raj Rajaratnam and his Galleon hedge fund. Recall that Mr. Rajaratnam has been accused of trading on inside information. His prowess at building relationships, which allegedly resulted in his gaining access to confidential information, allowed him to build a successful business and amass a fortune of more than a billion dollars.
Galleon's returns were apparently quite impressive. And one might not find any fault with the accuracy of the valuations or the return methodology employed. But what is the value of the returns if, as has been suggested, they reflect the results of illegal activity?
What is the responsibility of the performance analyst or manager, should they have reason to believe that the returns reflect illicit activity? Perhaps questions like these should be added to the CIPM program. What would YOU do, if you had reason to believe that the firm's results were arrived at through illegal means? Something to ponder, yes?
p.s., We are investigating another fraud case, which might involve a firm that claimed compliance with the Global Investment Performance Standards (GIPS(R)). We will provide details once we've done further vetting of the information we've seen so far.
Tuesday, December 29, 2009
Active vs. Passive Voice
When commenting on someone's writing I often recommend using the active voice. A recent example may help.
A client sent us their GIPS(R) Policies & Procedures document to review. It included the following: "there are few industry-accepted standards for calculation and presentation of after-tax returns"; a passive, noun-heavy construction. I suggested recasting it more actively; for example, "there are few industry-accepted standards to calculate and present after-tax returns." It's shorter, more direct, and arguably reads better.
I, like most people, used to write almost exclusively in the passive voice, but eventually learned the difference and try to use active as often as possible. It's fine to include some passive voice, but most of the writing should be in the active mode. When I review client documents, such as their firm's policies to support their compliance with the Global Investment Performance Standards, this is one of the items I'll comment on. Clearly, it's up to the client to switch, but most recognize the benefits and seem to appreciate the suggestions.
In a piece posted today, Susan Weiner comments at length on this topic and provides additional resources to support one's transition to the active voice. I suggest you check it out!
Good news for 2010, as predicted by the yield curve
In a recent blog piece, Larry Kudlow discussed how the yield curve is portending a positive economic environment for 2010; here's part of what he wrote:
"The Yield Curve is Signalling Bigger Growth
"What’s a yield curve and why is it so important?
"Well, the curve itself measures Treasury interest rates, by maturity, from 91-day T-bills all the way out to 30-year bonds. It’s the difference between the long rates and the short rates that tells a key story about the future of the economy.
"When the curve is wide and upward sloping, as it is today, it tells us that the economic future is good. When the curve is upside down, or inverted, with short rates above long rates, it tells us that something is amiss -- such as a credit crunch and a recession.
"The inverted curve is abnormal, the positive curve is normal. We have returned to normalcy, and then some. Right now, the difference between long and short Treasury rates is as wide as any time in history. With the Fed pumping in all that money and anchoring the short rate at zero, investors are now charging the Treasury a higher interest rate for buying its bonds. That’s as it should be. The time preference of money simply means that the investor will hold Treasury bonds for a longer period of time, but he or she is going to charge a higher rate. That is a normal risk profile.
"The yield curve may be the best single forecasting predictor there is. When it was inverted or flat for most of 2006, 2007, and the early part of 2008, it correctly predicted big trouble ahead. Right now it is forecasting a much stronger economy in 2010 than most people think possible."
Good news for investors and everyone else, too!
"The Yield Curve is Signalling Bigger Growth
"What’s a yield curve and why is it so important?
"Well, the curve itself measures Treasury interest rates, by maturity, from 91-day T-bills all the way out to 30-year bonds. It’s the difference between the long rates and the short rates that tells a key story about the future of the economy.
"When the curve is wide and upward sloping, as it is today, it tells us that the economic future is good. When the curve is upside down, or inverted, with short rates above long rates, it tells us that something is amiss -- such as a credit crunch and a recession.
"The inverted curve is abnormal, the positive curve is normal. We have returned to normalcy, and then some. Right now, the difference between long and short Treasury rates is as wide as any time in history. With the Fed pumping in all that money and anchoring the short rate at zero, investors are now charging the Treasury a higher interest rate for buying its bonds. That’s as it should be. The time preference of money simply means that the investor will hold Treasury bonds for a longer period of time, but he or she is going to charge a higher rate. That is a normal risk profile.
"The yield curve may be the best single forecasting predictor there is. When it was inverted or flat for most of 2006, 2007, and the early part of 2008, it correctly predicted big trouble ahead. Right now it is forecasting a much stronger economy in 2010 than most people think possible."
Good news for investors and everyone else, too!
Monday, December 28, 2009
Standard Deviation ... a risk measure or not?
Standard deviation is a much-misunderstood measure, despite its common use.
First, is it a risk measure? It depends on who you ask. It's evident that Nobel Laureate Bill Sharpe considers it to be one, since it serves this purpose in his eponymous risk-adjusted measure. Our firm's research has shown that it is the most commonly used risk measure.
And yet, there are many who claim that it does anything but measure risk. What's your definition of risk? If it's the inability to meet a client's objectives, how can standard deviation capture that? But for decades, individuals have looked at risk simply as volatility.
As to volatility, is it a measure of volatility or variability? In an e-mail response to this writer, Bill Sharpe said that the two terms can be used in an equivalent manner.
The GIPS(R) (Global Investment Performance Standards) 2010 exposure draft includes a proposed requirement for compliant firms to report the three year annualized standard deviation, which appears to have survived the public's criticism and will be part of the rules, effective 1 January 2011. But, will it be called a "risk measure"? This remains unclear.
Interpreting standard deviation is a challenge, since the result's meaning depends on the return around which it's measured. Example: your standard deviation is 1%; is this good or bad? If your average return is 20%, then knowing that roughly two-thirds of the distribution falls within plus-or-minus 1% doesn't seem bad at all; but if your average return is 0.50%, doesn't 1% sound a lot bigger? In practice, it's better used to compare managers, or a manager with a benchmark. Better yet, use it within the Sharpe ratio, which brings risk and return together.
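To make this concrete, here's a minimal Python sketch of the 36-month annualized standard deviation (the statistic the GIPS draft calls for) together with a Sharpe ratio; the monthly returns and risk-free rate below are invented purely for illustration:

```python
import statistics

# 36 hypothetical monthly returns (as decimals), purely for illustration
monthly_returns = [0.012, -0.008, 0.015, 0.003, -0.011, 0.007] * 6

# Sample standard deviation of monthly returns, annualized by sqrt(12)
monthly_sd = statistics.stdev(monthly_returns)
annualized_sd = monthly_sd * 12 ** 0.5

# Sharpe ratio: excess return over the risk-free rate, per unit of volatility
mean_monthly = statistics.fmean(monthly_returns)
annualized_return = (1 + mean_monthly) ** 12 - 1  # geometric annualization
risk_free = 0.02                                  # assumed annual risk-free rate
sharpe = (annualized_return - risk_free) / annualized_sd

print(f"annualized std dev: {annualized_sd:.2%}, Sharpe ratio: {sharpe:.2f}")
```

The same 1% standard deviation produces very different Sharpe ratios depending on the average return, which is precisely the interpretation problem noted above.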
I could go on and on, but will bring this to a close. Bottom line: it's easy to calculate (if we can agree on how, a topic for another day), in common use, and has a Nobel Prize winner's endorsement. Will it go away? Not a chance. If you're not reporting it, you probably should be.
Sunday, December 27, 2009
First ten years of the 21st century are already done?
In last week's Wall Street Journal, there was an entire section dedicated to revisiting the end of the first decade of the 21st century. But, on this one, the WSJ has it wrong: they're a year too early.
The first century (A.D.) began in year 1 and ended in year 100 (thus, the end of the first 100 years). The second century began in 101 and ended in 200. Fast forward: the 19th century began in 1801 and ended in 1900, and the 20th century began in 1901 and concluded in 2000, not 1999! Yes, I know that MANY, MANY people thought they welcomed in the new millennium on January 1, 2000, but they were a year early: the 21st century began January 1, 2001. Therefore, its first decade ends December 31, 2010, not December 31, 2009, as the WSJ (and, unfortunately, others) would have us believe.
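For those who like their arithmetic in code form, here's a tiny sketch of the convention just described:

```python
def century(year: int) -> int:
    """Century under the convention that the first century ran from year 1 to year 100."""
    return (year - 1) // 100 + 1

print(century(1900))  # 19 -> 1900 closes the 19th century
print(century(2000))  # 20 -> 2000 closes the 20th century
print(century(2001))  # 21 -> 2001 opens the 21st century
```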
Time marches on quickly enough without our news media trying to rush it along even faster.
p.s., Think I may be wrong on when the new millennium began? Check this out (towards the bottom)! Looks pretty official to me!
Saturday, December 19, 2009
A holiday break
I decided that this week will be a break from blogging. And so, I wish all of our Christian readers a blessed and very Merry Christmas; to our Jewish readers, a somewhat belated Happy Chanukah; and to everyone, a prosperous, healthy, safe and Happy New Year! May 2010 be a wonderful one for all of us.
And finally, as Tiny Tim offered, God Bless Us, Every One!
Thursday, December 17, 2009
Inflation...an interview with John Longo, PhD
Our friend and colleague, Rutgers University professor John Longo, was recently interviewed on CNBC. We suggest you have a look & listen:
Attribution with an index that values only monthly
A client asked me how they should handle the following situation: the benchmark their client assigned to them is a blend of two indexes, one of which values only monthly (the other, daily). They calculate performance attribution daily and smooth the one index's monthly return across each day of the month (thus assuming a nice linear progression through the month). Is this okay?
No, I would say in general it isn't. And, it occurred to me that there's a better solution!
Our client calculates daily transaction-based attribution. It is my contention (shared by others) that transaction-based attribution can be done on a monthly basis with no loss in accuracy: there is no need to do it daily. By virtue of the transaction process, any trades that occur are picked up in the weight, which includes the beginning value plus weighted flows (i.e., trades). And so, the ideal approach (I believe) is to use monthly transaction-based attribution. This is especially true since they have no valid daily benchmark ... the daily results are spurious, at best.
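To illustrate the point about weights, here's a minimal sketch (the function, dates, and amounts are all hypothetical) of how a segment's monthly average capital picks up intra-month trades as day-weighted flows:

```python
DAYS_IN_MONTH = 31

def weighted_capital(begin_value, flows):
    """Average capital, Modified Dietz style: beginning value plus day-weighted flows.

    flows: list of (day_of_month, amount) tuples; buys positive, sells negative.
    """
    base = begin_value
    for day, amount in flows:
        weight = (DAYS_IN_MONTH - day) / DAYS_IN_MONTH  # fraction of month invested
        base += amount * weight
    return base

# A sector starting at 500, buying 100 on day 10, selling 40 on day 25
print(weighted_capital(500, [(10, 100), (25, -40)]))
```

Because the trades are embedded in the weight, the monthly result reflects them without any need for daily benchmark values.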
Wednesday, December 16, 2009
Interaction effect: show it or hide it?
One of the controversial topics in performance attribution has to do with interaction. This effect exists in several models, but we'll limit our discussion to its presence in the Brinson-Fachler model. Recall that there are three effects in all: allocation, selection, and interaction (the formulas are shown below).
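Here, as promised, are the formulas, rendered as a short Python sketch of the three effects for a single segment (the example numbers are mine):

```python
def brinson_fachler(wp, wb, rp, rb, rb_total):
    """Brinson-Fachler effects for one segment.

    wp, wb   : portfolio and benchmark weights of the segment
    rp, rb   : portfolio and benchmark returns of the segment
    rb_total : the overall benchmark return
    """
    allocation = (wp - wb) * (rb - rb_total)
    selection = wb * (rp - rb)
    interaction = (wp - wb) * (rp - rb)
    return allocation, selection, interaction

# Example: overweight a segment whose portfolio return trails its benchmark return
print(brinson_fachler(wp=0.25, wb=0.20, rp=0.01, rb=0.02, rb_total=0.015))
```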
The interaction effect represents the impact from the interaction of the allocation and selection decisions. Several good reasons have been offered for not showing interaction; when not showing it, we typically change the weight in the selection term to the portfolio weight, meaning that the selection decision is expanded to include interaction (though this is rarely stated as such).
If we reflect on what the possible results can be with interaction we conclude the following:
- overweighting (positive) times outperformance (positive) = positive result
- overweighting (positive) times underperformance (negative) = negative result
- underweighting (negative) times outperformance (positive) = negative result
- underweighting (negative) times underperformance (negative) = positive result.
During our recent Trends in Attribution (TIA) conference, one panelist pointed out the "flaw" of underweighting times underperformance: a positive result. What DOES one make of this? I think it's easy, after some recent reflection: this shows that the allocation decision was a wise one! That is, they underweighted at a time when there was underperformance (would you propose to overweight?).
In an article I wrote on this topic, I proposed that if you want to "eliminate" the interaction effect, you create a "black box" to analyze the interaction effect when it shows up, and allocate it in a conscious and methodical way. I still hold to this belief and to the value of the interaction effect, and oppose any arbitrary assignment of it to selection or allocation. Space doesn't permit much more at this time, but perhaps I'll take this up again at a later date.
Spaulding, David. "Should the Interaction Effect be Allocated? A 'Black Box' Approach to Interaction." The Journal of Performance Measurement, Spring 2008.
Tuesday, December 15, 2009
Technology survey
We are in the midst of completing our Performance Measurement Technology survey and are eager to hear from you regarding your use of technology. So please respond today!
As always:
- your information is kept confidential
- as a participant, you'll receive a complimentary copy of the results!
How big should our performance measurement team be?
One question that occasionally surfaces is how big a performance department should be. There is no simple answer and it's difficult to decide by simply doing a comparison with other firms. Some key points:
- all firms are different. Okay, maybe a bit of hyperbole, but there are a variety of models that can be applied, which make comparisons difficult.
- assets under management may be a good gauge to say how much you can afford but not necessarily how big you should be.
- number of accounts: the more you have, the larger your staff has to be.
- reliance on spreadsheets: the greater the reliance, the greater the amount of manual effort, and therefore the greater the need for staff. Packaged software typically has many tools that aid the team, thus reducing the manual effort.
- activities of the group: performance measurement can address returns (portfolio and subportfolio), risk, attribution (equity, fixed income, balanced, etc.), and the Global Investment Performance Standards (GIPS(R)). If the firm is only engaged in returns, its staffing needs won't be too great; but if it takes on more tasks, more staff may be needed.
- reporting: if the group handles both internal and client reporting, staff is needed; also, if there are custom reports, or if the frequency varies from daily to monthly to quarterly, more staff may be needed.
- reliance on and support from back office operations: if the operations staff is clued into the needs of performance, they may be helpful in addressing data issues; but, if they are clueless (okay, a bit strong, but you get the point) about their needs, they may not be as helpful as one might hope, meaning more work falls to the performance measurement team.
- client demands: if the firm caters to a lot of individual needs, including custom reporting or fielding inquiries, the workload increases, meaning the need for staff.
The question of how much staff one needs has no simple answer. Hopefully this at least gives you some ideas.
Monday, December 14, 2009
VaR: the new benchmark for managing financial risk
I subscribe to a few Google Alerts, so that I'm aware when interesting stories are posted. One is for Value at Risk, and yesterday there was a link to a book by Philippe Jorion with the headline as noted above.
"The new benchmark for managing financial risk?"
Perhaps someone should alert Nassim Taleb of such a statement. I'd say that the jury is definitely out on the value of VaR. And to suggest that it's the "new benchmark" to manage risk is hyperbole run amok. PLEASE!
I'm thinking some dialogue is in order to discuss the various risk measures and their respective merit. I'll begin shortly with my views.
"The new benchmark for managing financial risk?"
Perhaps someone should alert Nassim Taleb of such a statement. I'd say that the jury is definitely out on the value of VaR. And to suggest that it's the "new benchmark" to manage risk is hyperbole run amok. PLEASE!
I'm thinking some dialogue is in order to discuss the various risk measures and their respective merit. I'll begin shortly with my views.
Sunday, December 13, 2009
"we shall see no more financial panics."
With the U.S. Congress's plans to tinker with our current financial structure, including a weakening, or at least a shifting, of the Federal Reserve's powers, one might expect someone to utter such a line as is shown above. Actually, it was spoken by Charles S. Hamlin, one of the original seven Federal Reserve Board governors, in 1914 ("Under the Federal Reserve system we shall have no more financial panics"). Less than ten years later, the country observed a panic, and we've had several since.
While we might applaud our elected officials for trying to prevent a recurrence of the crisis we only now seem to be emerging from, one must also wonder how much is being done simply for showmanship, or to appeal to the electorate. It will also be interesting to see whether all of the planned provisions survive eventual Supreme Court review, given the Federal Reserve's autonomy.
Is Congress overreacting? Are its actions appropriate? Unfortunately, the anti-Wall Street fervor has grown to the point where such steps aren't surprising. Hopefully, we won't suffer as a result.
Source for Hamlin quote: Moss, David. The Federal Reserve and the Banking Crisis of 1931. "Harvard Business School Cases."
Saturday, December 12, 2009
"more regulation, higher risk aversion, more aversion to math."
As I recently mentioned, I occasionally go exploring other blogs, and I came across Steve Hsu's. Although Steve is a physics professor at the University of Oregon, he feels comfortable opining on the investment industry. (Recall that Jose Menchero, too, was a physics professor and now develops risk models for BARRA.) He offered his views on the future, in response to a question posed to him.
The question "do you think now would be a good time for someone to consider beginning a career as a quant?" engendered the following from Steve: "A sea change is coming: more regulation, higher risk aversion, more aversion to math.
"On CNBC you can already hear non-mathematical industry types trying to blame it all on models.
"Nevertheless the long term trend is still to greater securitization and more complex derivatives -- i.e., more quants! But hopefully people will be much more skeptical of models and their assumptions. Personally I think it is the more mathematically sophisticated types (or, more specifically, perhaps those who come from physics and other mathematical but data driven subjects ;-) who are likely to be more skeptical about a particular model and how it can fail. People who don't understand math have to take the whole thing as a black box and can't look at individual moving parts."
If you visit Steve's blog you'll see the graphic I've included (with his permission) above. He suggests that it resembles the rather complex network of CDS (credit default swap) contracts. He uses this, and the relationships that existed, to describe the systemic risk that resulted.
He introduces a concept which I found quite interesting: "some entities are too connected to fail, as opposed to too BIG to fail. Systemic risk is all about complexity."
I also have to say that I'm intrigued with his "aversion to math" statement. Finance and investing is all about math. But perhaps the complexity of some of the deals IS too extensive and perhaps needs to be toned back a bit. Will it, though? Only time will tell.
The question "do you think now would be a good time for someone to consider beginning a career as a quant?" engendered the following from Steve: "A sea change is coming: more regulation, higher risk aversion, more aversion to math.
"On CNBC you can already hear non-mathematical industry types trying to blame it all on models.
"Nevertheless the long term trend is still to greater securitization and more complex derivatives -- i.e., more quants! But hopefully people will be much more skeptical of models and their assumptions. Personally I think it is the more mathematically sophisticated types (or, more specifically, perhaps those who come from physics and other mathematical but data driven subjects ;-) who are likely to be more skeptical about a particular model and how it can fail. People who don't understand math have to take the whole thing as a black box and can't look at individual moving parts."
If you visit Steve's blog you'll see the graphic I've included (with his permission) above. He suggests that it resembles the rather complex network of CDS (Credit Default Swap) contracts. He uses this, and the relationships that existed, to describe the systematic risk which resulted.
He introduces a concept which I found quite interesting: "some entities are too connected to fail, as opposed to too BIG to fail. Systemic risk is all about complexity."
I also have to say that I'm intrigued with his "aversion to math" statement. Finance and investing is all about math. But perhaps the complexity of some of the deals IS too extensive and perhaps needs to be toned back a bit. Will it, though? Only time will tell.
Friday, December 11, 2009
What's the standard way to handle cash flows?
Here, I'm addressing cash flows from a return calculation perspective.
First, recall that in two weeks we'll have a requirement for GIPS(R) compliant firms to revalue portfolios for large cash flows, where the firm decides what "large" represents. But what happens when the flows aren't large?
I'd guess that most firms treat all flows as start-of-day events. There are, of course, some firms that treat them as end-of-day events. My recommendation: treat inflows as start-of-day and outflows as end-of-day. I'd like to provide a clear rationale for this, but I can't; anecdotal evidence suggests it's the best approach. Of late we've seen many firms and vendors begin to adopt this practice. That doesn't make it "right," but it at least adds support for my position.
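To see what the timing choice actually does to the math, here's a minimal Modified Dietz sketch that weights inflows as start-of-day and outflows as end-of-day; all the figures are hypothetical:

```python
def modified_dietz(bmv, emv, flows, days_in_period):
    """Modified Dietz return; flows: (day, amount), inflows positive, outflows negative.

    Inflows are weighted as start-of-day (at work from the morning of the flow date);
    outflows as end-of-day (at work through the close of the flow date).
    """
    gain = emv - bmv - sum(amount for _, amount in flows)
    avg_capital = bmv
    for day, amount in flows:
        if amount >= 0:  # inflow: start-of-day
            weight = (days_in_period - day + 1) / days_in_period
        else:            # outflow: end-of-day
            weight = (days_in_period - day) / days_in_period
        avg_capital += amount * weight
    return gain / avg_capital

# $1,000,000 start; $50,000 in on day 10; $20,000 out on day 20; $1,045,000 at month end
print(modified_dietz(1_000_000, 1_045_000, [(10, 50_000), (20, -20_000)], 30))
```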
Regardless of what approach you take, ensure that you're consistent in its application.
p.s., I just realized I didn't answer my own question! There is NO standard on how cash flows should be handled. GIPS doesn't address this, either. The only "standard" would be to be consistent...i.e., don't use cash flow timing to "game" the process to generate higher returns.
Wednesday, December 9, 2009
Significant cash flows ... so easy to get wrong
Perhaps it's partly because "significant" and "large" can be viewed synonymously that we find confusion regarding how firms can handle these within GIPS(R). One of our clients was clearly thinking that "significant" means the same as "large" when it comes to the standards, but it doesn't. Significant flows deal with the opportunity to temporarily remove portfolios from composites in the event of large (sorry, I mean significant) cash flows...the idea being that all of a sudden you get a lot of cash and it may take time to invest it, so remove the account while you get the cash invested. On the other hand, effective 1/1/10, GIPS compliant firms will be required to revalue portfolios for large flows.
I'm at an outsourcing client who has a client who NETS cash flows during the month to determine if the account should be removed. NETS cash flows? And why do they do this??? Hopefully we'll find out, but let's think about this.
Their level to remove the account is >25% of the beginning market value. And so, we start with $1 million. On August 3 a $300,000 contribution is made. Then, on August 24, the account withdraws $250,000. Net = 300,000 - 250,000 = 50,000; 50,000 / 1,000,000 = 5%, which is less than 25%; therefore, don't remove the account. WHAT???
If the firm actually wishes to take advantage of the optional significant cash flow provision, they should have removed the account because of the August 3rd contribution (because presumably there's a bunch of cash they need to get invested). The fact that a few weeks later the client decided to withdraw funds has nothing to do with the earlier flow. In my opinion, they are confused!
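A short sketch of the two approaches, using the figures from the example above (the function and threshold handling are mine):

```python
def significant_flows(bmv, flows, threshold=0.25):
    """Test each flow individually against the beginning market value."""
    return [(day, amt) for day, amt in flows if abs(amt) / bmv > threshold]

bmv = 1_000_000
flows = [(3, 300_000), (24, -250_000)]  # the August example above

# Per-flow test: the Aug 3 contribution (30% of BMV) exceeds the 25% level
print(significant_flows(bmv, flows))    # [(3, 300000)]

# The client's netting approach: 50,000 / 1,000,000 = 5% -- nothing gets flagged
net = sum(amt for _, amt in flows)
print(abs(net) / bmv > 0.25)            # False
```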
Also, in my opinion: this is (a) wrong and (b) shouldn't be permitted by the firm's verifier.
Tuesday, December 8, 2009
Still trying to get my arms around the error correction rules
- Non-material: ["Take no action"] error is so minor that no action is needed.
- Non-material: ["Correct the presentation with no disclosure of the change"] you'll (a) correct the error, but (b) not document it (i.e., disclose it in your GIPS presentation) nor (c) not tell anyone
- Non-material: ["Correct the compliant presentation with disclosure of the change and no distribution of the corrected presentation"] you'll (a) correct the error, (b) document it, but (c) not tell anyone
- Material: ["Correct the presentation with disclosure of the change and make every reasonable effort to provide a corrected presentation to all prospective clients and other parties that received the erroneous presentation"] you'll (a) correct it, (b) document it, and (c) tell anyone who got a copy of the previously erroneous copy that an error was corrected.
At last week's Performance Measurement Forum meeting in Orlando we discussed this issue at some length. It appears that you don't have to have rules for all four cases. My advice regarding the four levels (by level):
- Identify the kinds of errors that you'd not bother correcting (spelling and grammatical errors immediately come to mind; you get to decide what else).
- Identify the level that you feel needs to be fixed but is so minor that you don't need to tell anyone (an example here might be a correction that increases your return).
- I don't see a need for this level. I base this on the Q&A, which basically says you don't have to document the error unless you can't determine if you gave a prospect a copy of the prior version. If this is the sole condition, since this level doesn't require redistribution, why would you document it?
- I'd establish the rules that would cause this to happen, but ensure that I've got records of who gets copies of what presentations, so that if a material error IS discovered, I can get them a copy of the revised presentation, if appropriate.
I'm thinking that perhaps a relative rule might be workable; for example, an error equal to 5% of the return itself (example: if the return was 2% and has been corrected to 1.89%, that's 11 bps, and 11/200 = 5.5%, so it's material). Perhaps for returns > 10% I'd set the level at 10% (again, of the return itself; example: if my return was 12% but corrected to 10.78%, that's a 122 bp drop, and 122/1200 = 10.17%, so it's material).
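That proposed rule as a minimal sketch (the tiers are just my examples above, not a recommendation; returns are in percent):

```python
def is_material(original, corrected):
    """Relative materiality: error size as a fraction of the return itself.

    Illustrative tiers only: 5% of the return for returns up to 10%,
    10% of the return above that.
    """
    error = abs(original - corrected)
    threshold = 0.05 if abs(original) <= 10 else 0.10
    return error / abs(original) > threshold

print(is_material(2.00, 1.89))    # 11 bps / 200 bps = 5.5%    -> True (material)
print(is_material(12.00, 10.78))  # 122 bps / 1200 bps = 10.2% -> True (material)
```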
This approach might be better than simply saying "100 bps" or "50 bps." I'm not saying to adopt these thresholds ... you decide what works for you. But perhaps this approach would provide the necessary flexibility so that the rules will make sense? Your thoughts are, as always, invited.
Value at Risk Article
The current issue of the NYSSA's (New York Society of Security Analysts) journal contains an article I wrote on Value at Risk. I invite you to view it.
It's intended as a basic introduction to how one can calculate VaR using the variance/covariance (aka correlation) method, which was championed by JP Morgan's RiskMetrics and so is in fairly common use. Although we haven't done any research on this, I suspect this approach is the most used of the three (the others being historical simulation and Monte Carlo).
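For readers who want to see the mechanics, here's a minimal sketch of the variance/covariance calculation for a hypothetical two-asset portfolio; the weights, volatilities, correlation, confidence level, and horizon are all assumptions:

```python
from statistics import NormalDist

# Hypothetical two-asset portfolio
w1, w2 = 0.6, 0.4    # asset weights
s1, s2 = 0.15, 0.25  # annual volatilities
corr = 0.3           # correlation between the two assets
value = 10_000_000   # portfolio value
confidence = 0.95

# Portfolio volatility: sqrt(w1^2 s1^2 + w2^2 s2^2 + 2 w1 w2 s1 s2 rho)
port_vol = (w1**2 * s1**2 + w2**2 * s2**2 + 2 * w1 * w2 * s1 * s2 * corr) ** 0.5

# Parametric VaR: z-score at the confidence level, times volatility, times value
z = NormalDist().inv_cdf(confidence)
print(f"one-year 95% VaR: ${z * port_vol * value:,.0f}")
```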
At our recent Performance Measurement Forum meeting in Orlando, a colleague volunteered to write an article for The Journal of Performance Measurement in which he'll provide a broad assessment of VaR's benefits and shortcomings. We've seen some harshly critical reviews of late, so a more objective one will be welcome.
I still remain skeptical of VaR's usefulness, but am open to hearing other perspectives. Hope you are, too!
Saturday, December 5, 2009
Abbreviations & acronyms
This post has NOTHING to do with performance. It's the weekend, and I simply want to comment on the use of abbreviations and acronyms. First, I fully support their use. In fact, I favor even more use of them. Let's first consider the difference: as www.dictionary.com points out, an acronym is "a word formed from the initial letters or groups of letters of words in a set phrase or series of words." Examples: RADAR, ASAP, and WAC. In the world of investment performance we have GIPS. Acronyms are abbreviations, but not all abbreviations are acronyms. For example, "PPS" (performance presentation standards) is an abbreviation, but since it isn't a word (i.e., you can't say it; you only say the letters individually) it's not an acronym. Understand, however, that there are a LOT of folks who would say that PPS IS an acronym, and even though they're technically wrong, society seems to be loosening the strict meaning of this term. But, we won't debate this here.
I happen to be a big fan of acronyms. We named our annual performance conference in such a way that it forms an acronym (PMAR = PeeMar). Our fall event was called TIA (Spanish for Aunt, but that's merely a coincidence and has no relevance).
I spent almost five years in the Army ... the military LOVES acronyms. TRADOC = Training and Doctrine Command; USAFAS = You-sa-fas = United States Army Field Artillery School. We already cited WAC, which is Women's Army Corps.
Some military acronyms have become commonly used and often misused. Take, for example, FUBAR and SNAFU. They're actually somewhat profane, though I'll use the softer translations: FUBAR = fouled up beyond all recognition; SNAFU = situation normal, all fouled up. When Bill Clinton was President he once remarked that they "had a SNAFU." I would suggest that first, the President shouldn't use such a term. Second, I don't believe it was the proper way to phrase it, though I won't be a stickler on this point. To me, any time you use a word or expression you should know what it means, otherwise you may offend (take folks who regularly use the Yiddish word "schmuck." This is NOT a nice word and shouldn't be used in mixed company...sorry).
The "word" ASAP is often used and, in my opinion, carries more weight than it's full meaning. If I tell you "I need this report ASAP" versus "I need this report as soon as possible," which sounds more urgent? I suggest the former, even though their meanings are identical.
Three abbreviations that are in common use in the military but that haven't made it into the outside world are IAW, NLT, and COB (COB isn't usually pronounced as a word "cob" but rather is treated as an abbreviation: c-o-b). IAW = in accordance with; NLT = no later than; COB = close of business. For example, "I need your report IAW my memo of July 7th NLT COB this Friday." Army guys use this wording ALL the time ... shouldn't it fit into our writing, too?
I rarely text on my phone and know that there is a host of abbreviations that folks use to save keystrokes. I am also aware that some young people now write reports for school using these abbreviations, which is causing some concerns: students need to know how to properly write before using shorthand notation. I don't intend to adopt these shorthand expressions in my writing, though I think a well placed abbreviation or acronym can be quite helpful. Hope you agree.
Friday, December 4, 2009
Field-work free verifications
As promised, I commented further on the topic of verification firms avoiding "field work" in our newsletter. A colleague from another verification provider responded with the following: "I was reading your most recent article regarding verifiers who do not conduct fieldwork. I just wanted to let you know that I, too, am disheartened by this process. I do not understand how you can do a verification without going to your client’s office. I am talking to a prospect now that told me that their verifier had not been in their offices for more than 5 years, yet they still get a verification report (and, by the way, they are not compliant)."
So, we're not alone in our disdain for such a practice. And, as this individual's experience suggests, the absence of field work can allow noncompliance to go undetected.
Should field work be mandatory? Unclear. But it should be considered "best practice," at a minimum.
Thursday, December 3, 2009
Dealing with breaks
A client recently asked about dealing with "breaks" in performance. First, what IS a break? We'd define it as a temporary loss of discretion over a client's assets, during which time no trading can be done. Breaks can be caused by changes in custodians as well as for other reasons. Another term for a "break" is a "gap." I opined on this topic a few years ago in our newsletter, regarding the calculation of returns (that is, whether you can link returns across gaps). The client's question had to do with GIPS(R).
If you search the GIPS Q&A database you'll only find one item dealing with this topic, and it doesn't really address temporary breaks. I recall discussing this a few years back with a group and we couldn't arrive at any clear consensus. Some thought ANY break meant that performance stops, while others felt that there should be an assessment as to whether or not there would likely have been any trading during the break: if not, then what's the harm in linking across it?
I tend to be in the latter group's camp: that is, when one has a break, they should determine the likelihood of trading occurring. If, for example, the manager is very much a buy-and-hold manager, who trades infrequently, then a gap of even a few weeks might not cause a problem. However, if the manager trades almost daily, then even a short break would be problematic.
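Mechanically, linking across a gap is just geometric chaining with the gap period skipped; a minimal sketch with hypothetical monthly returns:

```python
from math import prod

# Monthly returns with a one-month break (None = no valid return for the gap)
returns = [0.012, 0.008, None, 0.015, -0.004]

# Chain only the measured periods, skipping the gap
linked = prod(1 + r for r in returns if r is not None) - 1
print(f"linked return across the gap: {linked:.2%}")
```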
Ideally, the manager has other, similar accounts against which to compare the account with the break, to determine whether there truly was an absence of trading. This is where the verifier can come in ... to provide an additional degree of analysis.
It would be nice to see something formal regarding this topic, but for now there is little guidance. Hopefully mine won't conflict with anything that comes in an official capacity.
Tuesday, December 1, 2009
Reflections on an old Chinese statistical joke
I've mentioned in the past the value I'm seeing in a design book (Measurement, Design and Analysis, by Pedhazur & Schmelkin) I'm reading for a course. The authors reference a book by H. Zeisel (Say it with figures) who "pointed out that, according to an old Chinese statistical joke, the rate of mortality among people who are visited by a doctor is much higher than among those who are not visited by a doctor."
Reflect for a moment on this joke. Once you get it, think about how this applies to the world of GIPS(R) verifications, when a non-random approach is used.
If the verifier selects only a certain group of composites to review (e.g., "marketed"), might it be quite likely that they will conform with the standards, especially if the firm being verified knows that there's a greater likelihood of only these being checked?
These non-random verifications can be likened to what Pedhazur & Schmelkin refer to as "quasi-experimental designs," "that suffer, to a greater or lesser extent, from serious shortcomings and pitfalls...[and] that utmost circumspection be exercised in the interpretation of the results, and in conclusions...based on them."
Perhaps I'm beginning to sound like a broken record (whatever a "record" is), but by continuing to periodically bring this subject up I am hopeful that the GIPS Verification Subcommittee will take action to come out in opposition to such practices as they are fraught with problems.
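The joke translates neatly into a little simulation: if the firm knows which composites get checked, errors concentrate where no one looks. Every rate below is invented purely for illustration:

```python
import random

random.seed(1)

# 100 hypothetical composites; the firm polishes the 10 it expects to be examined
kinds = ["marketed"] * 10 + ["non-marketed"] * 90
error_rate = {"marketed": 0.02, "non-marketed": 0.20}  # invented error rates
has_error = [random.random() < error_rate[k] for k in kinds]

# Verifier A reviews only the marketed composites
marketed_errors = sum(e for k, e in zip(kinds, has_error) if k == "marketed")
# Verifier B reviews a random sample of the same size drawn from all composites
sample = random.sample(range(len(kinds)), k=10)
sampled_errors = sum(has_error[i] for i in sample)

print("errors found, marketed-only review:", marketed_errors)
print("errors found, random-sample review:", sampled_errors)
```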
Monday, November 30, 2009
Liquidity risk
In a recent comment (see "Waltzing through the blogosphere," November 28, 2009), Steve Campisi wrote about the need to measure liquidity risk, citing the difficulties that the Yale endowment fund had. It just so happens that this month's Institutional Investor cover story deals with the huge drop in assets major colleges have seen in their endowments for the fiscal year ended this past June 30. The drops have been quite staggering, with the average loss roughly 20 percent.
I'm intrigued by the notion of liquidity risk, but I can see huge challenges with it, too. How does one properly assess this risk, especially when the variables that can impact it are so significant? It was probably a lot easier selling your Dubai World bonds a few weeks ago than it is today, right? During a flight to quality, liquidity dries up. If you don't have to sell an asset that is down, you can perhaps afford to wait around to see if it recovers its value. But there are times when you must sell, and thus you encounter liquidity risk and lower prices. Long-Term Capital Management WAS able to recover the value of just about all of its assets ... the problem was it couldn't afford to hang around, and had to be bailed out. Perhaps stress testing your portfolio is one way to determine what your risk is.
More details will help, and we hope to see this topic explored in greater detail.
Calculating returns ... the wrong way
We got a call recently from a firm that wants us to review their method to derive returns. They take their beginning market value plus cash flows to determine an average capital base; they then determine their account's appreciation for the period and calculate a simple average to derive the return. Extremely intuitive, yes?
But sadly wrong, too. (Hopefully you agree).
This isn't the first time we've encountered firms who employ a proprietary approach to derive returns. Not long ago during a conference lunch I was sitting next to an attendee who told me of the approach they had developed. Again, quite intuitive ... one that surely no one would find objectionable. Unfortunately, it, too, was invalid.
Long ago, before the publishing of performance books and various standards, individuals were often compelled to figure out a return methodology on their own. Fortunately, this is no longer the case. So, if you have such a formula in your shop, perhaps you should have it checked out.
Saturday, November 28, 2009
Canopy Financial ... another scam discovered too late
We are just learning of yet another firm that scammed millions from investors: Canopy Financial. They forged audit reports from KPMG, who apparently WASN'T their auditor. We haven't yet heard whether they also claimed GIPS(R) compliance, so have no way of knowing whether their alleged chicanery extended quite this far.
Canopy had been featured in BusinessWeek and managed to be ranked #12 on Inc's 500 List.
Even though GIPS verifications aren't "designed to detect fraud," they can provide an extra level of confidence in the manager's credibility. We aren't advocating the verification be extended to include fraud detection, but believe that verifiers can serve as an extra level of scrutiny.
Who would have thought to call KPMG to check to ensure that they had, in fact, audited Canopy? Some additional checking may be necessary going forward, especially with certain firms, to help catch these guys more quickly. Unfortunately, we can't control shameful behavior, but we have to figure out ways to catch it. In this case, their investment bank actually helped them raise tens of millions of dollars. Is someone asleep at the wheel? Unclear at this time, but we're sure more will be forthcoming. While this doesn't rise to the level of a Bernie Madoff scam, many were no doubt impacted by it.
Waltzing through the blogosphere
I guess it's not surprising that as a blogger, I occasionally wander around looking at other blogs ... I regularly visit about a dozen and am always looking for new ones to add to my list. Today I've visited several new sites and have picked up a few ideas.
As far as I know, my blog remains unique in its focus, addressing topics such as the GIPS(R) standards, rates of return, performance attribution, and risk on a regular basis.
To date we've attracted some 500 visitors, which I guess is pretty good as a start. Our newsletter has several thousand subscribers, so perhaps we have a bit to go to get to that level. Visitors have come from every continent and many different countries. And while only a few have signed up as "friends" so far, we know that many circle back regularly. The idea of being a "friend" is that you are notified when a new post is added.
I'm pleased that we've had a few comments regarding posts, as this suggests that some agree, and others disagree, with what has been offered.
I've now been blogging for six months and am open to your ideas, thoughts, suggestions. Feel free to contact me directly (DSpaulding@SpauldingGrp.com) or by posting a comment. Thanks!
Friday, November 27, 2009
Rates of return ... come again?
I stumbled upon a website today that provided the following brief explanation about returns:
"To evaluate the performance of a portfolio manager, you measure average portfolio returns. A rate of return (ROR) is a percentage that reflects the appreciation or depreciation in the value of a portfolio or asset"
We measure "average" returns? I don't think so. Average returns have been shown to have zero value. A classic example: Year 1, a +100% return; Year 2, a -50% return; average = (100 - 50) / 2 = 25 percent. Now, let's use some dollars: start with $100; at the end of year 1 you're at $200, and at the end of year 2 you're back at $100, meaning a zero percent return overall.
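A few lines of Python make the point plain, using the returns from the example above:

```python
returns = [1.00, -0.50]   # +100%, then -50%

arithmetic = sum(returns) / len(returns)   # the misleading "average"

cumulative = 1.0
for r in returns:
    cumulative *= (1 + r)                  # link the periods geometrically

print(f"Arithmetic average: {arithmetic:.0%}")       # 25%
print(f"Actual growth:      {cumulative - 1:.0%}")   # 0%, no gain at all
```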
In addition, while there are times when a return will reflect the appreciation or depreciation, once we introduce cash flows, forget about it! Recall that time-weighting can yield funny situations, like having a positive return but losing money.
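Here's a minimal, invented illustration of exactly that situation: a large inflow just before a down period produces a positive time-weighted return alongside a dollar loss.

```python
# Hypothetical two-period example; all figures are invented for illustration.
bmv = 100
p1 = 0.50        # +50% in period 1: $100 grows to $150
inflow = 1_000   # client adds $1,000 at the start of period 2
p2 = -0.10       # -10% in period 2: $1,150 falls to $1,035

twr = (1 + p1) * (1 + p2) - 1
ending = (bmv * (1 + p1) + inflow) * (1 + p2)

print(f"Time-weighted return: {twr:+.1%}")                 # +35.0%
print(f"Dollar gain/loss: {ending - bmv - inflow:+,.0f}")  # -65: money was lost
```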
I guess the lesson is: be careful about what you read on the Internet ... may not always be correct.
"To evaluate the performance of a portfolio manager, you measure average portfolio returns. A rate of return (ROR) is a percentage that reflects the appreciation or depreciation in the value of a portfolio or asset"
We measure "average" returns? I don't think so. Average returns have been shown to have zero value. A classic example: Year 1 +100 return, Year 2 - 50% return, average= (100 - 50) / 2 = 25 percent. Now, let's use some dollars: start with $100; at the end of year 1 you're at $200, then at the end of year 2 you're at $100, meaning zero percent return.
In addition, while there are times when a return will reflect the appreciation or depreciation, once we introduce cash flows, forget about it! Recall that time-weighting can yield funny situations, like having a positive return but losing money.
I guess the lesson is: be careful about what you read on the Internet ... may not always be correct.
Thursday, November 26, 2009
Happy Thanksgiving!!!
While we in the United States celebrate Thanksgiving today, everyone should no doubt be able to reflect on the things they give thanks for. My list is so very long. It includes our management and staff, our clients, our vendors, and colleagues. My wife and family are blessings in my life. The fact that I'm fortunate to live in a country that affords us so many freedoms is also something to give thanks for.
The fact that our firm managed to weather this severe market downturn offers us something to give thanks for. We know that this past year has been a challenging one for just about everyone, with many who suffered greatly. We're seeing a turnaround and look forward to much improvement in the coming year.
Perhaps this year I'm most thankful for the birth of our grandson, Brady, who will turn four months old this coming Tuesday.
We wish you and your family a blessed and wonderful Thanksgiving.
Wednesday, November 25, 2009
No fieldwork necessary
I am putting the finishing touches on this month's newsletter, and am including comments regarding the revelation that certain GIPS(R) verifiers feel that it's not necessary for them to conduct "fieldwork." That is, they feel that they can conduct a thorough verification from the comfort of their offices.
In our verification advertising piece we state that we don't conduct "remote verifications." Sorry, but we're not quite that good. We feel we NEED to be in our clients' offices to review their records, investigate issues that might arise, and engage in spontaneous dialogue, when necessary. We also enjoy meeting with our clients face-to-face, in order to enhance our relationship with them. We consider our clients friends, and look forward to these visits.
The quality of verifications has been questioned for as long as the work has been done. We wanted a process to verify the verifiers back in 1992. We at least need some guidance regarding this matter; hopefully it will be forthcoming.
Monday, November 23, 2009
More precise returns ... why did it take SO long???
I'm in Toronto for a couple of days teaching a performance class for a client. And, as often happens, a thought occurred to me that seemed worth sharing.
Peter Dietz introduced the "original Dietz formula" in 1966, which treats cash flows as occurring mid-period; this was adjusted several years later to day-weight the flows (what has become known as the "Modified Dietz Formula"). Why didn't he introduce this earlier?
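For the record, the original (mid-point) formula is simple enough that it could be computed by hand, which mattered at the time. A minimal sketch, with invented figures (the day-weighted Modified Dietz variant is sketched earlier in this document, under the "Calculating returns" post):

```python
def original_dietz(bmv, emv, net_flow):
    """Original (mid-point) Dietz: all flows are assumed to occur mid-period."""
    return (emv - bmv - net_flow) / (bmv + net_flow / 2)

# A $100,000 portfolio receives a $10,000 flow and ends the month at $115,000.
# The mid-point assumption weights the flow at 1/2, regardless of when it arrived:
print(f"{original_dietz(100_000, 115_000, 10_000):.2%}")  # 4.76%
```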
Also, in 1968 the Bank Administration Institute pointed out that the Exact Method (where we revalue portfolios whenever flows occur) is the best approach to time-weighting. And so why didn't they encourage firms to use it sooner?
In both cases the answer is the same: technology. In 1966 and 1968 we didn't have personal computers, spreadsheets, or even calculators. No doubt many folks calculated returns by hand (math by hand? perish the thought!). A mid-point method is pretty simple to do by hand, but adding day-weighting could complicate the process quite a bit.
And to use the Exact method would have required access to daily prices: in the mid '60s? Right! Forget about it!
Makes sense, yes?
Friday, November 20, 2009
Classics reviewed!
We're pleased to announce that our new publication, Classics in Investment Performance Measurement, was reviewed by Jerry Tempelman, CFA, in the November issue of the CFA Institute and CIPM program's Investment Performance Measurement Newsletter.
The book has been quite a success and has been well received by the industry. We're very pleased and grateful for Jerry's review and its appearance in the newsletter.
If you'd like more information about the book contact Patrick Fowler (PFowler@SpauldingGrp.com; 732-873-5700) or visit our webstore.
Thursday, November 19, 2009
TIA III has arrived
The Spaulding Group, Inc. is hosting its 3rd annual Trends in Attribution Conference. This year's program is at the Heldrich Hotel in New Brunswick, NJ. Our turnout is much better than one might have expected, given the economy ... we have more folks here this year than in 2008!!!
Our sponsors for this year's event are:
- The CIPM Program
- DST Global
- Eagle Investment Systems
- RIMES
- SS&C
- StatPro
- Wilshire Analytics.
Wednesday, November 18, 2009
Attribution ... without securities?!?!?!?
At last week's Performance Measurement Forum meeting in Rome we briefly discussed the issue of "pricing effects." That is, the effect that can arise when your portfolio's prices don't match what's in the index. Recall that we discussed this on November 4.
What I failed to mention earlier is this: what happens if you don't have the index's constituents? For example, what happens if your bond index doesn't give you its securities and their details (such as prices)?
A: you obviously won't know whether or not there IS a pricing effect.
B: if there is one, you're out of luck! Unless you can persuade the index provider to give up these details, you can't report on the effect. MEANING that your selection effect will be less than accurate. How "less than"? We just won't know. Sorry :-(
Can you still have attribution if you're missing security details? YES, of course! As long as you have the market values and weights for sectors or subsectors or other groupings you're interested in, you can run attribution! :-)
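To make that concrete, here's a minimal Brinson-style sketch at the sector level, assuming you have only sector weights and returns for the portfolio and the benchmark; all figures are hypothetical:

```python
# Sector-level attribution with no security detail. Weights and returns
# below are invented: (portfolio weight, portfolio return,
#                      benchmark weight, benchmark return).
sectors = {
    "Financials": (0.30, 0.05, 0.25, 0.04),
    "Technology": (0.50, 0.08, 0.55, 0.07),
    "Energy":     (0.20, 0.01, 0.20, 0.02),
}

rb_total = sum(wb * rb for _, _, wb, rb in sectors.values())  # benchmark return

for name, (wp, rp, wb, rb) in sectors.items():
    allocation  = (wp - wb) * (rb - rb_total)   # over/underweighting the sector
    selection   = wb * (rp - rb)                # picking within the sector
    interaction = (wp - wb) * (rp - rb)
    print(f"{name}: alloc {allocation:+.4f}, select {selection:+.4f}, "
          f"interaction {interaction:+.4f}")
```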
Tuesday, November 17, 2009
What about money weighting?
I got an e-mail from a retail client this week. That is, a retail client whose rep works for one of our brokerage clients. This hadn't happened before.
This individual's rep had passed him an issue of our newsletter to explain how the firm calculates its returns. This apparently didn't satisfy the end client, who sought clarity from me.
He indicated that he found the way his broker calculates returns too confusing, and advocated a "money weighted" approach: music to my ears! He went to the trouble of showing me the IRR formula, which I thought was kind of funny. In my response I indicated that I've often commented favorably about the IRR and money-weighting, and suggested he review other issues of the newsletter (no easy task, given that we're now in our 7th year and have quite a lot of issues out there, though we do provide summaries of each on the website).
I have found that when you show clients (institutional or retail) money-weighted returns, they feel that the returns are much more meaningful. Granted, time-weighting has its place, and shouldn't be replaced by money-weighting to represent how the manager did (save for private equity managers). Our crusade to get more firms to adopt money-weighting continues to gain new followers.
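For readers who haven't seen the IRR in practice, here's a minimal money-weighted return sketch that solves for the rate by bisection; the cash flows are invented for the example:

```python
# Convention: contributions are negative, withdrawals and the ending value
# are positive, and each flow carries its time in years from the start.
def npv(rate, cash_flows):
    return sum(cf / (1 + rate) ** t for t, cf in cash_flows)

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-8):
    """Bisect on the rate: NPV falls as the rate rises for these flows."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Invest $100,000 today, add $20,000 after six months, end the year at $130,000:
flows = [(0.0, -100_000), (0.5, -20_000), (1.0, 130_000)]
print(f"Money-weighted return: {irr(flows):.2%}")  # about 9%
```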
Monday, November 16, 2009
Value at risk ... "where's the value?"
At last week's Performance Measurement Forum meeting in Rome I mentioned how, during this most recent economic crisis, the Value at Risk metric demonstrated how little value it provides: which firm's use of this measure provided any degree of accuracy? And yet the measure clearly has its supporters.
I am wrapping up an article for the New York Society of Security Analysts' (NYSSA) journal on this topic: specifically, the benefits and shortcomings of VaR. In a nutshell, the measure on the surface seems like an excellent one, as it offers a very intuitive view of a portfolio's risk: "the most you can lose is $5 million..." Simple. Easy to grasp. And, it's a forward-looking measure, as opposed to one that says what the risk was. How better to report risk? There are, of course, two other bits of information that go along with such a report: "...over the next week at a 95% confidence level."
The addition of the time period only enhances VaR's value. BUT, the confidence level can be a bit confusing. What does it mean? Well, first, the 95% tells us that losses should stay within this figure 95% of the time; the missing 5% means that the loss can, in reality, be worse ... perhaps a lot worse.
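To illustrate, here's a minimal parametric VaR sketch; it assumes normally distributed weekly returns (the very assumption the critics attack), and all inputs are hypothetical:

```python
from statistics import NormalDist

portfolio_value = 100_000_000
weekly_mu = 0.001      # assumed mean weekly return (hypothetical)
weekly_sigma = 0.02    # assumed weekly volatility (hypothetical)
confidence = 0.95

z = NormalDist().inv_cdf(1 - confidence)   # roughly -1.645
var_95 = -(weekly_mu + z * weekly_sigma) * portfolio_value

# Reads: "we don't expect to lose more than this over the next week,
# 95% of the time." The remaining 5% of outcomes can be far worse.
print(f"1-week 95% VaR: ${var_95:,.0f}")
```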
I won't go into more detail on this topic here; you can "read all about it" when my article is published. Suffice it to say, the VaR critics have been having a field day since the most recent market downturn hit a year ago.
Friday, November 13, 2009
Does this help??? Perhaps, but it's still confusing.
Today at our European Forum meeting we learned that a Q&A has been issued regarding the Error Correction Guidance Statement. Recall that this GS includes a requirement to report material errors in a presentation for a period of 12 months; this was a major change from the initial draft. The GIPS 2010 disclosure draft included this provision, it was soundly criticized by the public, and it was therefore withdrawn. Consequently, we have a GS that goes into effect in 1 1/2 months and a standard that won't include it.
The Q&A is available on the GIPS website, and reads as follows:
Q: The GIPS Guidance Statement on Error Correction states that firms must disclose in a compliant presentation any changes resulting from a material error for at least 12 months following the correction of the presentation. Does this mean that we have to disclose that a material error occurred to prospective clients that we know have not received the erroneous presentation?
A: Firms are not required to disclose the material error in a compliant presentation that is provided to prospective clients that did not receive the erroneous presentation. However, for a minimum of 12 months following the correction of the presentation, if the firm is not able to determine whether a particular prospective client has received the materially erroneous presentation, then the prospective client must receive the corrected presentation containing disclosure of the material error. This may result in the preparation of two versions of the corrected compliant presentation to be used for a minimum of 12 months following the correction of the presentation.
This is a major change to what's in the GS and for now it doesn't appear that the GS is going to be revised. Consequently, firms have to be aware of this non-subtle change.
On the GIPS website there's also a list of what's been decided so far, that includes the following as it relates to this matter:
Error Correction – The EC decided to remove the requirement to disclose for 12 months any changes in a compliant presentation resulting from a material error. This requirement was drawn from the Error Correction Guidance Statement which goes into effect on 1 January 2010. The EC stated that it is not the intent to force firms to disclose errors to parties that never received the erroneous presentation. The EC committed to reviewing the Error Correction Guidance Statement as soon as possible and will issue any necessary clarifications. Until such time, firms are reminded that the Error Correction Guidance Statement will become effective in its current form on 1 January 2010.
I'm pleased to see that it isn't the EC's intent to "force firms to disclose errors to parties that never received the erroneous presentation." Unfortunately, with the GS that was published and a Q&A which may not get a lot of attention, there will no doubt be a fair amount of confusion. I would hope that the GS would be withdrawn completely until it can be recrafted and reintroduced, perhaps with public comment.
The Q&A is available on the GIPS website, and reads as follows:
The GIPS Guidance Statement on Error Correction states that firms must disclose in a compliant presentation any changes resulting from a material error for at least 12 months following the correction of the presentation. Does this mean that we have to disclose that a material error occurred to prospective clients that we know have not received the erroneous presentation? Firms are not required to disclose the material error in a compliant presentation that is provided to prospective clients that did not receive the erroneous presentation. However, for a minimum of 12 months following the correction of the presentation, if the firm is not able determine if a particular prospective client has received the materially erroneous presentation, then the prospective client must receive the corrected presentation containing disclosure of the material error. This may result in the preparation of two versions of the corrected compliant presentation to be used for a minimum of 12 months following the correction of the presentation.
This is a major change to what's in the GS and for now it doesn't appear that the GS is going to be revised. Consequently, firms have to be aware of this non-subtle change.
On the GIPS website there's also a list of what's been decided so far, that includes the following as it relates to this matter:
Error Correction – The EC decided to remove the requirement to disclose for 12 months any changes in a compliant presentation resulting from a material error. This requirement was drawn from the Error Correction Guidance Statement which goes into effect on 1 January 2010. The EC stated that it is not the intent to force firms to disclose errors to parties that never received the erroneous presentation. The EC committed to reviewing the Error Correction Guidance Statement as soon as possible and will issue any necessary clarifications. Until such time, firms are reminded that the Error Correction Guidance Statement will become effective in its current form on 1 January 2010.
I'm pleased to see that it isn't the EC's intent to "force firms to disclose errors to parties that never received the erroneous presentation." Unfortunately, with the GS that was published and a Q&A which may not get a lot of attention, there will no doubt be a fair amount of confusion. I would hope that the GS would be withdrawn completely until it can be recrafted and reintroduced, perhaps with public comment.
Thursday, November 12, 2009
Performance Measurement Forum's Autumn 2009 Meeting: Rome
Patrick Fowler and I are in Rome this week for the Autumn session for the European chapter of the group. This is the second time we've held the meeting in this beautiful and historic city (the first time was when Italy was still using lira) and the third in Italy (we held a meeting in Milan not long ago).
As always, we expect the sessions to be quite interesting and energetic. So much is going on and so much has happened since we last met in the Spring. This is the first meeting since we established the blog in June.
The Forum is a members-only group that's been in existence for 11 years. Several of our members have belonged since the beginning: in fact, our first member of the group came from Europe (we actually launched the North America chapter a bit earlier than the European one).
Because of the group's high degree of interaction, we limit the number of members. We are pleased that a couple of members from the United States will join us this week. Not only is the meeting a great excuse to visit this great city, but many also find the subtle differences between the two regions worth investigating.
I can't go into a great deal of detail regarding what we discuss, but will share some highlights over the coming days. Ciao!
To learn more about the forum, contact Patrick Fowler at 732-873-5700 or PFowler@SpauldingGrp.com.
Wednesday, November 11, 2009
Sampling ... what does it mean?
The GIPS(R) standards allow verifiers to use sampling to conduct their reviews. This makes perfect sense ... otherwise, the costs might be prohibitive if every account, for every time period, for every composite had to be checked. Also, sampling has long been an acceptable method to test hypotheses, evaluate opinions, and conduct research.
As Pedhazur & Schmelkin point out, "Sampling permeates nearly every facet of our lives...decisions, impressions, opinions, beliefs, and the like are based on partial information...limited observations, bits and pieces of information, are generally resorted to when forming impressions and drawing conclusions about people, groups, objects, events, and other aspects of our environment." They reference Samuel Johnson who said, "You don't have to eat the whole ox to know that the meat is tough." They also wrote that "Formal sampling is a process aimed at obtaining a representative portion of some whole, thereby affording valid inferences and generalizations to it."
But what DOES sampling mean in the world of GIPS verifications? Presumably, it's the selection of an adequate number of observations to yield enough information about the firm to allow the verifier to draw a reasonable conclusion regarding the firm's composite construction process. But what percentage is adequate? The standards offer no guidance.
Perhaps this is like the word "obscenity" and former U.S. Supreme Court Justice Potter Stewart's famous remark (one that my friend and associate Herb Chain often cited when we taught GIPS courses together): that he didn't know how to define it, but knew it when he saw it. This can be further likened to the word "materiality," which is difficult to pin down in much detail. But after a (very) brief review of opinions from other firms we quickly realized that there is some disparity regarding sampling and verification.
We are speaking with a client that has roughly 1,000 composites ... a very large number by anyone's scale, yes? What size would constitute a relevant sample? We did a "mini survey" and, perhaps not surprisingly, got a mix of responses. At the low end we have roughly 2% and at the high end 10-15 percent. We tend to lean more towards the 10-15% figure, with an expectation that we would look at the composites the firm markets, and also at additional composites which would be selected on a mix of "random" and "non-random" bases. In other words, each year we won't be looking at the same composites, but will vary many of them.
What if the verifier only looks at the firm's "marketed" composites? Some might think this makes sense, since it focuses on those composites that will most likely be presented to prospects. But if the firm knows that only these composites will be reviewed, what motivation is there to bother with the others? Such a selection is "biased," and hardly considered a fair way to evaluate a firm's compliance. A more appropriate approach would be to include "marketed" composites, but also select a random number of "non-marketed," in order to conduct a better, more conclusive and objective test.
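A minimal sketch of that mixed approach, with a hypothetical composite list, a hypothetical count of marketed composites, and a hypothetical 10% sample target:

```python
import random

# 1,000 hypothetical composites, of which we suppose 25 are actively marketed.
composites = [f"Composite {i}" for i in range(1, 1001)]
marketed = set(composites[:25])
non_marketed = [c for c in composites if c not in marketed]

# Always include the marketed composites, then fill the rest of a 10% sample
# with a random draw of non-marketed ones (a different draw each year).
target = int(0.10 * len(composites))
sample = sorted(marketed) + random.sample(non_marketed, target - len(marketed))
print(len(sample), "composites selected for this year's verification")
```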
It's important to remember that the GIPS standards do not make a distinction between "marketed" and "non-marketed" composites. In fact, the terms don't exist within the standards. Unfortunately, certain verifiers have, over the years, promoted the notion that firms need only be concerned with the "marketed" ones, in spite of being corrected on this multiple times. Such a posture only results in confusion and, unfortunately, many firms believing they're compliant when, in fact, they aren't: compliance is at the "firm" level, not at the "composite" or "marketed composite" levels.
RESOURCE:
Pedhazur, Elazar J., and Liora Pedhazur Schmelkin. Measurement, Design and Analysis. Psychology Press, 1991.
Saturday, November 7, 2009
Trends in Attribution ... near record attendance
In spite of this year's market downturn, The Spaulding Group's upcoming Trends in Attribution Symposium (TIA) will have a very good turnout. We're obviously quite pleased by this.
With roughly two weeks to go there's still time to be a part of this event. It's a single day that's dedicated to this important topic. We've assembled a great group of speakers to address a variety of issues. We will repeat our popular "Fast Attribution" session, which involves a group of panelists who touch on a host of topics in rapid succession.
To learn more, visit the conference website, contact Patrick Fowler (PFowler@SpauldingGrp.com) or Chris Spaulding (CSpaulding@SpauldingGrp.com), or call our offices (732-873-5700).
Friday, November 6, 2009
Cooking & GIPS Discretion
In explaining GIPS(R) discretion to a client, I hit upon a metaphor: cooking.
Let's say you go out to a fancy restaurant that is serving the "chef's special" that evening. It sounds quite appealing, except you'd like to alter it in some way. Perhaps instead of the fish being cooked medium, as the chef suggests, you want it rare or well done. Or perhaps you ask that a different sauce be used.
The waiter goes back to the chef with your request. The first option is that the chef refuses to do what you ask: if you won't eat it as he/she recommends, then you can go elsewhere. This is equivalent to the firm that refuses any restrictions: you take our investment strategy as we define it or find another manager.
Let's say that the chef says "fine, I will do what the customer asks, but this will not be representative of my special. Please don't suggest to others that they ask this patron how the meal tastes, because it has been altered and no longer represents either my skill or preferences." This is equivalent to a portfolio being considered "non-discretionary" for GIPS purposes.
What if the request is quite minor (instead of preparing the fish medium, please make it medium well)? The chef might be happy to do this and believe that the change is such that the customer will still benefit from his/her creativity and cooking skills. This is like a portfolio with restrictions that is deemed "discretionary" for GIPS purposes: the request is a minor one, such that the account will look very much like the other accounts in the composite.
There's one more variation. Let's say that the request is extreme enough that the meal will not represent the "Chef's special." However, what the customer has asked for sounds like a great idea. An example from my personal experience may help: one of the restaurants we frequent serves a pasta dish with shrimp; I ask that they substitute scallops. CLEARLY this has altered the meal enough that it won't represent the originally advertised item. However, the chef may decide that he/she likes this idea and adds it to the menu. This is what can happen when a client imposes a restriction that alters the account such that it won't represent the strategy, but ends up being a new product. The example I often use in our training classes is the case of "no sin stocks." Perhaps the result will not represent the strategy but might cause the firm to create a new composite, which is a variation of the first, such that the firm now has two somewhat similar composites. For example, "U.S. Equities with sin" and "U.S. Equities without sin." Okay, maybe you won't call them this, but you get the idea.
Discretion, from a GIPS perspective, can be confusing. We hope this helps!
Thursday, November 5, 2009
"New" Attribution Effects - II
Yesterday I discussed the "pricing" effect. Today I want to briefly touch on another "new" effect. By "new," I don't necessarily mean that it's a recent introduction; rather, it's new relative to the other standard effects that we often see reported on. I don't recall seeing anything in writing on these effects before, so I hope that this will prove helpful.
This second effect is the "trading" effect, which reflects the contribution to the excess return that results from trading activity during the period. This should not be confused with transaction cost measurement, which is a whole separate science, so to speak, that measures such things as "market impact" and the "volume-weighted average price" (VWAP, for short), and assesses how efficient the firm is in executing trades. The latter should, in theory, be part of attribution analysis, too, but I'm not aware that this is commonly done today.
By "trading" effect, we are taking into consideration the trades that take place during the period versus a situation where no trading was done. Here, we essentially are comparing the results from a holdings-based model (which ignores trades) with a transaction-based model (which incorporates them into the analysis).
The way I've heard to derive this effect is to run both a holdings-based and a transaction-based model and take the difference: the result is the trading effect.
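In code, that difference approach might look like the following minimal sketch; the holdings, prices, and trades are all invented, and real attribution models are considerably more involved:

```python
# Compute the period return twice: once pretending the opening positions were
# simply held (holdings-based), once reflecting actual trades (transaction-
# based). The difference is the trading effect. All figures are hypothetical.

start = {"A": (100, 10.0)}           # ticker: (shares, begin price)
end_px = {"A": 12.0, "B": 21.0}      # end-of-period prices

bmv = sum(sh * px for sh, px in start.values())   # $1,000

# Holdings-based: buy-and-hold the period's opening positions.
holdings_based = sum(sh * end_px[t] for t, (sh, px) in start.items()) / bmv - 1

# Transaction-based: mid-period we sold 50 A at $11 and bought 27.5 B at $20,
# leaving these final positions (the $550 proceeds funded the purchase).
final = {"A": 50, "B": 27.5}
transaction_based = sum(sh * end_px[t] for t, sh in final.items()) / bmv - 1

print(f"Trading effect: {transaction_based - holdings_based:+.2%}")  # -2.25%
```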
One might ask what the value of this effect is. Arguably, it has little value, as it (in my opinion) merely points out the difference between the two models, where one would expect the transaction-based results to be superior. Is it worth the time and cost to derive these results? For what purpose are they being employed? I'm aware that some folks do, in fact, calculate them; I'm just not sure I'd recommend doing so.
To me, the greater value would be to tie your transaction cost measurement analysis into attribution. As noted above, I don't believe this is common today, but it should be pursued. Something to discuss further, no doubt.
Wednesday, November 4, 2009
"New" attribution effects - I
We recently met with a client for whom we're designing a fixed income attribution system. During our meeting the subject of the "pricing" or "price difference" effect came up. This effect identifies the impact when the portfolio and benchmark have different prices for the same security. This is more likely to happen with bonds, because (a) they're less liquid and (b) for the most part they aren't exchange traded, so we probably won't have market prices for most of them.
The conundrum firms face when they encounter different prices is how to deal with them: should they reprice the benchmark with the portfolio's prices, or vice versa? Neither of these options is very good: if you reprice the benchmark, its return won't match what's published; if you reprice the portfolio, you're using prices you don't feel are correct, and you won't match the return that may be shown in other reports.
The "pricing effect" is a better way to deal with this as it provides visibility without altering returns. It may, however, raise questions which you'll have to be prepared to answer. And, it can only be done if you have the benchmark's constituents (if you don't, then you won't be able to identify pricing inconsistencies).
This topic deserves more detail than we can provide here, so I'll take it up in our newsletter. Stay tuned!
Tuesday, November 3, 2009
Announcing our question & answer protocol
Through this blog I recently received a question that wasn't related to a specific post. I opted not to respond because (a) I didn't know who it was from (it was sent anonymously) and (b) it didn't relate to the post it was attached to. I will be happy to respond to questions relating to a blog piece, whether they're sent anonymously or not.
And, I will be happy (usually) to respond to questions on topics not initiated by a blog post, provided the sender identifies themselves. Feel free to ask questions regarding GIPS(R), attribution, risk measurement, returns, etc. You can send these directly to my e-mail address (DSpaulding@SpauldingGrp.com). If we feel the question should be responded to in the blog or our newsletter, we will do so, and the questioner will be sent a response directly, too.
Hope this sounds reasonable. As always, your thoughts are invited.
Monday, November 2, 2009
Risk periodicity revisited
Our monthly newsletter went out last week, and we immediately received inquiries and comments about one of the topics: risk. I extended my recent blog remarks on this subject to shed further light on it, but there's still confusion and a need for greater clarity.
What are the potential time periods we could choose for risk statistics? Well, realistically, we have years, quarters, months, and days.
One of the key aspects of any risk measurement is to have a big enough sample to make the results valuable. Many risk statistics assume that the return distribution is normal. And while many have found that this is an invalid assumption, the basic rule as to the expected quantity of inputs still generally holds: 30. Most firms, I believe, will use 36 months, although many obviously use more or less, but for now let's assume we're going to use 36.
Okay, so let's consider our options again, starting with years. Is it realistic to expect many money management firms to have 36 years of returns? And, even if they did, would there be a lot of value in reviewing them to assess risk? Probably not. I don't know about you, but the Dave Spaulding of 36 years ago is quite different from today's model, and the same can probably be said for many firms, along with their portfolio managers and research staffs. Looking at a 36-year period might prove interesting, but not of much value when it comes to risk assessment.
Let's try quarters: 36 quarters equals nine years. Many firms can support this. We could derive the risk on a rolling 36-quarter basis, yes? But do people think in these terms? I wouldn't rule this out, but I doubt it would be very popular.
Next we have months. We only need three years to come up with 36 months. This is achievable by many firms and provides recent enough information to give greater confidence that the management hasn't changed too much in this time. We start to see "noise" appearing a bit more here, though. Noise, as our newsletter points out, can refer to a few things, including the inaccuracies that often exist in daily valuations and the excessive volatility that might appear, which is often smoothed out on a monthly basis. While one might still sense its presence, it isn't as sharp with months as it is with days. Think about some of the huge shifts we've seen in the market on a daily basis; by the time we get to month-end, they've often been counterbalanced or offset by huge swings going the other way. Is there value for the investor in including such movements in their analysis?
For daily, all we need is about two months of management to have 36 events, so this should be easy for everyone save the firm that just hung out its shingle. A concern with daily is that we may be looking too closely at the numbers; after all, aren't investors supposed to have long-term horizons? Can we be thinking long term if we're staring at days? Granted, I look at the market throughout the day myself, but I also have to confess that doing so can cause a certain degree of anxiety. The market often reacts to big swings from day to day, where some investors see big positive moves as opportunities for short-term profit, while others see big drops as chances to pick up issues they're interested in at a bargain price. The fact that a 150+ point up movement is followed by a 200+ point down movement reflects activity that will produce large volatility numbers but probably doesn't help much with risk evaluation. The chance of error creeping in is also much greater with daily data, partly because most firms don't reconcile daily; they reconcile monthly. Even benchmark providers often won't correct errors in daily data (they may not correct them in end-of-month data, either, but we at least hope that they would).
One must also take comparability into consideration. Morningstar, for example, uses monthly data. And while they shouldn't be considered the "last word in periodicity," they are arguably using an approach that they have found has the greatest value. The GIPS(R) Executive Committee has decided to require a 36-month standard deviation effective January 2011. And, I believe that most firms employ months in their calculations.
An interesting argument is to tie the risk statistics' periodicity to the frequency of client reporting: e.g., if you meet with a client quarterly, use quarterly periods. There may be a variety of reasons for quarterly sessions; to think that this means the client wants quarterly periods is, I think, a stretch. One could easily confirm the client's wishes here. If they DO want quarterly, then fine, provide it. But often they are looking to the manager to be the "expert" on the frequency to employ.
But, at the end of the day (as it happens to be as I complete this note), firms can choose whatever measure they feel best meets their needs. But beware: risk statistics built from 36 yearly, 36 monthly, and 36 daily observations are not comparable ... try it to confirm this statement. Nor, for that matter, are statistics for the same three-year period calculated with different periodicities. Not comparable. Sorry.
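A quick, self-contained illustration of this non-comparability, using randomly generated returns:

```python
import random
import statistics

random.seed(1)
# Roughly three years of hypothetical daily returns (36 months x 21 days).
daily = [random.gauss(0.0003, 0.01) for _ in range(756)]

# Compound the days into 36 monthly returns (21 trading days per "month").
monthly = []
for m in range(36):
    growth = 1.0
    for r in daily[m * 21:(m + 1) * 21]:
        growth *= (1 + r)
    monthly.append(growth - 1)

print(f"Daily std dev:   {statistics.stdev(daily):.4f}")
print(f"Monthly std dev: {statistics.stdev(monthly):.4f}")
# Same portfolio, same three years, very different numbers: they only become
# comparable if each is annualized consistently (e.g., sqrt(252) or sqrt(12)).
```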
What are the potential time periods we could choose for risk statistics? Well, being realistic we have years, quarters, months, and days.
One of the key aspects of any risk measurement is to have a big enough sample to make the results valuable. Many risk statistics assume that the return distribution is normal. And while many have found that this is an invalid assumption, the basic rule as to the expected quantity of inputs still generally holds: 30. Most firms, I believe, will use 36 months, although many obviously use more or less, but for now let's assume we're going to use 36.
Okay, so let's consider again our options: Years. Is it realistic to expect many money management firms to have 36 years of returns? And, even if they did, would there be a lot of value in reviewing them in trying to assess risk? Probably not. I don't know about you, but the Dave Spaulding of 36 years ago is quite different than today's model, and the same can probably be said for many firms, along with their portfolio managers and research staffs. Looking at a 36-year period might prove of interest but of not a lot of value when it comes to risk assessment.
Let's try quarters: 36 quarters equals nine years. Many firms can support this. We could derive the risk for a rolling 36-quarter basis, yes? But do people think in these terms? I wouldn't rule this out, but doubt if it would be very popular.
Next we have months. We only need three years to come up with 36 months. This is achievable by many firms and provides recent enough information to provide greater confidence that the management hasn't changed too much in this time. We start to see "noise" appearing a bit more here, though. Noise, as our newsletter points out, can refer to a few things, including inaccuracies which often exist in daily valuations and the excessive volatility which might appear, which is often smoothed out over a monthly basis. While one might still sense its presence, it isn't as sharp with months as it is with days. Think about some of the huge shifts we've seen in the market on a daily basis; by the time we get to month-end, they've often been counterbalanced or offset by huge swings going the other way. Is there value for the investor to include such movements in their analysis?
For daily all we need is about two months of management to have 36 events, so this should be easy for everyone save for the firm that just hung their shutter. A concern with daily is that we may be looking too close at the numbers, after all, aren't investors supposed to have long term horizons? Can we be thinking long term if we're staring at days? Granted, I look at the market throughout the day myself, but I also have to confess that doing so can cause a certain degree of anxiety. The market often reacts to big swings from day to day, where some investors see big positive moves as opportunities for short-term profit, while some see big drops as chances to get issues they're interested in at a bargain price. The fact that a 150+ up movement is followed by a 200+ point down movement reflects activity that will cause large volatility numbers but probably doesn't' help a lot for risk evaluation. The chance of error creeping in is much greater with daily data. Partly because most firms don't reconcile daily, they reconcile monthly. Even benchmark providers won't often correct errors on daily data (they may not correct it on end-of-month data, either, but we at least hope that they would).
One must also take into consideration comparability. Morningstar, for example, uses monthly data. And while they shouldn't be considered the "last word in periodicity," they are arguably using an approach that they have found has the greatest value. The GIPS (r) Executive Committee has decided to require a 36-month standard deviation effective January 2011. And, I believe that most firms employ months in their calculations.
An interesting argument is to tie the risk measure's periodicity to the frequency of client reporting: e.g., if you meet with a client quarterly, use quarterly periods. There may be a variety of reasons for quarterly sessions; to conclude that this means the client wants quarterly periods is, I think, a stretch. One could easily confirm the client's wishes here. If they DO want quarterly, then fine, provide it. But often they are looking to the manager to be the "expert" on the frequency to employ.
But at the end of the day (as it happens to be as I complete this note), firms can choose whatever measure they feel best meets their needs. But beware: risk statistics over a 36-year span aren't comparable if one calculation used years, another months, and another days ... try it to confirm this statement (a quick sketch follows). The same goes for a three-year span measured with different periodicities ... not comparable. Sorry.
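Here is one quick way to confirm it in Python, slicing the same simulated return stream into daily, monthly, and yearly periods and comparing the resulting standard deviations; all figures are hypothetical:

```python
import random
import statistics

# Three hypothetical years of daily returns (252 trading days per year)
random.seed(7)
daily = [random.gauss(0.0004, 0.01) for _ in range(252 * 3)]

def compound(chunk):
    """Geometrically link sub-period returns into a single period return."""
    growth = 1.0
    for r in chunk:
        growth *= 1 + r
    return growth - 1

# Slice the SAME stream into 36 months (21 days each) and 3 years
monthly = [compound(daily[i:i + 21]) for i in range(0, len(daily), 21)]
yearly = [compound(daily[i:i + 252]) for i in range(0, len(daily), 252)]

print(f"daily   stdev: {statistics.stdev(daily):.4%}")
print(f"monthly stdev: {statistics.stdev(monthly):.4%}")
print(f"yearly  stdev: {statistics.stdev(yearly):.4%}")  # only three observations!
```

The figures differ materially, and the yearly one rests on only three observations (far below the rule of thumb of 30), which is precisely the comparability problem.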
Thursday, October 29, 2009
Announcing a webinar to address the GIPS changes occurring in 2010
For the past year much of the focus regarding the GIPS(R) standards has been on "GIPS 2010," the next edition of the Global Investment Performance Standards, to be published in early 2010 (thus the "2010" in the name) and to go into effect in January 2011. But we must not lose sight of what is about to occur this coming January, most of which is part of "Gold GIPS," the current (2005) edition of the standards.
We've touched on these items in this Blog as well as in our firm's monthly newsletter. Of late we've received various inquiries regarding some of the specifics, and so we decided to host a webinar dedicated to this topic. And while you'll notice the webinar details elsewhere on the Blog, I thought it fitting to spend a moment explaining what we'll cover.
John Simpson and I will go over all the changes. We will spend additional time on the carve-out change, which will prohibit the use of cash allocations going forward, and on the new Error Correction Guidance, which requires firms to have a policy addressing this topic. Time will be available for questions.
This program will be held on Friday, November 20, from 12:00 (noon) to 2:00 PM EST. As with all of our webinars, it will be free to our verification clients and members of the Performance Measurement Forum; for others, a nominal fee will be charged. This is not only an educational event but also a way to ensure you're prepared for this next round of changes.
To learn more or to sign up, please contact Patrick Fowler (732-873-5700; PFowler@SpauldingGrp.com).
Wednesday, October 28, 2009
Celebrating 20 years
This won't be the only time you'll read about this, but it may be the first: The Spaulding Group has entered its 20th year in business! Why 20 years should be any more special than 19 or 21 is unclear, but society tends to pay extra attention to numbers ending in zero or five, so we'll do that, too! And, as you can imagine, we're quite proud to have achieved this milestone.
Our firm has grown in several ways over the past two decades and we are grateful for the many folks who have permitted us to serve them during this time. Many of these individuals have allowed our relationship to go beyond a business one, such that we can look upon them as friends. We are also appreciative of the many firms and individuals that support us. I am particularly thankful for our staff and management team.
During the past 20 years we've weathered two market downturns, with the more recent one only now coming to a close. We, like just about everyone else, are hoping that 2010 is much better than 2009 has been. And we look forward to continuing to serve our clients and the investment industry for many years to come.
Tuesday, October 27, 2009
Should the IRR be net or gross of fee?
I was asked this question yesterday and thought it worthy of comment.
First, to clarify, "net" means after the fee is removed, while "gross" means before the fee is deducted.
There are generally two reasons we show the IRR. First, in cases where the client controls the cash flows, we show it to provide the client with THEIR return; that is, the return that takes into consideration not only the manager's performance but also the impact of the client's cash flow decisions. Why wouldn't we want to show "net"? This would truly reflect how they are doing, after (a) the impact of the manager's decisions, (b) the impact of their cash flow decisions, and (c) the impact of the advisory fee. So I'd say use "net."
The second occasion is when the manager controls the cash flows. As noted in yesterday's blog, I argue for the IRR whenever this is the case, not just for private equity managers. Here we're showing the impact of the manager's entire range of decisions, from their management of the portfolio to their cash flow timing decisions. Net would reflect the entire impact, and so it would generally be the better return, I would say. However, when the manager is providing performance to prospects, both gross and net would be ideal. So in these cases, I'd say show both. (A sketch of the gross-versus-net arithmetic appears below.)
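Since the post turns on the gross-versus-net distinction, here is a minimal Python sketch of the arithmetic. The cash flows, the 1% fee treatment, and the bisection solver are all illustrative assumptions, not a prescribed methodology:

```python
def irr(cash_flows, lo=-0.99, hi=1.0, tol=1e-8):
    """Periodic internal rate of return, solved by bisection."""
    def npv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid  # NPV still positive: the rate must be higher
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical annual flows: initial contribution, client flows, ending value
gross_flows = [-1_000_000, 50_000, -200_000, 1_400_000]

# Crude net treatment for illustration: shave an assumed 1% annual advisory
# fee off the ending value (real fee accounting is, of course, more involved)
years = len(gross_flows) - 1
net_flows = gross_flows[:-1] + [gross_flows[-1] * (1 - 0.01) ** years]

print(f"gross IRR: {irr(gross_flows):.2%}")
print(f"net IRR:   {irr(net_flows):.2%}")
```

As expected, the net figure comes in below the gross one; the gap is, roughly speaking, the fee drag.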