Recall that effective 1 January 2010, GIPS (r) compliant firms must revalue portfolios for large cash flows. This may be an issue, a costly one, for fixed income managers who today only value bonds weekly or monthly. I specifically asked, during last week's GIPS conference, whether this rule will apply, and it will. While I'm not surprised, I could have seen some flexibility, given what happened with real estate last year.
Direct real estate managers now have to value quarterly. But this rule seemed to soften a bit, in that they can carry over the same value from the prior quarter ... at least that's my understanding. And so, if a fixed income manager has a policy whereby they value weekly, they may argue that the market isn't liquid and therefore the value stays roughly the same week-to-week (or at least, day-to-day); also, since many of the assets are matrix priced, they don't expect to see much of a change.
However, the word is that firms will have to revalue whenever a flow occurs; translation: figure you'll have to value daily. This means added costs. Further clarity on this may come, but I'd plan on more frequent revaluations unless we hear something "official" to the contrary.
Wednesday, September 30, 2009
Writing ... trying to improve
If you were to speak with virtually ANY of my English teachers you'd quickly learn how much I didn't like being in class ... I was a "math" guy early on and developed a firm belief that my general dislike for all things relating to writing was attributable to an irrefutable fact that there exists an inverse correlation between math and writing: that is, you're either good at one or the other, but not both. (talk about a run-on sentence!!!) It wasn't long after I left school that I realized that one HAS to write, and so I grudgingly gave in. Over time, I found that I liked to write. And, I realized that to become better, I had to (a) read regularly and (b) read about writing.
It's probably no surprise that in the blogosphere there are many who devote their time and effort to improve the writing of others. One such site is managed by Susan Weiner, who also happens to write in the investment space. I encourage you to visit her site and sign up so that you learn of new postings that you will no doubt find of interest. Whether you write regularly or not, you're sure to learn a few things.
Tuesday, September 29, 2009
Dealing with prospects below the minimum
Recall that the GIPS 2010 "exposure draft" proposed a requirement that firms be prohibited from giving a GIPS (R) composite presentation to prospects below their stated minimum. While this was characterized as a change from a "recommendation," this appeared to be much more given the language as it currently appears in the standards (see paragraph 3.B.3). Putting this semantic disagreement aside, the GIPS Executive Committee responded favorably to the overwhelming opposition to this proposed change (my count had more than half saying "no," with but a few saying "yes").
During last week's annual GIPS conference we were told that this will stay a "recommendation." Given the wording in the draft, it isn't clear to me if this means that the EXISTING wording will stay (that is, recommending that compliant firms "not market" to prospects below) or if it'll change to the previously proposed change, but with "must not" replaced with a recommendation (that is, to not SHOW a presentation to a prospect below the minimum). I'm fine either way, and suspect many, if not most, of those who opposed the original change will be fine with this, too.
Monday, September 28, 2009
Terminology: Plan Sponsor
Last week I had an "a ha" moment when, during a meeting, discussion turned to the meaning of the term "plan sponsor." I always considered it a broad expression that refers to institutional clients (e.g., pension funds, endowments, foundations). But at least one of the participants restricted its meaning to pension funds. Interestingly, a pension fund representative at the meeting shared my view!
Well, I have concluded that the term is a tad ambiguous, though the formal definition DOES seem to limit it to pension funds. I did a quick "Google search" and found two definitions:
"An employer who sets up a pension plan."
(http://www.investorwords.com/3711/plan_sponsor.html)
"While some plan sponsors will take matters into their own hands and handle all the investment decisions for retirement plans, most of them outsource the fiduciary management of the assets in the plan to one or more third parties. This way, multiple investment options run by different money managers may be offered to suit various risk profiles among the company's employees."
(http://www.investopedia.com/terms/p/plansponsor.asp)
It is evident that I'm not alone, and even those who know what the term is intended to mean often use it in a broader context, too. And so we can add this term to the growing list of words and expressions that often have multiple meanings.
Annual distribution to clients
One proposed change to the GIPS(r) standards that garnered a great deal of attention was to recommend that compliant firms annually send the appropriate composite presentation(s) to their existing clients. While this was merely a recommendation, many folks opposed it. In fact, by my count roughly one-third took the trouble to say "no," while no one said "yes" to this item.
The GIPS EC decided to move forward with this change. There appears to have been a belief that some of the opposition stemmed from the prior practice or suggestion that "recommendations" often become "requirements." This is no longer the case.
Because recommendations mean "best practice," only time will tell whether firms are virtually forced into delivering these reports. It will be interesting to see what costs result from this move.
Friday, September 25, 2009
Annualized standard deviation ...yes!
Okay, so the decision has been made: effective January 2011, GIPS compliant firms must report a 36-month annualized standard deviation, on an annual basis (that is, for all years starting with 2011). Further clarity is in order.
First, is standard deviation risk? There is hesitation to call it that, because a lot of folks don't consider it risk. But if it's not risk, why show it? Granted, not everyone thinks of volatility as being a risk measure, but most firms report that they use standard deviation as a risk measure. If volatility isn't risk, then is volatility such a valuable measure that we need to see it reported?
I think it's a mistake NOT to call standard deviation risk: the fact that not everyone agrees shouldn't be a reason not to. There is disagreement about much of the standards, but that doesn't stop these items from being included. It's even more confusing not to call standard deviation risk. Is someone going to be offended if we call it "risk"? I think not.
Is the Sharpe ratio a risk measure? Technically it's a risk-adjusted return. And, what risk measure is used to adjust the return? Yes, you're right: standard deviation. But if standard deviation isn't risk, then I guess the Sharpe ratio can't be a risk-adjusted measure. Who's going to tell Bill?
Okay, and so HOW do we calculate standard deviation? First, use 36 months ... not days, not quarters, not years: months! You will also be required to include the annualized return for each 36-month period. What if you don't have 36 months of composite returns? Then don't show this until you do (well, actually, you arguably can show a standard deviation for the period you have, but you're not required to until you reach 36 months).
Do we divide by "n" or "n-1" (where "n" is the number of months (i.e., 36))? No decision has been made yet, though it appears from comments at this week's conference that "n" might win out. We use "n" for the population and "n-1" for a sample; some might argue that it would be wrong to use "n," while others would argue that it's wrong to use "n-1." This is debatable and controversial, no doubt. And, no doubt more details will follow.
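To put some numbers to this, here's a quick sketch in Python of the calculation as I understand it; the 36 monthly returns are made up, the function names are mine (not anything official), and the annualization follows the common convention of multiplying the monthly figure by the square root of 12. It shows both the "n" and "n-1" flavors, along with the annualized return for the same window.

```python
import math

def annualized_std_dev(monthly_returns, use_population=True):
    """Annualized standard deviation from monthly returns.

    use_population=True divides by n; False divides by n-1 (sample).
    Annualization here multiplies the monthly figure by sqrt(12),
    which is the common convention (an assumption, not a GIPS formula).
    """
    n = len(monthly_returns)
    mean = sum(monthly_returns) / n
    sum_sq = sum((r - mean) ** 2 for r in monthly_returns)
    divisor = n if use_population else n - 1
    monthly_std = math.sqrt(sum_sq / divisor)
    return monthly_std * math.sqrt(12)

def annualized_return(monthly_returns):
    """Geometrically linked, annualized return over the same window."""
    growth = 1.0
    for r in monthly_returns:
        growth *= (1 + r)
    years = len(monthly_returns) / 12
    return growth ** (1 / years) - 1

# Hypothetical 36 monthly returns (decimals, e.g., 0.01 = 1%)
returns_36m = [0.01, -0.02, 0.015] * 12

print("Std dev (n):  ", annualized_std_dev(returns_36m, use_population=True))
print("Std dev (n-1):", annualized_std_dev(returns_36m, use_population=False))
print("36-month annualized return:", annualized_return(returns_36m))
```

With 36 observations the two divisors produce quite similar results, which is perhaps why the debate is more about principle (population versus sample) than about materially different numbers.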
Thursday, September 24, 2009
Oops!
One great benefit from conferences is the questions that are posed ... they sometimes identify things that weren't previously considered.
Effective 1 January 2010, firms must revalue portfolios for large cash flows. Okay, fine. AND, unless you revalue for ALL flows, you can't then revalue for SOME that fall below what you define as large. Example: you use Modified Dietz on a monthly basis; your definition of large is 10%; you get an 8% flow in, you cannot revalue. Okay, clear enough.
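Here's a rough sketch, in Python, of the logic I'm describing; the 10% threshold, the account values, and the flow are all hypothetical, and the Modified Dietz function is just one way to write it.

```python
def modified_dietz(begin_value, end_value, flows):
    """Modified Dietz return for one period.

    flows: list of (weight, amount), where weight is the fraction of the
    period the flow was available for investment.
    """
    net_flows = sum(amount for _, amount in flows)
    weighted_flows = sum(weight * amount for weight, amount in flows)
    return (end_value - begin_value - net_flows) / (begin_value + weighted_flows)

def is_large_flow(begin_value, flow_amount, threshold=0.10):
    """True if the flow meets the firm's (hypothetical) 10% 'large flow' definition."""
    return abs(flow_amount) / begin_value >= threshold

begin, end = 1_000_000, 1_095_000
flow = 80_000  # an 8% inflow, received 20 days into a 30-day month

if is_large_flow(begin, flow):
    print("Large flow: revalue the portfolio at the time of the flow.")
else:
    # Below the threshold: no revaluation; approximate with Modified Dietz
    weight = (30 - 20) / 30
    print("Modified Dietz return:", modified_dietz(begin, end, [(weight, flow)]))
```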
What happens if you have a composite that is made up of mutual funds and separate accounts, where the funds are valued daily and the separate accounts use monthly Modified Dietz? Looks like we have a problem :-(
Clearly, some guidance is needed. I would favor an amendment whereby, in cases where you have some accounts that you value daily and some that you value monthly, the "large" rule would only apply to those that are valued monthly. There may be some holes here ... we'll have to see.
Verification verifies compliance ... not!
During this week's GIPS(R) conference we learned that the GIPS executive committee was wrestling with the term "verification," which has been part of these standards since 1999 and part of the prior AIMR-PPS(R) since '93. The problem is that many don't understand what the term means and presume it verifies compliance. For some time I've had my own campaign to try to clear this up, and am pleased that the EC is going to do something about it.
Recall that the GIPS 2010 proposal draft suggested that the claim of compliance include a reference to the firm's status regarding verification: not verified, verified, verified but stale. I, as well as many others, preferred a simple "yes" or "no" approach, and the EC has decided to go that route. Hurrah! But you will also be required to include a definition of verification, which will indicate that it doesn't verify compliance or the composite's accuracy, but rather deals with the firm's policies and procedures as well as composite construction. These are good changes which should enhance your presentation.
A new change, not part of the original draft, will allow firms to indicate that a composite's been examined. This raises the importance of examinations, which some might feel is inappropriate. The idea is that examinations check the integrity of the information shown, so there is some interest and value in this. It will be interesting to see if as a result we see an increase in examinations, especially in Europe where these are rarely done.
Wednesday, September 23, 2009
Ratings, smatings
"When will they ever learn, when will they ever learn." Okay, so words from an anti-war song perhaps aren't the most fitting, but what the heck is going on?
Moody's is being accused of continuing to inflate ratings, this time by a former ratings analyst? Give me a break! Okay, so we know we have to be suspicious of the SEC, a few GIPS Verifiers, and occasionally some accounting firms, but we'd hope that the ratings folks would have "gotten the word"! What good are risk models when the ratings are flawed? A sad commentary, unfortunately.
Changes, changes, changes ... You win some, you lose some
The long anticipated announcement of the changes to the GIPS(R) standards is beginning to occur. At yesterday's opening session of this year's annual GIPS Conference, we learned of some of what is in store for us: these changes will occur in January 2011. Space doesn't permit me to go into much detail, but I'll highlight some of the changes over the next few days; more details will be provided in our monthly newsletter.
A key point has to do with WHEN the changes take place: what does January 2011 mean? It means that as you report any information in your presentations for periods after January 1, 2011, you must comply with the new requirements. If, for example, you only report annual information, then technically you won't have to include the changes until early 2012 when you will presumably be reporting the 2011 information. This means that the GIPS 2010 changes go into effect in 2011 but may not actually show up until 2012 ... confused?
We can't expect that all of what we want will occur, and so while I'm pleased with some of the changes I'm not with all, but I guess that's okay. Reviewing the 100+ letters that were submitted and deciding what to include is a daunting task that few would envy.
Today I'll only address one change: the planned requirement that firms disclose, for 12 months, any material errors they have corrected in their presentations. This was generally not welcomed by the investment community, and the GIPS Executive Committee heard this "loud and clear," and so dropped it. BUT, the Error Correction Guidance Statement still stands, for now, which makes this a requirement effective 1 January 2010! So, what does this mean? Unclear right now: we understand that some changes may occur to the "GS," but until they do, you should be prepared to make such announcements in your materials.
Monday, September 21, 2009
GIPS Annual Conference
The GIPS Annual Conference is this week in Boston. Today, I'm teaching a class on the GIPS(r) standards for the CFA Institute, and then will attend a meeting of the USIPC, the group that represents the U.S. for GIPS.
As for teaching these classes, I've been doing this for more than 10 years and enjoy it quite a bit. I was pleased to have been chosen and to have been able to continue to do this for so long. It's kind of funny that I often end up teaching some of our competitors, but that's okay.
Last week, the GIPS Executive Committee held a meeting in Singapore where they began to make some decisions on what will go into GIPS 2010. We expect to learn of these items this week and we'll include a summary in our September newsletter. In addition, I may include some comments in our Blog.
If you have any questions, please let me know!
Saturday, September 19, 2009
Words and their meanings
"Definitions would be good things
if we did not use words to make them"
J. Rousseau
Our industry is replete with words, expressions, and abbreviations that conjure up different meanings. My son, Chris, and I were on the phone with a prospective consulting client who said he'd arrange for us to meet the firm's CIO. Knowing that the person we were speaking with was from the "IT" (information technology) side of the business, I wanted to confirm what he meant by CIO: Chief Investment Officer or Chief Information Officer. As I suspected, it was the latter.
When we use the term "alpha," what do we mean? While some may be hesitant to ask, I generally do (though often the user of the word isn't sure themselves). It can mean excess return, as in Portfolio Return minus Benchmark Return, or Jensen's alpha, which is arguably a risk-adjusted return measure that takes into consideration the portfolio's beta (that is, the excess return beyond what beta alone would have predicted).
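To make the distinction concrete, here's a small sketch with made-up numbers; the function names are mine, not standard terminology.

```python
def simple_excess_return(portfolio_return, benchmark_return):
    """'Alpha' as plain excess return: portfolio minus benchmark."""
    return portfolio_return - benchmark_return

def jensens_alpha(portfolio_return, risk_free_rate, benchmark_return, beta):
    """Jensen's alpha: return beyond what beta alone would have predicted."""
    predicted = risk_free_rate + beta * (benchmark_return - risk_free_rate)
    return portfolio_return - predicted

# Hypothetical annual figures (decimals)
rp, rb, rf, beta = 0.12, 0.10, 0.03, 1.2

print("Excess return 'alpha':", simple_excess_return(rp, rb))     # 0.02
print("Jensen's alpha:       ", jensens_alpha(rp, rf, rb, beta))  # 0.12 - (0.03 + 1.2*0.07) = 0.006
```

Same portfolio, same benchmark, two very different "alphas" (2.0% versus 0.6%), which is exactly why it pays to ask.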
Even "excess return" can be confusing, as some have used the term to mean, as just noted, Portfolio Return minus Benchmark Return, while others use it to mean Portfolio Return minus the Risk-free Return, which also goes by the name of Equity Risk Premium.
It is therefore helpful to specify the stipulative definition of the word or expression, which means the definition that specifies the meaning assigned by the person using it. Sometimes we may not know that there might be ambiguity, but there often is. I applaud the CFA Institute's inclusion of a glossary in the GIPS(r) standards as a way to clarify. As a past member of the committee that worked on the current version, I'm quite familiar with the challenge of trying to decide if we use "should" or "must," for example. And when dealing with individuals whose first language isn't English, one must be careful to avoid even more confusion.
But don't be afraid: when in doubt, ask!
Note: source for quote & "stipulative definition" concept: Measurement, Design and Analysis, by Pedhazur & Schmelkin
Friday, September 18, 2009
Risk without consequences
President Obama spoke this week regarding the improving mood on Wall Street and offered that “Those on Wall Street cannot resume taking risks without regard for consequences, and expect that next time, American taxpayers will be there to break their fall.” I'm not quite sure what this means. Risk always comes with consequences. Granted, having the American taxpayer bail them out isn't necessarily something one should take for granted, but if the United States is going to continue to grow, risks must be taken, yes?
In the current issue of The New York Times, Andrew Sorkin speaks about risk and offered that "Perhaps the greatest measure of risk — and in this context, let’s define it as systemic risk to the entire system — is one word: leverage." Perhaps I'm thinking too semantically, but I wouldn't characterize "leverage" as a "measure of risk," let alone the "greatest measure of risk."
And while I challenge some of what Sorkin offers, his thoughts on VaR are, as the Brits say, "spot on": "VAR, by the way, is a horrible way to measure risk, as has been said again and again by economists, because it calculates the risk for only 99 percent of the time. As Mr. Johnson [a professor at MIT's Sloan School of Management] says, “VAR misses everything that matters when it matters.” Indeed, the VAR metrics obviously missed what led to what now has been dubbed the Great Recession."
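For anyone who wants to see where that "99 percent" shows up, here's a bare-bones sketch of a historical-simulation VaR in Python; the return series and confidence level are invented, and this is only one of several ways VaR is computed. Notice that the severe down days beyond the cutoff never appear in the number at all.

```python
def historical_var(returns, confidence=0.99):
    """Historical-simulation VaR: the loss not exceeded at the given confidence.

    Says nothing about how bad the losses beyond the cutoff can be.
    """
    losses = sorted(-r for r in returns)          # losses as positive numbers, ascending
    cutoff_index = int(confidence * len(losses))  # e.g., the 99th percentile of losses
    cutoff_index = min(cutoff_index, len(losses) - 1)
    return losses[cutoff_index]

# Hypothetical daily returns: mostly quiet, with a couple of severe down days
daily_returns = [0.001, -0.002, 0.0005, -0.001] * 60 + [-0.08, -0.12]

print("1-day 99% VaR:", historical_var(daily_returns, 0.99))
```

In this made-up series the 99% VaR comes out to a sleepy 0.2%, while the 8% and 12% losses sit quietly in the ignored tail, which is Mr. Johnson's point in a nutshell.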
It's good to see that VaR continues to be scrutinized, even while it continues to be measured and reported: I guess some folks feel that any number is better than no number, but understanding the number, what it means, and its reliability are probably critically important aspects of any risk assessment. I will be addressing the pros and cons of risk measures in an upcoming NYSSA journal article.
Thursday, September 17, 2009
Total return vs. NAV return
Just got a question from a client that I think is worthy of posting.
First, is "total return" the "industry standard"? Before answering this, what IS "total return"? Total return simply means that the return includes income. It used to be fairly common practice to show income return, principal return, and total return. This is less common today. The GIPS(R) standards used to state that "total return, with income, is required," which was essentially redundant, since total return automatically includes income ... this wording has changed! (hurrah!)
Yes, I'd say that "total return" IS the "industry standard."
Second, what about the NAV return? Is this a valid alternative? Well, the NAV return IS, first of all, time-weighted; second, it includes income, since the NAV itself reflects income. Therefore the NAV return is an equivalent approach.
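Here's a simple sketch of why the two line up; the NAV series and distributions are made up, and I'm assuming any income distributions are reinvested at the period-end NAV.

```python
def nav_return(navs, distributions=None):
    """Time-weighted total return from a NAV series.

    navs: NAV per share at each valuation date (first entry is the start).
    distributions: optional per-period income distributions per share,
    assumed reinvested at the period-end NAV.
    """
    if distributions is None:
        distributions = [0.0] * (len(navs) - 1)
    growth = 1.0
    for i in range(1, len(navs)):
        growth *= (navs[i] + distributions[i - 1]) / navs[i - 1]
    return growth - 1

# Hypothetical month-end NAVs and income distributions per share
navs = [10.00, 10.20, 10.05, 10.40]
income = [0.00, 0.03, 0.00]

print("NAV-based total return:", nav_return(navs, income))
```

Because the NAV already reflects accrued income, linking the period-over-period NAV changes gives you a time-weighted total return, which is the equivalence described above.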
Wednesday, September 16, 2009
A little hyperbole goes a long way
I very much enjoyed Nassim Taleb's The Black Swan, and so it's probably no surprise that I'm also enjoying Pablo Triana's Lecturing Birds on Flying: Can Mathematical Theories Destroy the Financial Markets? Taleb even provides the foreword to this book.
Both books question the use of models for risk and the concept of ex ante measures. Although I'm not far along in Triana's book yet, it's evident that he very much takes to task many of the models, such as the famed Black-Scholes-Merton option pricing model and Value at Risk. He looks specifically at our most recent market downturn, pointing to the failure of these models to provide their users with any advance notice, so that they could avoid the losses that ensued.
Taleb is not known for his shyness; consequently, his fairly long foreword is replete with much negative commentary on the academic community and the failure of models; even worse, how the models themselves may have contributed to problems that arose. Interestingly, he actually offers a disagreement with Triana on one minor point, which is a tad unusual in forewords, which usually sing the praises of the book for which they were written.
Taleb's suggestion that we encourage our friends to "resign from the various associations, such as the International Association of Financial Engineers and the CFA Institute" may be a tad excessive, even for him. Our alleged need to "shame members, humiliate them" is, I believe, inappropriate. Still, I would encourage you to read this book and acquaint yourself with the arguments that Triana offers.
As one who often challenges conventional wisdom, it's no surprise that I made this purchase. My questioning of the value of Value at Risk at our 2007 Performance Measurement, Attribution and Risk (PMAR) conference was just one of many such examples. As I move forward through this book I will provide additional insights.
Note: I recently penned another article for the NYSSA journal on VaR, which will appear shortly. I will provide a link once it's done. A follow up article is also planned, so this subject will be getting a lot of attention from me in the coming months.
Monday, September 14, 2009
Why I don't like mid-day cash flows
I'm reviewing a client's calculation process and learned that they previously employed a mid-day approach: they calculate returns daily but simply assume that flows occur mid-day. I have never liked mid-day treatment for cash flows. Why?
As you may recall, GIPS(R) mandates that effective 1 January 2010 compliant firms revalue portfolios at the time of large cash flows ... who is going to revalue their portfolio mid-day? Firms revalue at the start and/or end of the day, but not mid-day. Using the start or end of day approach for cash flows yields an EXACT return. However, if you use the mid-day approach, you're actually using an approximation approach to derive your return. One might argue that mid-day is good if you can't decide between start and end ... sorry, but I don't buy it.
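A quick sketch with invented numbers shows the difference for a single day: the start- and end-of-day treatments work directly from the day's valuations, while mid-day is a half-weight (Modified Dietz-style) approximation.

```python
def daily_return(begin_value, end_value, flow, timing="start"):
    """Single-day return with one external flow.

    timing: 'start' (flow available for the whole day), 'end' (flow not
    invested during the day), or 'mid' (half-weight approximation).
    """
    if timing == "start":
        return (end_value - begin_value - flow) / (begin_value + flow)
    if timing == "end":
        return (end_value - flow - begin_value) / begin_value
    if timing == "mid":
        return (end_value - begin_value - flow) / (begin_value + 0.5 * flow)
    raise ValueError("timing must be 'start', 'end', or 'mid'")

begin, end, flow = 1_000_000, 1_115_000, 100_000  # hypothetical 10% inflow

for timing in ("start", "end", "mid"):
    print(timing, round(daily_return(begin, end, flow, timing), 6))
```

With these made-up numbers the start and end treatments bracket the truth (roughly 1.36% and 1.50%), and mid-day simply splits the difference, which is why I call it an approximation.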
Most firms use either start or end of day weighting for cash flows: to me this is the way to go. In fact, we're seeing an increasing number move to start for in-flows and end for outflows, which seems to work best.
Friday, September 11, 2009
September 11 ... eight years later
I think it's fitting, on this eighth anniversary of 9/11, to pause and reflect on that horrible day. There are a few events in our lives that we recall a lot of details about: some are especially wonderful, such as births and weddings, while others are tragic and so stunning that they make a lasting impression upon us. Many of us, for example, recall where we were when JFK died. Well, 9/11 is another such event.
Oddly, we still hear how some believe that the whole episode was created by our federal government, or how the Bush administration knew it was going to occur but didn't take action because they thought it would be a way to take us into war. Such conspiracy thinkers degrade the very fabric of our nation. It's unfortunate that leading up to this anniversary was the release of the man behind the Lockerbie crash that resulted in the death of 259 innocent people -- another tragedy that impacted countless lives.
I will forever be touched by the events of 9/11. While watching it unfold from a hotel room in London, I felt that surely this must be a dream, for such a thing couldn't really happen, could it? My son, Chris, and I were in downtown NYC recently and rode the PATH train from Jersey City to the WTC station. The train always pauses as it approaches the stop, which allows me to say a silent prayer for those victims. To me, this space will forever be hallowed ground.
Tonight, my wife and I will participate in an annual community remembrance at our town's 9/11 memorial. Like many NJ towns, ours actively solicited contributions from individuals and businesses to formally mark this event. It's something we're quite proud of.
As on that day eight years ago, I offer a prayer for the victims, their family, and our nation.
Thursday, September 10, 2009
Science: fact or fiction
I understand that there is no universally agreed upon definition of what science is, and that's probably a good thing. But I suspect most people believe that science results in facts. We sometimes hear something being "art and not science," suggesting that art is perhaps a bit more "squishy," where we may be less likely to agree upon single answers, whereas science should result in objective results that yield universal agreement.
Yesterday, my son Chris and I were returning from a trip to Delaware when I asked him whether glass was a solid or liquid. He immediately responded "solid," whereupon I told him that when I was in my high school physics class some 40+ years ago I was told that glass is a liquid because it "flows." Well, he immediately turned to his trusty Blackberry to discover that this previously oft cited belief was wrong and that glass is classified as a solid. Okay, so science 40 years ago said "liquid," while today it's "solid." What happened? Definitions, rules, research?
I then mentioned that when I was in school Pluto was the 9th planet, but now is no longer considered a planet at all. Again, his Blackberry provided a conflicting answer, stating that although there was consideration to remove Pluto from the ranks of planets, the decision apparently was reversed.
And so it looks like science is becoming more art itself!
If scientists can change their minds about glass, Pluto, and much more, shouldn't we in performance measurement be prepared to change our minds, too? Unfortunately, many hold certain rules as almost sacrosanct, and refuse to consider that perhaps long-standing traditions (such as the universal application of time-weighting) should be reconsidered. Fortunately, there are enough of us pushing for change that we are seeing some movement ... slow, yes, but movement nonetheless.
p.s., did you know that yesterday, at 9 seconds after 9 minutes after 9 o'clock, it was 09:09:09 09/09/09? Won't happen again for 100 years.
Tuesday, September 8, 2009
"New" attribution effects
From an equity perspective most practitioners are familiar with the allocation, selection, and interaction effects, while from a fixed income perspective they're familiar with the treasury (aka duration or yield curve management), spread, income, parallel, non-parallel, twist, shift, and again, selection effects. But there are other effects which are sometimes considered.
What happens, for example, when your index prices securities differently than you do? Say you have a bond priced at 99.25 while the index has it at 99.00. If we ignore these differences, then your selection effect might look better when in reality the only thing you did was price your bonds higher (and obviously the opposite can occur, too). And so, what do you do? Well, you could reprice the index with your prices, but then you'd end up with a different result for the index. Or, you could reprice your portfolio with the index's prices, but now your market value and return would change. An alternative would be to break out this difference and report it as a "price effect." While the math behind this hasn't been clearly stated, and there are no doubt various approaches, it's something to consider and explore with your software vendor or development team, especially if you're in the world of bonds or global equities, where there is a greater possibility of having pricing differences.
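Here's one simple sketch of how such a price effect might be isolated; this isn't a standard formula, just an illustration with made-up numbers: value the position at both prices and report the gap separately rather than letting it leak into selection.

```python
def price_effect(quantity, portfolio_price, index_price, begin_value):
    """Contribution difference caused solely by the pricing discrepancy.

    Values the same position at the portfolio's price and at the index's
    price; the gap, scaled by beginning portfolio value, is reported on
    its own line rather than inflating (or deflating) the selection effect.
    """
    return quantity * (portfolio_price - index_price) / begin_value

# Hypothetical position: 1,000 bonds priced at 99.25 by the manager, 99.00 by the index
effect = price_effect(1_000, 99.25, 99.00, begin_value=1_000_000)
print(f"Price effect: {effect:.4%}")  # 0.0250% of the portfolio
```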
Okay, so what if you decide to invest in a sector that isn't in the benchmark? How do you handle this? If you use a standard "Brinson" model (i.e., Brinson-Hood-Beebower or Brinson-Fachler) as it's written, the results may be somewhat nonsensical. There are ways to adjust these models so that the results are better, by incorporating the sector you're in into the index. Personally, I prefer breaking this out completely and showing the results as "off" or "out of benchmark" results, to give greater emphasis to the fact that you decided to invest in a sector which isn't in the benchmark. There are alternatives here, too, as to how you'd show these results.
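As a rough illustration (all weights and returns below are invented, and this is only one of the possible treatments), here's a Brinson-Fachler sketch that flags sectors missing from the benchmark so their contribution can be broken out and labeled separately rather than buried in allocation and interaction.

```python
def brinson_fachler(portfolio, benchmark, benchmark_total_return):
    """Brinson-Fachler allocation, selection, and interaction by sector.

    portfolio / benchmark: dicts of sector -> (weight, return).
    Sectors missing from the benchmark are treated as weight 0, return 0,
    and flagged so their contribution can be reported as 'out of benchmark'.
    """
    results = {}
    for sector, (wp, rp) in portfolio.items():
        wb, rb = benchmark.get(sector, (0.0, 0.0))
        results[sector] = {
            "allocation": (wp - wb) * (rb - benchmark_total_return),
            "selection": wb * (rp - rb),
            "interaction": (wp - wb) * (rp - rb),
            "out_of_benchmark": sector not in benchmark,
        }
    return results

# Hypothetical sector weights and returns; 'Crypto' is not in the benchmark
portfolio = {"Tech": (0.50, 0.08), "Energy": (0.30, 0.02), "Crypto": (0.20, 0.15)}
benchmark = {"Tech": (0.60, 0.06), "Energy": (0.40, 0.03)}
bench_total = 0.60 * 0.06 + 0.40 * 0.03  # 0.048

for sector, effects in brinson_fachler(portfolio, benchmark, bench_total).items():
    print(sector, effects)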
Some like to see a "trading effect," which captures your trading activity during the period. One approach to derive this is to calculate your effects using both a holdings-based and a transaction-based approach, with the difference being the trading effect. Again, there are probably other approaches.
Attribution remains a dynamic area. That's one reason we conduct our one day symposium each Fall to specifically deal with this important topic. To learn more please contact Patrick Fowler (PFowler@SpauldingGrp.com) or Chris Spaulding (CSpaulding@SpauldingGrp.com).
Saturday, September 5, 2009
The SEC & Madoff
This weekend's WSJ provides even more details on how the SEC failed to uncover Madoff's antics, in spite of the numerous warnings that were provided to them. One especially nice addition is that they point us to the SEC's report, appropriately titled "Investigation of Failure of the SEC to Uncover Bernard Madoff's Ponzi Scheme." This 477-page document will no doubt be a "must read" for many in our industry: some light reading for the Labor Day weekend perhaps?
Friday, September 4, 2009
Why is risk SO hard?
You might begin by asking "who said risk is hard?" Well, for starters, I did! And why would I say such a horrid thing?
Let's begin with a simple question: what is risk? That is, how would YOU define risk? The answers tend to vary, but include "volatility," "uncertainty," and "potential loss." Volatility, as a definition, dates back to the late 1950s, if not earlier. Uncertainty was taken up as long ago as the early part of the 20th century. Most dictionaries define risk as the possibility of losing something. The most commonly cited definition today, however, relates to the inability to meet an objective. Space doesn't permit me to go into much more detail on these alternative definitions right now, but we may take this topic up at greater length in our newsletter. But you should be able to see right away that we have at least four very different views on what risk is! How's that for a way to make this topic complicated before we've even gotten into the measurements?
Okay, so let's discuss the measurements for a bit. Most risk measures capture volatility, which appears to be the most criticized definition. Let's say that you agree that the inability to meet your objective is the best definition -- okay, so how do standard deviation, tracking error, or beta relate to this definition? To paraphrase the CIO of Yale University's endowment, David Swensen, quantitative measures of risk leave a lot to be desired.
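For reference, here's a bare-bones sketch of the three volatility-based measures just mentioned, using made-up monthly returns; note that none of them says anything directly about whether an investor's objective will be met.

```python
import math

def std_dev(returns):
    """Sample standard deviation of a return series."""
    mean = sum(returns) / len(returns)
    return math.sqrt(sum((r - mean) ** 2 for r in returns) / (len(returns) - 1))

def tracking_error(portfolio, benchmark):
    """Standard deviation of the excess (portfolio minus benchmark) returns."""
    excess = [p - b for p, b in zip(portfolio, benchmark)]
    return std_dev(excess)

def beta(portfolio, benchmark):
    """Covariance of portfolio with benchmark divided by benchmark variance."""
    mp = sum(portfolio) / len(portfolio)
    mb = sum(benchmark) / len(benchmark)
    cov = sum((p - mp) * (b - mb) for p, b in zip(portfolio, benchmark)) / (len(portfolio) - 1)
    var_b = sum((b - mb) ** 2 for b in benchmark) / (len(benchmark) - 1)
    return cov / var_b

# Hypothetical monthly returns
port = [0.02, -0.01, 0.03, 0.01, -0.02, 0.015]
bench = [0.015, -0.005, 0.025, 0.012, -0.018, 0.010]

print("Std dev:       ", std_dev(port))
print("Tracking error:", tracking_error(port, bench))
print("Beta:          ", beta(port, bench))
```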
Let's consider one of the assumptions that many risk measures have: that returns are normally distributed. Countless research efforts have confirmed that this isn't the case; what does that say about these measures?
And next, consider the issue of ex ante risk ... that is, forward looking risk. Most such measures base their predictions on what has occurred in the past. But depending on how you view the past and how far back you go, you can get very different results. And, who's to say that the past is a good predictor of the future? What in our past would have predicted the most recent shock we had to the market and economy?
Therefore, I stand by my original statement that risk is hard. Quite a challenge. But, a critically important part of our business. In spite of its difficulties, we have to do the best we can to get our arms around the risks that have been taken as well as the risks we're taking now. A single risk measure won't do: we need as many as we can get to try to understand the risks that are being taken.
Thursday, September 3, 2009
Say it ain't so, Joe...
The more we hear, the more shocked we become: I responded to the initial revelations that the SEC had failed to properly check out Bernie Madoff based on the following:
- SEC is told that Bernie's numbers are too good to be true
- SEC's response: "come on, Bernie? Bernie's been around for years. He's a stand up guy. Anyone but Bernie! With limited resources we can only focus on serious, legitimate complaints. Why waste our time!"
As the WSJ reported today, “The Securities and Exchange Commission botched numerous opportunities to uncover Bernard Madoff's Ponzi scheme, in part because of an inexperienced staff and delays in examinations, said an SEC inspector general report.” In other words, there was NO excuse for the SEC's failure to properly investigate what had been reported.
Sad but probably true: had it not been for last year's shock to the market, Bernie would probably still be at work stealing his clients' money.
Wednesday, September 2, 2009
Global outreach
A few weeks ago I began monitoring traffic on the blog. And while I can't tell who is logging in, I can tell where they're connecting from. And so far we've been accessed from at least 23 countries:
United States, Switzerland, United Kingdom, Canada, Brazil, Australia, India, Germany, South Korea, Mexico, Philippines, South Africa, Sweden, Pakistan, Singapore, Guernsey, Bermuda, Japan, El Salvador, United Arab Emirates, Israel, Bangladesh, Thailand.
Not bad! Interesting group too, yes? Glad we're drawing so much attention.
Who can you trust?
The comic Dilbert has recently had a few strips that very much relate to our industry:
- Yesterday's, for example, dealt with establishing industry standards off of Dilbert's company's specs (knowing how good his company is, we can only imagine what such a standard would look like)
- On August 22, Dogbert struck on an idea to start 10 mutual funds with randomly selected stocks; they would market the one that did the best. (Sound familiar?)
Of course, it isn't just the comics that raise the question of who we can trust. Consider:
- Arthur Andersen's questionable audits of some of their clients' books (think Enron)
- the SEC's failure to respond to suggestions that Bernie Madoff's returns were most likely contrived
- concerns with rating agencies who apparently didn't see all the risks in some of the companies they reviewed
- a report earlier this year in Fundfire about a verifier who verified a hedge fund that turned out to be a mini-Madoff (i.e., a Ponzi scheme).
Back to yesterday's Blog: "Lessons learned." Surely we should be learning a lesson about the need for proper reviews; we're seeing way too many cases of them not being done, and yet prospects and customers rely upon the results. In the last year billions of dollars have been lost which might otherwise not have been: the notion of an "ounce of prevention" surely applies here, yes?
p.s., Question: How do you know when a politician is lying? Answer: His lips are moving. Couldn't help but think of that when I read of UK Prime Minister Gordon Brown's duplicity regarding the release of the Lockerbie bomber. So, who can you trust? As Reagan put it, trust but verify.
Tuesday, September 1, 2009
Lessons learned
"Financial crisis of past year offers many lessons," reads the CFA Institute's lead story in today's e-mail news brief. The story is from The Wall Street Journal and points out how "we are nearly six months into one of the most impressive bull markets in memory," with the DJIA up 46% and the NASDAQ up 60% since early March. While some trepidation remains, many seem relieved that the worst appears to be over. Good timing, perhaps, because many firms will soon begin their budget planning for 2010.
The past year has provided us with the opportunity to discover how our formulas behave when markets are really down ... so far down that they caused 36-month (and longer) cumulative returns to be negative, which in turn resulted in some occasionally strange-looking results. Both the Sharpe and Information ratios can appear to be in error, as they come out opposite to what we might expect. As a result, a few of us are beginning to pursue suggested rules for how one might address this situation, should it occur again. This might be one of the lessons we learn from this crisis.
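A small example with invented numbers shows the oddity: when the excess return is negative, the manager with the higher volatility ends up with the less negative, and therefore seemingly better, Sharpe ratio.

```python
def sharpe_ratio(annualized_return, risk_free_rate, annualized_std_dev):
    """Excess return over the risk-free rate per unit of volatility."""
    return (annualized_return - risk_free_rate) / annualized_std_dev

rf = 0.02
# Two hypothetical managers with the same negative return but different volatility
low_vol = sharpe_ratio(-0.10, rf, 0.08)    # -1.50
high_vol = sharpe_ratio(-0.10, rf, 0.20)   # -0.60

print("Low-volatility manager: ", low_vol)
print("High-volatility manager:", high_vol)  # looks 'better' despite taking more risk
```

The Information ratio behaves the same way when the excess return versus the benchmark is negative, which is why some of us think supplemental rules for interpreting these ratios in down markets are worth pursuing.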