Monday, November 30, 2009

Liquidity risk

In a recent comment (see "Waltzing through the blogosphere," November 28, 2009) Steve Campisi wrote about the need to measure liquidity risk, citing the difficulties the Yale Endowment fund had. It just so happens that this month's Institutional Investor cover story deals with the huge drop in assets major colleges have seen in their endowments for the fiscal year ended this past June 30. The drops have been staggering, with the average loss roughly 20 percent.

I'm intrigued by the notion of liquidity risk, but I can see huge challenges with it, too. How does one properly assess this risk, especially when the variables that can impact it can be so significant? It was probably a lot easier selling your Dubai World bonds a few weeks ago than it is today, right? During a flight to quality, liquidity dries up. If you don't have to sell an asset that is down, you can perhaps afford to wait around to see if it recovers its value. But there are times when you must sell, and that's when you encounter liquidity risk and lower prices. Long-Term Capital Management WAS able to recover the value of just about all of its assets ... the problem was it couldn't afford to hang around and had to be bailed out. Perhaps stress testing your portfolio would be one way to determine what your risk is.

More detail will help, and we hope to see this topic explored further.

Calculating returns ... the wrong way

We got a call recently from a firm that wants us to review its method for deriving returns. The firm takes its beginning market value plus cash flows to determine an average capital base; it then determines the account's appreciation for the period and calculates a simple average to derive the return. Extremely intuitive, yes?

But sadly wrong, too. (Hopefully you agree.)

This isn't the first time we've encountered firms that employ a proprietary approach to derive returns. Not long ago during a conference lunch I was sitting next to an attendee who told me of the approach his firm had developed. Again, quite intuitive ... one that surely no one would find objectionable. Unfortunately, it, too, was invalid.

Long ago, before the publishing of performance books and various standards, individuals were often compelled to figure out a return methodology on their own. Fortunately, this is no longer the case. So, if you have such a formula in your shop, perhaps you should have it checked out.

Saturday, November 28, 2009

Canopy Financial ... another scam discovered too late


We are just learning of yet another firm that scammed millions from investors: Canopy Financial. They forged audit reports from KPMG, which apparently WASN'T their auditor. We haven't yet heard whether they also claimed GIPS(R) compliance, so have no way of knowing whether their alleged chicanery extended quite this far.

Canopy had been featured in BusinessWeek and managed to be ranked #12 on the Inc. 500 list.

Even though GIPS verifications aren't "designed to detect fraud," they can provide an extra degree of confidence in the manager's credibility. We aren't advocating that verification be extended to include fraud detection, but believe that verifiers can serve as an extra level of scrutiny.

Who would have thought to call KPMG to check to ensure that they had, in fact, audited Canopy? Some additional checking may be necessary going forward, especially with certain firms, to help catch these guys quicker. Unfortunately, we can't control shameful behavior, but we have to figure out ways to catch it. In this case, their investment bank actually helped them raise tens of millions of dollars. Is someone asleep at the wheel? Unclear at this time, but we're sure more will be forthcoming. While this doesn't rise to the level of a Bernie Madoff scam, many were no doubt impacted by it.

Waltzing through the blogosphere

I guess it's not surprising that as a blogger, I occasionally wander around looking at other blogs ... I regularly visit about a dozen and am always looking for new ones to add to my list. Today I've visited several new sites and have picked up a few ideas.

As far as I know, my blog remains unique in its focus, addressing topics such as the GIPS(R) standards, rates of return, performance attribution, and risk on a regular basis.

To date we've attracted some 500 visitors, which I guess is pretty good as a start. Our newsletter has several thousand subscribers, so perhaps we have a bit to go to get to that level. Visitors have come from every continent and many different countries. And while only a few have signed up as "friends" so far, we know that many circle back regularly. The idea of being a "friend" is that you are notified when a new post is added.


I'm pleased that we've had a few comments regarding posts, as this suggests that some agree, and others disagree, with what has been offered.

I've now been blogging for six months and am open to your ideas, thoughts, suggestions. Feel free to contact me directly (DSpaulding@SpauldingGrp.com) or by posting a comment. Thanks!

Friday, November 27, 2009

Rates of return ... come again?

I stumbled upon a website today that provided the following brief explanation about returns:

"To evaluate the performance of a portfolio manager, you measure average portfolio returns. A rate of return (ROR) is a percentage that reflects the appreciation or depreciation in the value of a portfolio or asset"

We measure "average" returns? I don't think so. Average returns have been shown to have zero value. A classic example: Year 1 +100 return, Year 2 - 50% return, average= (100 - 50) / 2 = 25 percent. Now, let's use some dollars: start with $100; at the end of year 1 you're at $200, then at the end of year 2 you're at $100, meaning zero percent return.

In addition, while there are times when a return will reflect the appreciation or depreciation, once we introduce cash flows, forget about it! Recall that time-weighting can yield funny situations, like having a positive return but losing money.

I guess the lesson is: be careful about what you read on the Internet ... it may not always be correct.

Thursday, November 26, 2009

Happy Thanksgiving!!!

While we in the United States celebrate Thanksgiving today, everyone should no doubt be able to reflect on the things they give thanks for. My list is so very long. It includes our management and staff, our clients, our vendors, and colleagues. My wife and family are blessings in my life. The fact that I'm fortunate to live in a country that affords us so many freedoms is also something to give thanks for.

The fact that our firm managed to weather this severe market downturn offers us something to give thanks for. We know that this past year has been a challenging one for just about everyone, with many suffering greatly. We're seeing a turnaround and look forward to much improvement in the coming year.

Perhaps this year I'm most thankful for the birth of our grandson, Brady, who will turn four months old this coming Tuesday.

We wish you and your family a blessed and wonderful Thanksgiving.

Wednesday, November 25, 2009

No fieldwork necessary

I am putting the finishing touches on this month's newsletter, and am including comments regarding the revelation that certain GIPS(R) verifiers feel that it's not necessary for them to conduct "fieldwork." That is, they feel that they can conduct a thorough verification from the comfort of their offices.

In our verification advertising piece we state that we don't conduct "remote verifications." Sorry, but we're not quite that good. We feel we NEED to be in our clients' offices to review their records, investigate issues that might arise, and engage in spontaneous dialogue, when necessary. We also enjoy meeting with our clients face-to-face, in order to enhance our relationship with them. We consider our clients friends, and look forward to these visits.

The quality of verifications has been questioned for as long as the work has been done; as far back as 1992 we wanted a process to verify the verifiers. We at least need some guidance regarding this matter: hopefully it will be forthcoming.

Monday, November 23, 2009

More precise returns ... why did it take SO long???

I'm in Toronto for a couple days teaching a performance class for a client. And, as often happens, a thought occurred that seemed interesting.

Peter Dietz introduced the "original Dietz formula" in 1966, which treats cash flows as occurring mid-period; this was adjusted several years later to day-weight the flows (what has become known as the "Modified Dietz formula"). Why wasn't the day-weighted version introduced from the start?

Also, in 1968 the Bank Administration Institute pointed out that the Exact Method (where we revalue portfolios whenever flows occur) is the best approach to time-weighting. So why didn't firms put it to use sooner?

In both cases the answer is the same: technology. In 1966 and 1968 we didn't have personal computers, spreadsheets, or even calculators. No doubt many folks calculated returns by hand (math by hand? perish the thought!). The mid-point method is pretty simple to do by hand, but adding day-weighting could complicate the process quite a bit.
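To make the difference concrete, here's a small sketch of both calculations in Python (the dollar amounts and the flow date are hypothetical):

# Sketch: original Dietz (mid-period flows) vs. Modified Dietz (day-weighted flows).
# Hypothetical values: $1,000,000 start, $1,080,000 end, one $50,000 inflow
# on day 20 of a 30-day month.
bmv, emv = 1_000_000.0, 1_080_000.0
flows = [(20, 50_000.0)]   # (day of flow, amount)
days_in_period = 30

total_flow = sum(cf for _, cf in flows)
gain = emv - bmv - total_flow

# Original Dietz: every flow is assumed to occur at mid-period.
original_dietz = gain / (bmv + total_flow / 2)

# Modified Dietz: each flow is weighted by the fraction of the period it was invested.
weighted_flows = sum(cf * (days_in_period - day) / days_in_period for day, cf in flows)
modified_dietz = gain / (bmv + weighted_flows)

print(f"Original Dietz: {original_dietz:.4%}")   # ~2.93%
print(f"Modified Dietz: {modified_dietz:.4%}")   # ~2.95%

Trivial with a spreadsheet or an interpreter; tedious with pencil and paper, especially across hundreds of accounts.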

And to use the Exact method would have required access to daily prices: in the mid '60s? Right! Forget about it!

Makes sense, yes?

Friday, November 20, 2009

Classics reviewed!

We're pleased to announce that our new publication, Classics in Investment Performance Measurement, was reviewed by Jerry Tempelman, CFA, in the November issue of the CFA Institute and CIPM program's Investment Performance Measurement Newsletter.

The book has been quite a success and has been well received by the industry. We're very pleased and grateful for Jerry's review and its appearance in the newsletter.

If you'd like more information about the book contact Patrick Fowler (PFowler@SpauldingGrp.com; 732-873-5700) or visit our webstore.

Thursday, November 19, 2009

TIA III has arrived

The Spaulding Group, Inc. is hosting its 3rd annual Trends in Attribution Conference. This year's program is at the Heldrich Hotel in New Brunswick, NJ. Our turnout is much better than one might have expected, given the economy ... we have more folks here this year than in 2008!!!

Our sponsors for this year's event are:
  • The CIPM Program
  • DST Global
  • Eagle Investment Systems
  • RIMES
  • SS&C
  • StatPro
  • Wilshire Analytics
If you're joining us ... great! If you couldn't make this year's program, you can purchase a copy of the audio (contact Patrick Fowler (PFowler@SpauldingGrp.com) for details).

Wednesday, November 18, 2009

Attribution ... without securities?!?!?!?

At last week's Performance Measurement Forum meeting in Rome we briefly discussed the issue of "pricing effects." That is, the effect that can arise when your portfolio's prices don't match what's in the index. Recall that we discussed this on November 4.

What I failed to mention earlier is this: what happens if you don't have the index's constituents? For example, what happens if your bond index provider doesn't give you the securities it holds and their details (such as prices)?

A: You obviously won't know whether or not there IS a pricing effect.
B: If there is one, you're out of luck! Unless you can persuade the index provider to give up these details, you can't report on the effect, MEANING that your selection effect will be less than accurate. How much less? We just won't know. Sorry :-(

Can you still have attribution if you're missing security details? YES, of course! As long as you have the market values and weights for sectors or subsectors or other groupings you're interested in, you can run attribution! :-)
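For instance, here's a minimal sketch of Brinson-style attribution run purely from sector-level data; all of the weights and returns below are hypothetical:

# Sketch: Brinson-Fachler attribution from sector-level data only (no security detail).
# All weights and returns are hypothetical.
sectors = {
    #             (port wt, port ret, bench wt, bench ret)
    "Financials": (0.40,    0.020,    0.30,     0.015),
    "Energy":     (0.25,    0.010,    0.35,     0.012),
    "Utilities":  (0.35,    0.030,    0.35,     0.025),
}

bench_total = sum(wb * rb for _, _, wb, rb in sectors.values())

for name, (wp, rp, wb, rb) in sectors.items():
    allocation = (wp - wb) * (rb - bench_total)   # effect of over/underweighting the sector
    selection = wb * (rp - rb)                    # effect of beating (or lagging) the sector
    interaction = (wp - wb) * (rp - rb)
    print(f"{name}: allocation {allocation:+.4%}, "
          f"selection {selection:+.4%}, interaction {interaction:+.4%}")

No security-level holdings appear anywhere: sector weights and returns are all the model needs.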

Tuesday, November 17, 2009

What about money weighting?

I got an e-mail from a retail client this week ... that is, a retail client whose rep works for one of our brokerage clients. This hadn't happened before.

This individual's rep had passed him one of the issues of our newsletter to explain how the firm calculates its returns. This apparently didn't satisfy the end client, who sought clarity from me.

He indicated that he found the way returns are calculated by his broker too confusing, and advocated a "money weighted" approach: music to my ears! He went to the trouble to show me the IRR formula, which I thought was kind of funny. In my response I indicated that I've often commented favorably about the IRR and money-weighting, and suggested he review other issues of the newsletter. That's no easy task, given that we're now in our 7th year and so have quite a lot of issues out there, though we do provide summaries of each on the website.
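For anyone who hasn't seen it, the money-weighted return is simply the rate that grows the starting value and the interim flows into the ending value. A sketch (with hypothetical flows), solving for the rate by bisection:

# Sketch: solving for the money-weighted return (IRR) by bisection.
# Hypothetical flows: $100,000 start, $20,000 added halfway through, $130,000 at the end.
bmv, emv = 100_000.0, 130_000.0
flows = [(0.5, 20_000.0)]  # (fraction of the period elapsed at the flow, amount)

def ending_value(rate):
    value = bmv * (1 + rate)
    for t, cf in flows:
        value += cf * (1 + rate) ** (1 - t)   # each flow grows for the rest of the period
    return value

lo, hi = -0.99, 10.0
for _ in range(100):                           # narrow in on the rate that fits
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if ending_value(mid) < emv else (lo, mid)

print(f"Money-weighted return: {(lo + hi) / 2:.4%}")   # ~9.1% here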

I have found that when you show clients (institutional or retail) money-weighted returns, they feel that the returns are much more meaningful. Granted, time-weighting has its place, and shouldn't be replaced by money-weighting to represent how the manager did (save for private equity managers). Our crusade to get more firms to adopt money-weighting continues to gain new followers.

Monday, November 16, 2009

Value at risk ... "where's the value?"

At last week's Performance Measurement Forum meeting in Rome I mentioned how, during this most recent economic crisis, the Value at Risk metric demonstrated how little value it provides: did any firm's use of this measure provide it with any degree of accuracy? And yet the measure clearly has its supporters.

I am wrapping up an article for the New York Society of Security Analysts' (NYSSA) journal on this topic: specifically, the benefits and shortcomings of VaR. In a nutshell, the measure on the surface seems like an excellent one, as it offers a very intuitive view of a portfolio's risk: "the most you can lose is $5 million..." Simple. Easy to grasp. And it's a forward-looking measure, as opposed to one that says what the risk was. How better to report risk? There are, of course, two other bits of information that go along with such a report: "...over the next week at a 95% confidence level."

The addition of the time period only enhances VaR's value. BUT the confidence level can be a bit confusing. What does it mean? Well, first, the 95% tells us this is the worst loss we should expect in 95 periods out of 100; the missing 5% means that the loss can, in reality, be worse ... perhaps a lot worse.
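To illustrate how simple the basic report is to produce, here's a sketch of the plain-vanilla parametric version (every input below is hypothetical):

# Sketch: parametric VaR under a normality assumption. All inputs are hypothetical.
# "The most you can lose is $X over the next week at a 95% confidence level."
portfolio_value = 100_000_000.0   # $100 million portfolio
weekly_vol = 0.02                 # 2% weekly standard deviation of returns
z_95 = 1.645                      # one-tailed 95% quantile of the standard normal

var_95 = portfolio_value * weekly_vol * z_95
print(f"1-week 95% VaR: ${var_95:,.0f}")   # roughly $3.3 million

# The catch: 1 week in 20 the loss can exceed this figure, and the normality
# assumption says nothing about how much worse that tail can get.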

I won't go into more detail on this topic here; you can "read all about it" when my article is published. Suffice it to say, the VaR critics have been having a field day since the most recent market downturn hit a year ago.

Friday, November 13, 2009

Does this help??? Perhaps, but it's still confusing.

Today at our European Forum meeting we learned that a Q&A has been issued regarding the Error Correction Guidance Statement. Recall that this GS includes a requirement to report material errors in a presentation for a period of 12 months; this was a major change from the initial draft. The GIPS 2010 exposure draft included this provision; it was soundly criticized by the public and was therefore withdrawn. Consequently, we have a GS that goes into effect in 1 1/2 months and a standard that won't include it.

The Q&A is available on the GIPS website, and reads as follows:

The GIPS Guidance Statement on Error Correction states that firms must disclose in a compliant presentation any changes resulting from a material error for at least 12 months following the correction of the presentation. Does this mean that we have to disclose that a material error occurred to prospective clients that we know have not received the erroneous presentation?

Firms are not required to disclose the material error in a compliant presentation that is provided to prospective clients that did not receive the erroneous presentation. However, for a minimum of 12 months following the correction of the presentation, if the firm is not able to determine if a particular prospective client has received the materially erroneous presentation, then the prospective client must receive the corrected presentation containing disclosure of the material error. This may result in the preparation of two versions of the corrected compliant presentation to be used for a minimum of 12 months following the correction of the presentation.

This is a major change to what's in the GS and for now it doesn't appear that the GS is going to be revised. Consequently, firms have to be aware of this non-subtle change.

On the GIPS website there's also a list of what's been decided so far, which includes the following as it relates to this matter:

Error Correction – The EC decided to remove the requirement to disclose for 12 months any changes in a compliant presentation resulting from a material error. This requirement was drawn from the Error Correction Guidance Statement which goes into effect on 1 January 2010. The EC stated that it is not the intent to force firms to disclose errors to parties that never received the erroneous presentation. The EC committed to reviewing the Error Correction Guidance Statement as soon as possible and will issue any necessary clarifications. Until such time, firms are reminded that the Error Correction Guidance Statement will become effective in its current form on 1 January 2010.

I'm pleased to see that it isn't the EC's intent to "force firms to disclose errors to parties that never received the erroneous presentation." Unfortunately, with the GS that was published and a Q&A which may not get a lot of attention, there will no doubt be a fair amount of confusion. I would hope that the GS would be withdrawn completely until it can be recrafted and reintroduced, perhaps with public comment.

Thursday, November 12, 2009

Performance Measurement Forum's Autumn 2009 Meeting: Rome

Patrick Fowler and I are in Rome this week for the Autumn session of the group's European chapter. This is the second time we've held the meeting in this beautiful and historic city (the first time was when Italy was still using the lira) and the third in Italy (we held a meeting in Milan not long ago).

As always, we expect the sessions to be quite interesting and energetic. So much is going on and so much has happened since we last met in the Spring. This is the first meeting since we established the blog in June.

The Forum is a members-only group that's been in existence for 11 years. Several of our members have belonged since the beginning: in fact, our first member of the group came from Europe (we actually launched the North America chapter a bit earlier than the European one).

Because of the group's high degree of interaction, we limit the number of members. We are pleased that a couple of members from the United States will join us this week. Not only is the meeting a great excuse to visit this great city, but many also find the subtle differences between the two regions worth investigating.

I can't go into a great deal of detail regarding what we discuss, but will share some highlights over the coming days. Ciao!

To learn more about the forum, contact Patrick Fowler at 732-873-5700 or PFowler@SpauldingGrp.com.

Wednesday, November 11, 2009

Sampling ... what does it mean?

The GIPS(R) standards allow verifiers to use sampling to conduct their reviews. This makes perfect sense ... otherwise, the costs might be prohibitive if every account, for every time period, for every composite had to be checked. Also, sampling has long been an acceptable method to test hypotheses, evaluate opinions, and conduct research.

As Pedhazur & Schmelkin point out, "Sampling permeates nearly every facet of our lives...decisions, impressions, opinions, beliefs, and the like are based on partial information...limited observations, bits and pieces of information, are generally resorted to when forming impressions and drawing conclusions about people, groups, objects, events, and other aspects of our environment." They reference Samuel Johnson who said, "You don't have to eat the whole ox to know that the meat is tough." They also wrote that "Formal sampling is a process aimed at obtaining a representative portion of some whole, thereby affording valid inferences and generalizations to it."

But what DOES sampling mean in the world of GIPS verifications? Presumably, it's the selection of an adequate number of observations to yield enough information about the firm to allow the verifier to draw a reasonable conclusion regarding the firm's composite construction process. But what percentage is adequate? The standards offer no guidance.

Perhaps this is like the word "obscenity" and the famous remark of former U.S. Supreme Court Justice Potter Stewart (a statement that my friend and associate Herb Chain often cited when we taught GIPS courses together): that he didn't know how to define it, but knew it when he saw it. This can be further likened to the word "materiality," which is difficult to pin down in much detail. But after a (very) brief review of opinions from other firms we quickly realized that there is some disparity regarding sampling and verification.

We are speaking with a client that has roughly 1,000 composites ... a very large number by anyone's measure, yes? What size would constitute a relevant sample? We did a "mini survey" and, perhaps not surprisingly, got a mix of responses. At the low end we have roughly 2% and at the high end 10-15 percent. We tend to lean more towards the 10-15% figure, with an expectation that we would look at the composites the firm markets, plus additional composites selected on a mix of "random" and "non-random" bases. In other words, each year we won't be looking at the same composites, but will vary many of them.

What if the verifier only looks at the firm's "marketed" composites? Some might think this makes sense, since it focuses on those composites that will most likely be presented to prospects. But if the firm knows that only these composites will be reviewed, what motivation is there to bother with the others? Such a selection is "biased," and hardly a fair way to evaluate a firm's compliance. A more appropriate approach would be to include the "marketed" composites but also select a random sample of "non-marketed" ones, in order to conduct a better, more conclusive and objective test.
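In sketch form, such a selection might look something like this (the composite names and the 10% target are hypothetical):

# Sketch: a verification sample mixing marketed composites with a random draw of
# non-marketed ones. Names and the 10% sample target are hypothetical.
import random

composites = [f"Composite {i}" for i in range(1, 1001)]   # ~1,000 composites
marketed = set(random.sample(composites, 40))             # the firm's marketed list

target = int(len(composites) * 0.10)                      # e.g., a 10% overall sample
non_marketed = [c for c in composites if c not in marketed]
random_picks = random.sample(non_marketed, max(target - len(marketed), 0))

sample = sorted(marketed) + random_picks
print(f"Sample size: {len(sample)} of {len(composites)}")
# Re-draw the random portion each year so the same composites aren't reviewed every time.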

It's important to remember that the GIPS standards do not make a distinction between "marketed" and "non-marketed" composites. In fact, the terms don't exist within the standards. Unfortunately, certain verifiers have, over the years, promoted the notion that firms need only be concerned with the "marketed" ones, in spite of being corrected on this multiple times. Such a posture only results in confusion and, unfortunately, many firms believing they're compliant when, in fact, they aren't: compliance is at the "firm" level, not at the "composite" or "marketed composite" levels.

RESOURCE:
Pedhazur, Elazar J. & Liora Pedhazur Schmelkin. Measurement, Design and Analysis. Psychology Press, 1991.

Saturday, November 7, 2009

Trends in Attribution ... near record attendance

In spite of this year's market downturn, The Spaulding Group's upcoming Trends in Attribution Symposium (TIA) will have a very good turnout. We're obviously quite pleased by this.

With roughly two weeks to go there's still time to be a part of this event. It's a single day that's dedicated to this important topic. We've assembled a great group of speakers to address a variety of issues. We will repeat our popular "Fast Attribution" session, which involves a group of panelists who touch on a host of topics in rapid succession.

To learn more, visit the conference website, contact Patrick Fowler (PFowler@SpauldingGrp.com) or Chris Spaulding (CSpaulding@SpauldingGrp.com), or call our offices (732-873-5700).

Friday, November 6, 2009

Cooking & GIPS Discretion

In explaining GIPS(R) discretion to a client, I hit upon a metaphor: cooking.

Let's say you go out to a fancy restaurant that is serving the "chef's special" that evening. It sounds quite appealing, except you'd like to alter it in some way. Perhaps instead of the fish being cooked medium, as the chef suggests, you want it rare or well done. Or perhaps you ask that a different sauce be used.

The waiter goes back to the chef with your request. The first option is that the chef refuses to do what you ask: if you won't eat it as he/she recommends, then you can go elsewhere. This is equivalent to the firm that refuses any restrictions: you take our investment strategy as we define it or find another manager.

Let's say that the chef says "fine, I will do what the customer asks, but this will not be representative of my special. Please don't suggest to others that they ask this patron how the meal tastes, because it has been altered and no longer represents either my skill or preferences." This is equivalent to a portfolio being considered "non-discretionary" for GIPS purposes.

What if the request is quite minor (instead of preparing the fish medium, please make it medium well)? The chef might be happy to do this and believe that the change is such that the customer will still benefit from his/her creativity and cooking skills. This is like a portfolio with restrictions that is deemed "discretionary" for GIPS purposes: the request is a minor one, such that the account will look very much like the other accounts in the composite.

There's one more variation. Let's say that the request is extreme enough that the meal will not represent the "chef's special." However, what the customer has asked for sounds like a great idea. An example from my personal experience may help: one of the restaurants we frequent serves a pasta dish with shrimp; I ask that they substitute scallops. CLEARLY this has altered the meal enough that it won't represent the originally advertised item. However, the chef may decide that he/she likes this idea and add it to the menu. This is what can happen when a client imposes a restriction that alters the account such that it won't represent the strategy, but ends up being a new product. The example I often use in our training classes is the case of "no sin stocks." Perhaps the result won't represent the original strategy, but it might cause the firm to create a new composite, a variation of the first, such that the firm now has two somewhat similar composites: for example, "U.S. Equities with sin" and "U.S. Equities without sin." Okay, maybe you won't call them this, but you get the idea.

Discretion, from a GIPS perspective, can be confusing. We hope this helps!

Thursday, November 5, 2009

"New" Attribution Effects - II

Yesterday I discussed the "pricing" effect. Today I want to briefly touch on another "new" effect. By "new," I don't necessarily mean that it's a recently introduced effect, but rather that it's new relative to the standard effects we often see reported. I don't recall seeing anything in writing on these effects before, so I hope this will prove helpful.

This second effect is the "trading" effect, which reflects the contribution to the excess return that results from trading activity during the period. This should not be confused with transaction cost measurement, which is a whole separate science, so to speak, that measures such things as "market impact" and the "volume-weighted average price" (VWAP, for short) to assess how efficient the firm is in executing trades. The latter should, in theory, be part of attribution analysis, too, but I'm not aware that this is commonly done today.

By "trading" effect, we are taking into consideration the trades that take place during the period versus a situation where no trading was done. Here, we essentially are comparing the results from a holdings-based model (which ignores trades) with a transaction-based model (which incorporates them into the analysis).

The way I've heard to derive this effect is to use both a holdings-based and a transaction-based model, and take the difference: the result is the trading effect.
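A sketch of that arithmetic, using hypothetical sector-level contributions:

# Sketch: trading effect per sector, as the gap between a transaction-based and a
# holdings-based attribution result. All of the contributions below are hypothetical.
sectors = {
    #             (holdings-based contribution, transaction-based contribution)
    "Financials": (0.0050, 0.0062),
    "Energy":     (0.0031, 0.0028),
    "Utilities":  (0.0040, 0.0040),
}

total = 0.0
for name, (holdings, transaction) in sectors.items():
    effect = transaction - holdings   # what trading during the period added (or cost)
    total += effect
    print(f"{name}: trading effect {effect:+.4%}")

print(f"Total trading effect: {total:+.4%}")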

One might ask what the value of this effect is. Arguably, it has little value, as it (in my opinion) merely points out the difference between the two models, where one would expect the transaction-based results to be superior. Is it worth the time and cost to derive these results? For what purpose are they being employed? I'm aware that some folks do, in fact, calculate them; I'm just not sure I'd recommend doing so.

To me, the greater value would be to tie your transaction cost measurement analysis into attribution. As noted above, I don't believe this is common today, but it should be pursued. Something to discuss further, no doubt.

Wednesday, November 4, 2009

"New" attribution effects - I

We recently met with a client for whom we're designing a fixed income attribution system. During our meeting the subject of the "pricing" or "price difference" effect came up. This effect identifies the impact when the portfolio and benchmark have different prices for the same security. This is more likely to happen with bonds, because (a) they're less liquid and (b) for the most part they aren't exchange traded, so we probably won't have market prices for most of them.

The conundrum firms face when they encounter different prices is how to deal with them: should they reprice the benchmark with the portfolio's prices, or vice versa? Neither of these options is very good: if you reprice the benchmark, then its return won't match what's published; if you reprice the portfolio, then you're using prices you don't feel are correct and you won't match the return that may be shown in other reports.

The "pricing effect" is a better way to deal with this as it provides visibility without altering returns. It may, however, raise questions which you'll have to be prepared to answer. And, it can only be done if you have the benchmark's constituents (if you don't, then you won't be able to identify pricing inconsistencies).

This topic deserves more detail than we can provide here, so I'll take it up in our newsletter. Stay tuned!

Tuesday, November 3, 2009

Announcing our question & answer protocol

Through this blog I recently received a question that wasn't related to a specific post. I opted not to respond because (a) I didn't know who it was from (it was sent anonymously) and (b) it didn't relate to the post it was attached to. I will be happy to respond to questions relating to a blog piece, whether they're sent anonymously or not.

And I will be happy (usually) to respond to questions on non-blog-initiated topics, provided the sender identifies themselves. Feel free to ask questions regarding GIPS(R), attribution, risk measurement, returns, etc. You can send these directly to my e-mail address (DSpaulding@SpauldingGrp.com). If we feel the question should be responded to in the blog or our newsletter, we will do this, and the questioner will be sent a response directly, too.

Hope this sounds reasonable. As always, your thoughts are invited.

Monday, November 2, 2009

Risk periodicity revisited

Our monthly newsletter went out last week, and we immediately received inquiries and comments about one of the topics: risk. I extended my recent blog remarks on this subject to shed further light on it, but there's still confusion and a need for greater clarity.

What are the potential time periods we could choose for risk statistics? Well, being realistic we have years, quarters, months, and days.

One of the key aspects of any risk measurement is to have a big enough sample to make the results valuable. Many risk statistics assume that the return distribution is normal. And while many have found that this is an invalid assumption, the basic rule as to the expected quantity of inputs still generally holds: 30. Most firms, I believe, will use 36 months, although many obviously use more or less, but for now let's assume we're going to use 36.

Okay, so let's consider again our options. Years: is it realistic to expect many money management firms to have 36 years of returns? And even if they did, would there be a lot of value in reviewing them to assess risk? Probably not. I don't know about you, but the Dave Spaulding of 36 years ago is quite different from today's model, and the same can probably be said for many firms, along with their portfolio managers and research staffs. Looking at a 36-year period might prove of interest, but not of much value when it comes to risk assessment.

Let's try quarters: 36 quarters equals nine years. Many firms can support this. We could derive the risk for a rolling 36-quarter basis, yes? But do people think in these terms? I wouldn't rule this out, but doubt if it would be very popular.

Next we have months. We only need three years to come up with 36 months. This is achievable by many firms and provides recent enough information to give greater confidence that the management hasn't changed too much in this time. We start to see "noise" appearing a bit more here, though. Noise, as our newsletter points out, can refer to a few things, including the inaccuracies that often exist in daily valuations and the excessive volatility that might appear, which is often smoothed out on a monthly basis. While one might still sense its presence, it isn't as sharp with months as it is with days. Think about some of the huge shifts we've seen in the market on a daily basis; by the time we get to month-end, they've often been counterbalanced or offset by huge swings going the other way. Is there value for the investor in including such movements in their analysis?

For daily, all we need is about two months of management to have 36 events, so this should be easy for everyone save for the firm that just hung out its shingle. A concern with daily is that we may be looking too closely at the numbers; after all, aren't investors supposed to have long-term horizons? Can we be thinking long term if we're staring at days? Granted, I look at the market throughout the day myself, but I also have to confess that doing so can cause a certain degree of anxiety.

The market often reacts to big swings from day to day, where some investors see big positive moves as opportunities for short-term profit, while others see big drops as chances to pick up issues they're interested in at a bargain price. The fact that a 150+ point up movement is followed by a 200+ point down movement reflects activity that will produce large volatility numbers but probably doesn't help a lot with risk evaluation. The chance of error creeping in is also much greater with daily data, partly because most firms don't reconcile daily; they reconcile monthly. Even benchmark providers often won't correct errors in daily data (they may not correct them in end-of-month data, either, but we at least hope that they would).

One must also take into consideration comparability. Morningstar, for example, uses monthly data. And while they shouldn't be considered the "last word in periodicity," they are arguably using an approach that they have found has the greatest value. The GIPS(R) Executive Committee has decided to require a 36-month standard deviation effective January 2011. And I believe that most firms employ months in their calculations.

An interesting argument is to tie the report's periodicity to the frequency of reports: e.g., if you meet with a client quarterly, use quarterly periods. There may be a variety of reasons for quarterly sessions; to think that this means we want quarterly periods is, I think, a stretch. One could easily confirm the client's wishes here. If they DO want quarterly, then fine, provide it. But often they are looking to the manager to be the "expert" on the frequency to employ.

But at the end of the day (as it happens to be as I complete this note), firms can choose whatever measure they feel best meets their needs. But beware: risk statistics computed from yearly, monthly, and daily returns aren't comparable, even over the same 36-year span ... try it to confirm this statement. Or, for that matter, a 3-year period where you used different periodicities ... not comparable. Sorry.
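If you'd like to try it, here's a quick sketch (the returns are randomly generated, purely for illustration):

# Sketch: standard deviation over the same 3 years, computed from monthly vs.
# quarterly returns. The returns are randomly generated for illustration.
import random
import statistics

random.seed(1)
monthly = [random.gauss(0.007, 0.04) for _ in range(36)]   # 36 hypothetical monthly returns

# Compound each consecutive 3 months into a quarterly return.
quarterly = []
for q in range(12):
    growth = 1.0
    for r in monthly[q * 3:q * 3 + 3]:
        growth *= (1 + r)
    quarterly.append(growth - 1)

print(f"Std dev from 36 monthly returns:   {statistics.stdev(monthly):.4%}")
print(f"Std dev from 12 quarterly returns: {statistics.stdev(quarterly):.4%}")
# Same portfolio, same three years, different periodicity: very different numbers.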