Wednesday, December 30, 2009

Performance & ethics


Calculating rates of return really isn't that difficult. And while we may debate whether returns should be calculated using money-weighting or time-weighting, such debates pale in comparison with the broader issue of ethics.

I must confess that I initially thought that the Certificate in Investment Performance Measurement (CIPM) Program's emphasis on ethics seemed a bit excessive. And perhaps the exam questions could be geared more to the issues and situations that performance analysts and performance heads are more likely to encounter. But the issue of ethics shouldn't be ignored. We seem to be reminded of this on a fairly regular basis.

Yesterday's Wall Street Journal has an article on Raj Rajaratnam and his Galleon hedge fund. Recall that Mr. Rajaratnam has been accused of trading on inside information. His prowess at building relationships, which allegedly resulted in him gaining access to confidential information, allowed him to build a successful business and amass a fortune of more than a billion dollars.

Galleon's returns were apparently quite impressive. And one might not find any fault with the accuracy of the valuations or the return methodology employed. But what is the value of the returns if, as has been suggested, they reflect the results of illegal activity?

What is the responsibility of the performance analyst or manager, should they have reason to believe that the returns reflect illicit activity? Perhaps questions like these should be added to the CIPM program. What would YOU do, if you had reason to believe that the firm's results were arrived at through illegal means? Something to ponder, yes? 

p.s., We are investigating another fraud case which might involve a firm that claimed compliance with the Global Investment Performance Standards (GIPS(R)). We will provide details once we've done further vetting of the information we've seen so far.

Tuesday, December 29, 2009

Active vs. Passive Voice


When commenting on someone's writing I often recommend using the active voice. A recent example may help.

A client sent us their GIPS(R) Policies & Procedures document to review. It included the following: "there are few industry-accepted standards for calculation and presentation of after-tax returns"; the nominalizations ("calculation and presentation") give the sentence a passive feel. I suggested a more active construction; for example, "there are few industry-accepted standards to calculate and present after-tax returns." It's shorter, more direct, and arguably reads better.

I, like most people, used to write almost exclusively in the passive voice, but eventually learned the difference and try to use active as often as possible. It's fine to include some passive voice, but most of the writing should be in the active mode. When I review client documents, such as their firm's policies to support their compliance with the Global Investment Performance Standards, this is one of the items I'll comment on. Clearly, it's up to the client to switch, but most recognize the benefits and seem to appreciate the suggestions.

In a piece posted today, Susan Weiner commented at length on this topic and provided additional resources to support one's transition to the active voice. I suggest you check it out!

Good news for 2010, as predicted by the yield curve


In a recent blog piece, Larry Kudlow discussed how the yield curve is portending a positive economic environment for 2010; here's part of what he wrote: 

"The Yield Curve is Signalling Bigger Growth 

"What’s a yield curve and why is it so important? 

"Well, the curve itself measures Treasury interest rates, by maturity, from 91-day T-bills all the way out to 30-year bonds. It’s the difference between the long rates and the short rates that tells a key story about the future of the economy.

"When the curve is wide and upward sloping, as it is today, it tells us that the economic future is good. When the curve is upside down, or inverted, with short rates above long rates, it tells us that something is amiss -- such as a credit crunch and a recession.

"The inverted curve is abnormal, the positive curve is normal. We have returned to normalcy, and then some. Right now, the difference between long and short Treasury rates is as wide as any time in history. With the Fed pumping in all that money and anchoring the short rate at zero, investors are now charging the Treasury a higher interest rate for buying its bonds. That’s as it should be. The time preference of money simply means that the investor will hold Treasury bonds for a longer period of time, but he or she is going to charge a higher rate. That is a normal risk profile.

"The yield curve may be the best single forecasting predictor there is. When it was inverted or flat for most of 2006, 2007, and the early part of 2008, it correctly predicted big trouble ahead. Right now it is forecasting a much stronger economy in 2010 than most people think possible."


Good news for investors and everyone else, too!

Monday, December 28, 2009

Standard Deviation ... a risk measure or not?


Standard deviation is a much misunderstood measure, in spite of its common use.

First, is it a risk measure? It depends on who you ask. It's evident that Nobel Laureate Bill Sharpe considers it to be one, since it serves this purpose in his eponymous risk-adjusted measure. Our firm's research has shown that it is the most commonly used risk measure.

And yet, there are many who claim that it does anything but measure risk. What's your definition of risk? If it's the inability to meet a client's objectives, how can standard deviation capture that? But for decades individuals have looked at risk simply as volatility.

As to volatility, is it a measure of volatility or variability? In an e-mail response to this writer, Bill Sharpe said that the two terms can be used in an equivalent manner.

The GIPS(R) (Global Investment Performance Standards) 2010 exposure draft includes a proposed requirement for compliant firms to report the three year annualized standard deviation, which appears to have survived the public's criticism and will be part of the rules, effective 1 January 2011. But, will it be called a "risk measure"? This remains unclear.
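For the curious, here's a minimal sketch of how such a three-year annualized figure is commonly computed. The details (36 monthly returns, the sample statistic, square-root-of-12 annualization) are one common convention, my assumptions rather than the Standards' wording:

```python
import numpy as np

def three_year_annualized_std(monthly_returns):
    """Annualized ex-post standard deviation from 36 monthly returns,
    using the sample statistic (ddof=1) and sqrt(12) annualization --
    one common convention, not the only one."""
    r = np.asarray(monthly_returns)
    if r.size != 36:
        raise ValueError("requires exactly three years of monthly returns")
    return r.std(ddof=1) * np.sqrt(12)
```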

Interpreting standard deviation is a challenge, since the result's meaning depends on the return around which it's measured. Example: your standard deviation is 1%; is this good or bad? If your average return is 20%, then knowing that roughly two-thirds of the distribution falls within plus-or-minus 1% doesn't seem bad at all; but if your average return is 0.50%, doesn't 1% sound a lot bigger? In practice, it's better used to compare managers with one another or with a benchmark. Better yet, use it as part of the Sharpe ratio, which brings risk and return together.
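To make that concrete, here's a minimal sketch (hypothetical numbers) of how the Sharpe ratio puts the same 1% standard deviation into very different contexts:

```python
def sharpe_ratio(mean_return, std_dev, risk_free=0.0):
    # excess return per unit of volatility: Sharpe's risk-adjusted measure
    return (mean_return - risk_free) / std_dev

# the same 1% standard deviation tells two very different stories:
print(sharpe_ratio(0.20, 0.01))    # 20.0 around a 20% average return
print(sharpe_ratio(0.005, 0.01))   # 0.5 around a 0.50% average return
```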

I could go on and on, but will bring this to a close. Bottom line: standard deviation is easy to calculate (if we can agree on how, a topic for another day), in common use, and has a Nobel Prize winner's endorsement. Will it go away? Not a chance. If you're not reporting it, you probably should be.

Sunday, December 27, 2009

First ten years of the 21st century are already done?


In last week's Wall Street Journal, there was an entire section dedicated to revisiting the end of the first decade of the 21st century. But, on this one, the WSJ has it wrong: they're a year too early.

The first century (A.D.) began in year 1 and ended in year 100 (thus, the end of the first century or first 100 years). The second century began 101 and ended 200 (the second century's conclusion). Fast forward: the 19th century began in 1801 and ended in 1900 (the 19th 100 years). And, the 20th century began in 1901 and concluded in 2000, not 1999! Yes, I know that MANY, MANY people thought that they welcomed in the new millennium January 1, 2000 but they were a year early as the 21st century began January 1, 2001. Therefore, the first decade ends December 31, 2010, not December 31, 2009 as the WSJ (and unfortunately, others) are trying to persuade us to believe.

Time marches on quickly enough without our news media trying to rush it along even faster.

p.s., Think I may be wrong on when the new millennium began? Check this out (towards the bottom)! Looks pretty official to me!

Saturday, December 19, 2009

A holiday break


I decided that this week will be a break from blogging. And so, I wish all of our Christian readers a blessed and very Merry Christmas; to our Jewish readers, a somewhat belated Happy Chanukah; and to everyone, a prosperous, healthy, safe and Happy New Year! May 2010 be a wonderful one for all of us.

And finally, as Tiny Tim offered, God Bless Us, Every One!

Thursday, December 17, 2009

Inflation...an interview with John Longo, PhD

Our friend and colleague, Rutgers University professor John Longo, was recently interviewed on CNBC. We suggest you have a look & listen:

Attribution with an index that values only monthly


A client asked me how they should handle the following situation: the benchmark their client assigned to them is a blend of two indexes, one of which values only monthly (the other, daily). They calculate performance attribution daily and smooth the one index's monthly return across each day of the month (thus assuming a nice linear progression through the month). Is this okay?

No, I would say in general it isn't. And, it occurred to me that there's a better solution!

Our client calculates daily transaction-based attribution. It is my contention (shared by others) that transaction-based attribution can easily be done on a monthly basis with no loss in accuracy: there is no need to do it daily. By virtue of the transaction process, any trades that occur are picked up in the weight, which is the beginning value plus weighted flows (i.e., trades); a sketch follows. And so, the ideal approach (I believe) is to use monthly transaction-based attribution. This is especially true since they have no valid daily benchmark...the daily results are spurious, at best.
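Here's a minimal sketch of the "beginning plus weighted flows" idea (the function, names, and start-of-day day-weighting convention are mine; conventions vary):

```python
def flow_adjusted_value(bmv, flows, days_in_period):
    """Beginning value plus day-weighted external flows (i.e., trades).
    Assumes start-of-day treatment: a flow on day d is in the portfolio
    for (days_in_period - d + 1) of the period's days.
    flows: list of (day, amount) tuples, with day 1-based."""
    weighted = sum(amount * (days_in_period - day + 1) / days_in_period
                   for day, amount in flows)
    return bmv + weighted

# hypothetical segment: $1,000,000 start, a $100,000 purchase on day 10 of 30
print(flow_adjusted_value(1_000_000, [(10, 100_000)], 30))  # 1,070,000.0
```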

Wednesday, December 16, 2009

Interaction effect: show it or hide it?


One of the controversial topics in performance attribution has to do with interaction. This effect exists in several models, but we'll limit our discussion to its presence in the Brinson-Fachler model. Recall that there are three effects in all: allocation, selection, and interaction (the formulas are shown below).

The interaction effect represents the impact of the interplay between the allocation and selection decisions. There have been several good reasons offered why one shouldn't show interaction; when not showing it, we typically change the weight in selection to the portfolio weight, meaning that the selection decision is expanded to include interaction (though this is rarely stated as such).

For segment i, the Brinson-Fachler effects are:

$A_i = (w_{P,i} - w_{B,i})(r_{B,i} - r_B)$ (allocation)
$S_i = w_{B,i}(r_{P,i} - r_{B,i})$ (selection)
$I_i = (w_{P,i} - w_{B,i})(r_{P,i} - r_{B,i})$ (interaction)

where $w_{P,i}$ and $w_{B,i}$ are the portfolio and benchmark weights of segment i, $r_{P,i}$ and $r_{B,i}$ the corresponding segment returns, and $r_B$ the overall benchmark return.

If we reflect on what the possible results can be with interaction, we conclude the following (a short code sketch follows the list):
  • overweighting (positive) times outperformance (positive) = positive result
  • overweighting (positive) times underperformance (negative) = negative result
  • underweighting (negative) times outperformance (positive) = negative result
  • underweighting (negative) times underperformance (negative) = positive result.
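To make the arithmetic concrete, here's a minimal sketch of the three effects for a single segment (variable names are mine):

```python
def brinson_fachler(wp, wb, rp, rb, rb_total):
    """Brinson-Fachler effects for one segment.
    wp, wb: portfolio and benchmark weights of the segment
    rp, rb: portfolio and benchmark returns of the segment
    rb_total: the overall benchmark return"""
    allocation = (wp - wb) * (rb - rb_total)
    selection = wb * (rp - rb)
    interaction = (wp - wb) * (rp - rb)
    return allocation, selection, interaction

# underweighting times underperformance yields a positive interaction effect:
print(brinson_fachler(wp=0.05, wb=0.10, rp=0.02, rb=0.04, rb_total=0.03))
# (-0.0005, -0.002, 0.001)
```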
One argument I posit for showing interaction is that not doing so burdens the selection effect with negative results it doesn't deserve (e.g., if we have outperformance but underweighting). The response might be, "well, in the end it will all work out, because there are times when selection will get a positive interaction effect when it's undeserved." We used to hold this view almost universally regarding returns, thinking that the midpoint Dietz method, for example, was perfectly acceptable, because the times we penalized the manager by assuming a late-arriving flow was present for the full period would be counterbalanced at some point in the future by the benefit of an early flow only being counted for half the time. But we've wised up and now promote much more accurate methods.

During our recent Trends in Attribution (TIA) conference, one panelist pointed out the "flaw" of underweighting times underperformance: a positive result. What DOES one make of this? I think it's easy, after some recent reflection: this shows that the allocation decision was a wise one! That is, they underweighted at a time when there was underperformance (would you propose to overweight?).

In an article I wrote on this topic I proposed that if you want to "eliminate" the interaction effect, you create a "black box" to analyze it when it shows up and allocate it in a conscious and methodical way. I still hold to this belief and to the value of the interaction effect, and oppose any arbitrary assignment to selection or allocation. Space doesn't permit much more at this time, but perhaps I'll take this up again at a later date.

Spaulding, David. "Should the Interaction Effect be Allocated? A 'Black Box' Approach to Interaction." The Journal of Performance Measurement, Spring 2008.

Tuesday, December 15, 2009

Technology survey


We are in the midst of conducting our Performance Measurement Technology survey and are eager to hear from you regarding your use of technology. So please respond today!

As always:
  • your information is kept confidential
  • as a participant, you'll receive a complimentary copy of the results!
So please join in. Thanks!

How big should our performance measurement team be?


One question that occasionally surfaces is how big a performance department should be. There is no simple answer and it's difficult to decide by simply doing a comparison with other firms. Some key points:
  • all firms are different. Okay, maybe a bit of hyperbole, but there are a variety of models that can be applied, which make comparisons difficult.
  • assets under management may be a good gauge to say how much you can afford but not necessarily how big you should be.
The criteria that should be considered when deciding staffing include:
  • number of accounts: the more you have, the larger your staff has to be
  • reliance on spreadsheets: the greater the reliance, the greater the amount of manual effort and therefore the greater need for staff. Packaged software typically has many tools that aid the team, thus reducing the manual effort.
  • activities of the group: performance measurement can address returns (portfolio and subportfolio), risk, attribution (equity, fixed income, balanced, etc.), and the Global Investment Performance Standards (GIPS(R)). If the firm is only engaged in returns, their staffing needs won't be too great; but, if they take on more tasks, more staff may be needed
  • reporting: if the group handles both internal and client reporting, staff is needed; also, if there are custom reports or if the frequency varies from daily to monthly to quarterly, more staff may be needed
  • reliance on and support from back office operations: if the operations staff is clued into the needs of performance, they may be helpful in addressing data issues; but, if they are clueless (okay, a bit strong, but you get the point) about those needs, they may not be as helpful as one might hope, meaning more work falls to the performance measurement team.
  • client demands: if the firm caters to a lot of individual needs, including custom reporting or fielding inquiries, the workload increases, meaning the need for staff.
There are no doubt other criteria to consider, and your thoughts are invited.

The answer to the amount of staff one needs is hardly a simple one. Hopefully this at least gives you some ideas.

Monday, December 14, 2009

VaR: the new benchmark for managing financial risk


I subscribe to a few Google Alerts, so that I'm aware when interesting stories are posted. One is for Value at Risk, and yesterday there was a link to a book by Philippe Jorion with the headline as noted above.

"The new benchmark for managing financial risk?"

Perhaps someone should alert Nassim Taleb of such a statement. I'd say that the jury is definitely out on the value of VaR. And to suggest that it's the "new benchmark" to manage risk is hyperbole run amok. PLEASE!

I'm thinking some dialogue is in order to discuss the various risk measures and their respective merits. I'll begin shortly with my views.

Sunday, December 13, 2009

"we shall see no more financial panics."


With the U.S. Congress's plans to tinker with our current financial structure, including a weakening, or at least shifting, of the Federal Reserve's powers, one might expect someone to utter such a line as is shown above. Actually, this was spoken by Charles S. Hamlin, one of the original seven Federal Reserve Board governors, in 1914 ("Under the Federal Reserve system we shall have no more financial panics"). It was less than ten years before our country observed a panic and we've had several since.

While we might applaud our elected officials for trying to prevent a recurrence of the crisis we only now seem to be emerging from, one must also wonder how much is being done simply for showmanship or to appeal to the electorate. It will also be interesting to see whether some of the planned provisions are eventually overturned by the Supreme Court, given the Federal Reserve's autonomy.

Is Congress overreacting? Are its actions appropriate? Unfortunately, the anti-Wall Street fervor has grown to the point where such steps aren't surprising. Hopefully, we won't suffer as a result.

Source for Hamlin quote: Moss, David. The Federal Reserve and the Banking Crisis of 1931. "Harvard Business School Cases."

Saturday, December 12, 2009

"more regulation, higher risk aversion, more aversion to math."


As I recently mentioned, occasionally I go exploring to look at other blogs; that's how I came across Steve Hsu's. Although Steve is a physics professor at the University of Oregon, he feels comfortable opining on the investment industry. (Recall that Jose Menchero, too, was a physics professor and now develops risk models for BARRA.) He offered his views on the future, in response to a question that was posed to him.

The question "do you think now would be a good time for someone to consider beginning a career as a quant?" engendered the following from Steve: "A sea change is coming: more regulation, higher risk aversion, more aversion to math.

"On CNBC you can already hear non-mathematical industry types trying to blame it all on models.

"Nevertheless the long term trend is still to greater securitization and more complex derivatives -- i.e., more quants! But hopefully people will be much more skeptical of models and their assumptions. Personally I think it is the more mathematically sophisticated types (or, more specifically, perhaps those who come from physics and other mathematical but data driven subjects ;-) who are likely to be more skeptical about a particular model and how it can fail. People who don't understand math have to take the whole thing as a black box and can't look at individual moving parts."


If you visit Steve's blog you'll see the graphic I've included (with his permission) above. He suggests that it resembles the rather complex network of CDS (Credit Default Swap) contracts. He uses this, and the relationships that existed, to describe the systemic risk that resulted.

He introduces a concept which I found quite interesting: "some entities are too connected to fail, as opposed to too BIG to fail. Systemic risk is all about complexity."

I also have to say that I'm intrigued by his "aversion to math" statement. Finance and investing are all about math. But perhaps the complexity of some of the deals IS too extensive and needs to be toned back a bit. Will it, though? Only time will tell.

Friday, December 11, 2009

What's the standard way to handle cash flows?


Here, I'm addressing cash flows from a return calculation perspective.

First, recall that in two weeks we'll have a requirement for GIPS(R) compliant firms to revalue portfolios for large cash flows, where the firm decides what "large" represents. But what happens when the flows aren't large?

I'd guess that most firms treat all flows as start-of-day events. There are, of course, some firms that treat them as end-of-day events. My recommendation: treat inflows as start-of-day and outflows as end-of-day events. I'd like to provide you with a clear rationale for this, but I can't: the evidence is anecdotal, though it suggests this is the best approach. Of late we've seen many firms and vendors begin to adopt this practice. That doesn't make it "right," but it at least adds support for my position. A minimal sketch follows.
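This sketch assumes a single-day period; the function and names are mine:

```python
def one_day_return(bmv, emv, inflows=0.0, outflows=0.0):
    """Day's return with inflows treated as start-of-day (added to the
    denominator, at work all day) and outflows as end-of-day (added back
    to the numerator, since the money was invested through the day)."""
    return (emv + outflows) / (bmv + inflows) - 1

# hypothetical: start at 100, a 10 inflow at the open, end the day at 112
print(one_day_return(bmv=100.0, emv=112.0, inflows=10.0))  # ~0.0182, or 1.82%
```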

Regardless of what approach you take, ensure that you're consistent in its application.

p.s., I just realized I didn't answer my own question! There is NO standard on how cash flows should be handled. GIPS doesn't address this, either. The only "standard" would be to be consistent...i.e., don't use cash flow timing to "game" the process to generate higher returns.

Wednesday, December 9, 2009

Significant cash flows ... so easy to get wrong


Perhaps it's partly because "significant" and "large" can be viewed synonymously that we find confusion regarding how firms can handle these within GIPS(R). One of our clients was clearly thinking that "significant" means the same as "large" when it comes to the standards, but it doesn't. Significant flows deal with the opportunity to temporarily remove portfolios from composites in the event of large (sorry, I mean significant) cash flows...the idea being that all of a sudden you get a lot of cash and it may take time to invest it, so remove the account while you get the cash invested. On the other hand, effective 1/1/10, GIPS compliant firms will be required to revalue portfolios for large flows.

I'm at an outsourcing client who has a client who NETS cash flows during the month to determine if the account should be removed. NETS cash flows? And why do they do this??? Hopefully we'll find out, but let's think about this.

Their level to remove the account is >25% of the beginning market value. And so, we start with $1 million. On August 3 a $300,000 contribution is made. Then, on August 24, the account withdraws $250,000. Net = $300,000 - $250,000 = $50,000; $50,000 / $1,000,000 = 5%, which is < 25%; therefore, don't remove the account. WHAT???

If the firm actually wishes to take advantage of the optional significant cash flow provision, they should have removed the account because of the August 3rd contribution (because presumably there's a bunch of cash they need to get invested). The fact that a few weeks later the client decided to withdraw funds has nothing to do with the earlier flow. In my opinion, they are confused!
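A minimal sketch of how the test arguably should work, flagging each flow on its own (the threshold and names are illustrative):

```python
def flows_triggering_removal(bmv, flows, threshold=0.25):
    """Test each external flow individually against the significant cash
    flow threshold; netting flows across the month can hide large flows."""
    return [f for f in flows if abs(f) / bmv > threshold]

# the August example: $1,000,000 start, +$300,000 on the 3rd, -$250,000 on the 24th
print(flows_triggering_removal(1_000_000, [300_000, -250_000]))  # [300000]
```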

Also, in my opinion: this is (a) wrong and (b) shouldn't be permitted by the firm's verifier.

Tuesday, December 8, 2009

Still trying to get my arms around the error correction rules


I'm still a bit befuddled by the GIPS(R) error correction guidance, which goes into effect in three weeks. I guess I'm relieved that I'm not alone, but I'd feel a lot better if it was crystal clear. The guidance provides for four levels of errors:
  1. Non-material: ["Take no action"] error is so minor that no action is needed. 
  2. Non-material: ["Correct the presentation with no disclosure of the change"] you'll (a) correct the error, (b) not document it (i.e., not disclose it in your GIPS presentation), and (c) not tell anyone
  3. Non-material: ["Correct the compliant presentation with disclosure of the change and no distribution of the corrected presentation"] you'll (a) correct the error, (b) document it, but (c) not tell anyone
  4. Material: ["Correct the presentation with disclosure of the change and make every reasonable effort to provide a corrected presentation to all prospective clients and other parties that received the erroneous presentation"] you'll (a) correct it, (b) document it, and (c) tell anyone who got a copy of the previously erroneous copy that an error was corrected.
Recall that the exposure draft included the requirement to document any material errors for 12 months (as is written in the guidance). Also recall that the recently issued Q&A says that you don't need to do this, at least in all cases. It reads: "Firms are not required to disclose the material error in a compliant presentation that is provided to prospective clients that did not receive the erroneous presentation. However, for a minimum of 12 months following the correction of the presentation, if the firm is not able [to] determine if a particular prospective client has received the materially erroneous presentation, then the prospective client must receive the corrected presentation containing disclosure of the material error. This may result in the preparation of two versions of the corrected compliant presentation to be used for a minimum of 12 months following the correction of the presentation."

At last week's Performance Measurement Forum meeting in Orlando we discussed this issue at some length. It appears that you don't have to have rules for all four cases. My advice regarding the four levels (by level):
  1. Identify the kinds of errors that you'd not bother correcting (spelling and grammatical errors immediately come to mind; you get to decide what else)
  2. Identify the level that you feel needs to be fixed but is so minor that you don't need to tell anyone (an example here might be a correction that increases your return).
  3. I don't see a need for this level. I base this on the Q&A, which basically says you don't have to document the error unless you can't determine if you gave a prospect a copy of the prior version. If this is the sole condition, since this level doesn't require redistribution, why would you document it?
  4. I'd establish the rules that would cause this to happen, but ensure that I've got records of who gets copies of what presentations, so that if a material error IS discovered, I can get them a copy of the revised presentation, if appropriate.
The big question that you need to decide on: what's material? We've heard thresholds as high as 100 basis points, which I happen to think is pretty darn high. I think it depends (a) on the asset class (I'd have different levels for bonds than stocks, unless you adopt an approach like I suggest below) and (b) on the relative size of the returns. For example, a 100 bp difference between 2% and 1% is a lot different than between 51% and 50%, right?

I'm thinking that perhaps a relative rule might be workable. For example, an error equal to 5% of the return itself (example: if the return was 2% and has been corrected to 1.89%, that's 11 bps, and 11/200 = 5.5%, so it's material). Perhaps at returns > 10% I'd set the level at 10% (again, of the return itself; example: if my return was 12% but corrected to 10.78%, that's a 122 bp drop, and 122/1200 = 10.17%, so it's material).
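Here's a sketch of that relative rule (returns as decimals; the 5% and 10% thresholds are the illustrative ones above, not recommendations):

```python
def is_material(original, corrected):
    """Relative materiality test: the error is material if it exceeds
    5% of the return itself (10% when the return is above 10%)."""
    threshold = 0.10 if abs(original) > 0.10 else 0.05
    return abs(original - corrected) / abs(original) > threshold

print(is_material(0.02, 0.0189))   # True: 11 bps is 5.5% of a 2% return
print(is_material(0.12, 0.1078))   # True: 122 bps is ~10.2% of a 12% return
```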

This approach might be better than simply saying "100 bps" or "50 bps." I'm not saying to adopt these thresholds ... you decide what works for you. But perhaps this approach would provide the necessary flexibility so that the rules will make sense? Your thoughts are, as always, invited.

Value at Risk Article


The current issue of the NYSSA's (New York Society of Security Analysts) journal contains an article I wrote on Value at Risk. I invite you to view it.

It's intended as a basic introduction to how one can calculate VaR using the variance/covariance (aka correlation) method, which was championed by JP Morgan's RiskMetrics and is therefore in fairly common use. Although we haven't done any research on this, I suspect that this approach is the most used of the three (the others being historical simulation and Monte Carlo simulation). A minimal sketch appears below.
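For a flavor of the method, here's a sketch of a parametric VaR calculation (all inputs hypothetical; the article covers the actual mechanics):

```python
import numpy as np

Z_95 = 1.645  # one-tailed 95% quantile of the standard normal

def parametric_var(weights, cov, portfolio_value, z=Z_95):
    """Variance/covariance VaR: assumes normally distributed returns, so
    portfolio volatility comes straight from the covariance matrix."""
    sigma = np.sqrt(weights @ cov @ weights)
    return portfolio_value * z * sigma

# hypothetical 60/40 two-asset portfolio with a monthly covariance matrix
w = np.array([0.60, 0.40])
cov = np.array([[0.0025, 0.0006],
                [0.0006, 0.0016]])
print(parametric_var(w, cov, 1_000_000))  # roughly $62,500 one-month VaR at 95%
```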

At our recent Performance Measurement Forum meeting in Orlando a colleague volunteered to write an article for The Journal of Performance Measurement where he'll provide a broad benefits / shortcomings assessment. We've seen some harshly critical reviews done of late, so a more objective review will be welcome.

I still remain skeptical of VaR's usefulness, but am open to hearing other perspectives. Hope you are, too!

Saturday, December 5, 2009

Abbreviations & acronyms


This post has NOTHING to do with performance. It's the weekend, and I simply want to comment on the use of abbreviations and acronyms. First, I fully support their use. In fact, I favor even more use of them. Let's first consider the difference: as www.dictionary.com points out, an acronym is "a word formed from the initial letters or groups of letters of words in a set phrase or series of words." Examples: RADAR, ASAP, and WAC. In the world of investment performance we have GIPS. Acronyms are abbreviations, but not all abbreviations are acronyms. For example, "PPS" (performance presentation standards) is an abbreviation, but since it isn't a word (i.e., you can't say it; you only say the letters individually) it's not an acronym. Understand, however, that there are a LOT of folks who would say that PPS IS an acronym, and even though they're technically wrong, society seems to be loosening the strict meaning of this term. But, we won't debate this here.

I happen to be a big fan of acronyms. We named our annual performance conference in such a way that it forms an acronym (PMAR = PeeMar). Our fall event was called TIA (Spanish for Aunt, but that's merely a coincidence and has no relevance).

I spent almost five years in the Army ... the military LOVES acronyms. TRADOC = Training and Doctrine Command; USAFAS = You-sa-fas = United States Army Field Artillery School. We already cited WAC, which is the Women's Army Corps.

Some military acronyms have become commonly used and often misused. Take, for example, FUBAR and SNAFU. They're actually somewhat profane, though I'll use the softer translations: FUBAR = fouled up beyond all recognition; SNAFU = situation normal, all fouled up. When Bill Clinton was President he once remarked that they "had a SNAFU." I would suggest that first, the President shouldn't use such a term. Second, I don't believe it was the proper way to phrase it, though I won't be a stickler on this point. To me, any time you use a word or expression you should know what it means; otherwise you may offend (take folks who regularly use the Yiddish word "schmuck." This is NOT a nice word and shouldn't be used in mixed company...sorry).

The "word" ASAP is often used and, in my opinion, carries more weight than it's full meaning. If I tell you "I need this report ASAP" versus "I need this report as soon as possible," which sounds more urgent? I suggest the former, even though their meanings are identical.

Three abbreviations that are in common use in the military but haven't made it into the outside world are IAW, NLT, and COB (COB isn't usually pronounced as a word, "cob," but rather is treated as an abbreviation: c-o-b). IAW = in accordance with; NLT = no later than; COB = close of business. For example, "I need your report IAW my memo of July 7th NLT COB this Friday." Army guys use this wording ALL the time ... shouldn't it fit into our writing, too?

I rarely text on my phone and know that there is a host of abbreviations that folks use to save keystrokes. I am also aware that some young people now write reports for school using these abbreviations, which is causing some concerns: students need to know how to properly write before using shorthand notation. I don't intend to adopt these shorthand expressions in my writing, though I think a well placed abbreviation or acronym can be quite helpful. Hope you agree.

Friday, December 4, 2009

Field-work free verifications


As promised, I commented further on the topic of verification firms avoiding "field work" in our newsletter. A colleague from another verification provider responded with the following: "I was reading your most recent article regarding verifiers who do not conduct fieldwork.  I just wanted to let you know that I, too, am disheartened by this process.  I do not understand how you can do a verification without going to your client’s office.  I am talking to a prospect now that told me that their verifier had not been in their offices for more than 5 years, yet they still get a verification report (and, by the way, they are not compliant)."

So, we're not alone in our disdain for such a practice. And, as evidenced by this individual's experience and observation, the absence of field work can often mean that the firm is non-compliant.

Should field work be mandatory? Unclear. But it should be considered "best practice," at a minimum. 

Thursday, December 3, 2009

Dealing with breaks


A client recently asked about dealing with "breaks" in performance. First, what IS a break? We'd define it as a temporary loss of discretion over a client's assets, during which time no trading can be done. Breaks can be caused by changes in custodians, among other reasons. Another term for a "break" is a "gap." I opined on this topic a few years ago in our newsletter, regarding the calculation of returns (that is, can you link across gaps for returns; a sketch of what linking means appears below). The client's question had to do with GIPS(R).
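For reference, "linking across" a gap just means geometrically chaining the subperiod returns on either side of it; a quick sketch (whether one should link is the question at hand):

```python
from math import prod

def link_returns(subperiod_returns):
    # geometric linking of subperiod returns into one cumulative return
    return prod(1 + r for r in subperiod_returns) - 1

# hypothetical: 2% before the break, 1.5% after it
print(link_returns([0.02, 0.015]))  # ~0.0353, or 3.53%
```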

If you search the GIPS Q&A database you'll only find one item dealing with this topic, and it doesn't really address temporary breaks. I recall discussing this a few years back with a group and we couldn't arrive at any clear consensus. Some thought ANY break meant that performance stops, while others felt that there should be an assessment as to whether or not there would likely have been any trading during the break: if not, then what's the harm in linking across it?

I tend to be in the latter group's camp: that is, when one has a break, they should determine the likelihood of trading occurring. If, for example, the manager is very much a buy-and-hold manager, who trades infrequently, then a gap of even a few weeks might not cause a problem. However, if the manager trades almost daily, then even a short break would be problematic.

Ideally, the manager has other, similar accounts against which they can compare the account with the break, to determine whether there truly was an absence of trading. This is where the verifier can come in...to provide an additional degree of analysis.

It would be nice to see something formal regarding this topic, but for now there is little guidance. Hopefully mine won't conflict with anything that comes in an official capacity.

Tuesday, December 1, 2009

Reflections on an old Chinese statistical joke


I've mentioned in the past the value I'm seeing in a design book (Measurement, Design and Analysis, by Pedhazur & Schmelkin) I'm reading for a course. The authors reference a book by H. Zeisel (Say it with figures) who "pointed out that, according to an old Chinese statistical joke, the rate of mortality among people who are visited by a doctor is much higher than among those who are not visited by a doctor."
 

Reflect for a moment on this joke. Once you get it, think about how this applies to the world of GIPS(R) verifications, when a non-random approach is used.

If the verifier selects only a certain group of composites to review (e.g., "marketed"), might it be quite likely that they will conform with the standards, especially if the firm being verified knows that there's a greater likelihood of only these being checked?
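A toy simulation of the sampling-bias point (every number here is invented): if only marketed composites are ever examined, and those are kept cleaner than the rest, the verification tells us little about the whole firm.

```python
import random

random.seed(1)
# toy population: 'marketed' composites are maintained more carefully
composites = [{"marketed": m, "compliant": random.random() < (0.95 if m else 0.70)}
              for m in [True] * 20 + [False] * 80]

marketed = [c for c in composites if c["marketed"]]
print(sum(c["compliant"] for c in marketed) / len(marketed))      # looks great
print(sum(c["compliant"] for c in composites) / len(composites))  # the real rate
```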

These non-random verifications can be likened to what Pedhazur & Schmelkin refer to as "quasi-experimental designs," "that suffer, to a greater or lesser extent, from serious shortcomings and pitfalls...[and] that utmost circumspection be exercised in the interpretation of the results, and in conclusions...based on them."

Perhaps I'm beginning to sound like a broken record (whatever a "record" is), but by continuing to bring this subject up periodically, I am hopeful that the GIPS Verification Subcommittee will take action and come out in opposition to such practices, as they are fraught with problems.