Thursday, January 31, 2013

Spinning with GIPS, and performance in general

The concept of "spinning" is often associated with politics and the news: that is, how one spins a story, so as to alter its appearance or focus. Spinning thus is often seen in a negative light, as it may appear to be a trick to present something differently to better suit the presenter's goals or intentions.

There are times when, in working with GIPS(R) (Global Investment Performance Standards) verification clients, we spin something in order to meet the client's goals or objectives. I don't see this as a negative; rather, I see it as taking advantage of some of the flexibility that's inherent in the Standards.

The Standards cannot address every single issue that can arise; and so, we have to look at the rules and determine how best to handle what we're presented with.

One client told me they wanted to raise a composite's minimum, to reduce the dispersion that often appears. It turns out that the dispersion occurs chiefly because of smaller accounts, which are more sensitive to cash flows. Another issue with smaller accounts is that they occasionally hold odd (not round) lots, which often mean higher transaction costs.

An important point to make before I continue: the firm does not market to these smaller accounts; their threshold for marketing is actually much higher than their current or proposed minimum.

Is the firm able to execute the composite's strategy for accounts below the proposed new minimum? That is, for these "smaller accounts"? The answer is "yes," which would, at least at first blush, suggest that they can't raise the minimum.

Let's consider spinning this a bit. Since these smaller accounts are (a) susceptible to being influenced by cash flows and (b) incur higher transaction costs, is it reasonable to say that they are therefore not representative of the composite? That is, would the firm consider using one of these accounts as the "representative portfolio" for the composite? The answer: "no."

Therefore, I think it is reasonable to adjust their minimum. The smaller accounts exhibit returns that are not like those of larger accounts, and the firm does not market to accounts at or near this level.

Did we use some "spinning"? I think so. But, I think it is reasonable. Spinning is not necessarily a bad thing. It simply means that we alter how we look at a situation. We can do it with GIPS, other aspects of performance, and much more.

Your thoughts?

Wednesday, January 30, 2013

A "work around" for portability

I had a discussion this week with a firm that is thinking of bringing someone on. That person meets two of the three criteria for GIPS(R) (Global Investment Performance Standards) portability:
  • Substantially all the decision makers are coming along
  • The strategy will continue to be managed the same way it has been historically
The challenge, which is typical, is records. What to do, what to do?

Fortunately, the manager has access to one or two accounts, meaning access to custodial records. BUT, not enough records for GIPS. This performance can be shown as a "rep account," and as "supplemental" to the GIPS presentation. Not ideal, but it can be done, with appropriate disclosures, of course!

Friday, January 25, 2013

Dealing with the underfunding of pension funds

I recently interviewed Phil Page of Cardano for The Journal of Performance Measurement(r), regarding the all-too-common situation that many (most?) pension funds, both private and public, are facing: underfunding. Phil identified three possible solutions:

1) increase contributions (from the company, for private; from the taxpayer, for public)
2) increase returns (most likely from taking on more risk)
3) reduce benefits.

When I was the (part-time; meaning only 25 hours a week!) mayor of the Township of North Brunswick (2000-2003), the New Jersey state police pension fund was OVER funded; and so, we got a gift: we weren't obligated to provide our traditional funding, saving us around $1 million a year, money we used to reduce property taxes. A few years later, after I was out of office, and as a result of the crisis started by the subprime mortgage debacle, the fund was severely underfunded, meaning more money was needed from the municipalities, meaning increases in property taxes.

It appears evident that many pension funds are attempting to increase their returns. Just this week, in The Wall Street Journal, we had two articles addressing the investment methodologies being employed ("Money Magic: Bonds Act Like Stocks" (1/22/13) and "Pensions Bet Big With Private Equity" (1/25/13)). Have the funds' appetites for risk taking increased, or have they concluded that the alternatives (increased funding / reduced benefits) are steps they wish to avoid?

On the public side, I can see how we got into the mess we're in. Politicians, seeking the support of police unions, for example, were all too quick to grant attractive pension benefits (I recall hearing of a municipality in California that gives its police officers 85% of their highest pay after working something like 25 years...AMAZING!). When the benefits are initially granted, or even boosted, the politicians in office are immune to their impact, or at least the brunt of it. And so, they reap the benefits of union support, while passing the funding challenges on to their successors: clever. Imagine what would happen today if a mayor or governor tried to get their police unions to agree to reduced benefits.

As for additional funding, as a taxpayer I have little interest in providing more of my money to pension funds that have, due to no fault of mine, run low, so that the beneficiaries can receive pensions that most of us in the private sector could only dream about.

While I am happy that I am no longer a politician, I fear for how this mess will be resolved.

Thursday, January 24, 2013

Investment success: skill vs. luck

I recently finished Michael J. Mauboussin's newest book, The Success Equation: Untangling Skill and Luck in Business, Sports, and Investing, and strongly recommend it. He raises a number of interesting points, and I hope to interview him for The Journal of Performance Measurement(R).

He discusses the use of a luck/skill continuum, ranging from activities which are 100% luck (e.g., lotteries) to 100% skill. He points out that sports can be placed on this scale, where we will find that, for example, the results of basketball games are much more based on skill than baseball games.

Investing involves a certain amount of luck, too. We know that investors chase returns, and so can expect that the #1 mutual fund for 2012 will see an increase in assets under management shortly after the list is published. It's also likely that this fund may be relatively small, and perhaps with not a very long track record. This won't stop investors who (think they) "know a good thing when they see it."

Investor John Paulson received a tremendous amount of attention as the result of his phenomenal success in short selling subprime mortgages. I questioned whether it was fair to reward him with such accolades, and was criticized by at least one fellow who revered Paulson's investment acumen. Clearly his amazing success in 2007 was worthy of attention. But how much luck factored in? Has he continued to have similar results? While I don't monitor his performance closely, it appears the results have been somewhat mixed. How much luck versus skill played a role is difficult to assess.

Sadly, measuring luck is difficult. As Mauboussin points out, when a fair degree of luck is involved (e.g., in blackjack), one can make good decisions (e.g., doubling down when the dealer shows a "6") and do poorly, or make bad decisions (e.g., doubling down when your cards total 12; yes, I've seen this done) and both avoid "busting" and win the hand!

How much do we want to talk about luck when it comes to investing? How much would we want to let our clients or prospects know about the degree that luck played in our investment successes? Imagine if performance attribution models were enhanced, so they could represent, for example:
  • Allocation effect
  • Selection effect
  • Interaction effect (I have to have this, especially for my friends Carl & Steve)
  • Good luck effect
  • Bad luck effect.
And what if most of the excess return came from "good luck"? How would that go over?
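For readers less familiar with the standard effects listed above, here is a minimal sketch of a Brinson-style attribution; the sector weights and returns are invented, and no model, of course, actually computes the "luck" effects:

```python
# Minimal sketch of Brinson-style attribution effects (all numbers invented).
def brinson_effects(wp, wb, rp, rb):
    """Per-sector allocation, selection, and interaction effects.

    wp, wb: portfolio and benchmark sector weights
    rp, rb: portfolio and benchmark sector returns
    """
    rb_total = sum(w * r for w, r in zip(wb, rb))  # overall benchmark return
    allocation = [(wpi - wbi) * (rbi - rb_total)
                  for wpi, wbi, rbi in zip(wp, wb, rb)]
    selection = [wbi * (rpi - rbi) for wbi, rpi, rbi in zip(wb, rp, rb)]
    interaction = [(wpi - wbi) * (rpi - rbi)
                   for wpi, wbi, rpi, rbi in zip(wp, wb, rp, rb)]
    return allocation, selection, interaction

# Two-sector example: overweight a sector that outperforms
wp, wb = [0.6, 0.4], [0.5, 0.5]
rp, rb = [0.10, 0.02], [0.08, 0.04]
alloc, sel, inter = brinson_effects(wp, wb, rp, rb)
excess = sum(w * r for w, r in zip(wp, rp)) - sum(w * r for w, r in zip(wb, rb))
# The three effects together explain the excess return exactly
assert abs(sum(alloc) + sum(sel) + sum(inter) - excess) < 1e-12
```

Every basis point of the excess return lands in one of the three buckets; there is simply no residual left over in which "luck" could be reported, which is rather the point.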

Tuesday, January 22, 2013

Belief in small numbers

Nobel prize winner Daniel Kahneman and his long-time colleague, Amos Tversky (who would likely have shared the Nobel Prize, but had sadly died before it was awarded), wrote an article for the Psychological Bulletin in 1971 titled "Belief in the Law of Small Numbers."

Granted, most of us have heard of the law of large numbers; but the law of small numbers? Basically, they point out how we tend to put more stock in results from limited samples than we should.

A blog post earlier this month raised the question of using less than 36 months for standard deviation, so today's post is, in a way, a continuation of that theme.

We are regularly approached by individuals who want their track record verified. Sometimes their track record is merely a few months' old. Granted, becoming compliant with GIPS(R) (Global Investment Performance Standards) is better done as early in the life of the firm as possible, but what these situations typically show is someone who has had great success for a very limited time. Often, it's with their own money. The real question: will it continue? And, as the saying goes, only time will tell.

My wife and I have become big fans of The Big Bang Theory, and watch it whenever we can (including reruns, since most are new to us). In one episode, Howard Wolowitz proposes to Bernadette Rostenkowski, and she understandably turns him down, pointing out that at the time they had had only three dates. Granted, some folks DO get married after a very short courtship, but most take a while before committing.

The author and speaker Harvey Mackay wrote of requiring prospective employees to be interviewed by many folks in his firm (and on occasion, some of his clients or vendors, too). He would engage individuals in multiple meetings himself, as he felt that just one experience wasn't sufficient.

And so, there is evidence that in some ways we do require more than just a small sample; but too often we are prepared to judge based on a few.

Institutional investors typically want a minimum of five years' performance before considering a manager; the retail world is not as disciplined. And perhaps it's a good thing, because if no one was prepared to give managers money to invest, few would make it to five years.

But guidance is important when presenting a track record, in performance and risk terms, for a short time period. Perhaps additional qualifying language is in order!

Thursday, January 17, 2013

Outcome oriented / client centered investing

The concept of orienting your investing so that it's geared to the client's requirements is getting more attention of late. P&I had an article on this subject in their November 12, 2012 issue, and Steve Campisi, CFA has addressed this at several of The Spaulding Group events, including our Performance Measurement, Attribution & Risk (PMAR) conferences. Steve and I are in discussions with another colleague about giving this subject even greater attention and focus.

One of my concerns is the practicality of having each of a plan sponsor's managers sensitive to the same objective; oriented to the same goal(s). Does this make sense? Is this appropriate?

You may have heard of the "Myners Report" (aka, Myners Review) which was delivered by then Gartmore chairman Paul Myners back in 2001. The actual report is about 200 pages long; I recommend this, as well as my books, as effective insomnia cures.

I won't pretend that I've read the entire document, though I did review much of it when it was first published, and I recall strong support for the use of liability-related benchmarks.

The question, I think, should be to whom these benchmarks should be applied. I'd say the plan itself. As for the managers, they should be selected as part of the plan's overall strategy to meet or exceed its benchmark. However, judging these managers by this same benchmark seems incorrect to me. Your thoughts?

Tuesday, January 15, 2013

Valuing a portfolio through a transition

We received an interesting question from a client, which seems appropriate to discuss here:

We have an issue where we are going back and forth with our custodian regarding the proper treatment for a transition:
  • Manager B is scheduled to receive assets from Manager A at the end of day on January 4
  • on January 4, the custodian transfers the securities from Manager A based on January 3 prices (e.g., $5.20 million)
  • the securities are valued at end of day January 4 (e.g., at $5.10 million)
  • the result is an unrealized loss on the securities on January 4
What prices should be used to value these securities? Instructions were given to have the securities transfer at the end of the day on January 4, so the transfer amount and portfolio value were the same and no gains/losses occurred. Under the scenario above, there is a loss that we believe should not occur.

The answer should be based on "discretion": that is, at what point does Manager B have discretion over the assets? This doesn't occur until the end of day on the 4th, and so to use the pricing at the start of the day (i.e., from the end of the prior day) would be unfair and inappropriate, since the manager cannot act on these shares until the transfer occurs.

What about Manager A? Here, we would expect the valuation to end, at least from a performance standpoint, at the point they no longer have discretion. This may have been a few days or even a few weeks or longer before the transfer occurs. From a GIPS(R) (Global Investment Performance Standards) perspective, we'd expect the portfolio to have been pulled out once the manager lost discretion.

And so, we would expect to see a "gap," where no one has discretion.

Manager A was probably reporting to the client through December, and will probably provide a final report in January, which will reflect the transfer. We would expect to see it priced at the close of business on the 4th, though this may not occur.

The key point here, I think, is that for Manager B, control begins at the close of business on the 4th, so that's when the pricing should occur.
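To make the two conventions concrete, here is a minimal sketch using the dollar figures from the question (everything else is illustrative):

```python
# The transition question above, in numbers (dollar values from the example).
prior_day_value = 5.20e6   # Jan 3 closing prices, used by the custodian
eod_jan4_value = 5.10e6    # same securities at Jan 4 closing prices

# Booking the inflow at prior-day prices creates a day-one unrealized loss
# on assets Manager B could not yet trade:
loss_if_prior_day = eod_jan4_value - prior_day_value
print(loss_if_prior_day)   # -100000.0

# That phantom loss shows up as a day-one return for Manager B:
day_one_return = loss_if_prior_day / prior_day_value  # about -1.9%

# Booking the inflow at the Jan 4 close, when discretion begins, produces
# no gain or loss, matching the instructions that were given:
loss_if_eod = eod_jan4_value - eod_jan4_value
print(loss_if_eod)         # 0.0
```

The roughly -1.9% "return" under the prior-day convention is exactly the kind of result no one should be held accountable for, since no one had discretion while it occurred.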

Friday, January 11, 2013

Learning from Cervantes

I am reading, or more correctly, listening to, the classic, Don Quixote, by Miguel de Cervantes. It is a massive work, which rightly deserves the praise it has received.

Perhaps not surprisingly, if you've been a reader of my commentary for long, I have found something within this text to apply here.

In Part II, Chapter 18, we are introduced to Don Lorenzo, a young poet, who is perplexed by Don Quixote's apparent madness, and yet at the same time, great ability at discourse. He asks Don Quixote, "What sciences have you studied?" And the knight's reply is as follows:

"That of Knight Errantry, which is as good as poetry, and a finger or two above it. It is a science that comprehends within itself, all or most of the sciences in the world. For he who professes it must be a jurist, and must know the rules of justice; distributive and equitable, so as to give each one what belongs to him and is due to him. He must be a theologian, so as to give a clear and distinctive reason for the Christian faith he professes, wherever it may be asked of him. He must be a physician, and above all a herbalist, so as in wastes and solitudes to know the herbs that have the property of healing wounds, for a knight errant must not go looking for someone to cure him at every step. He must be an astronomer, so as to know by the stars how many hours of the night have passed and what climate and quarter of the world he is in. He must know mathematics, for at every turn, some occasion will present itself to him. And, putting it aside that he must be adorned with all the virtues, cardinal and theological, to come down to minor particulars. He must, I say, be able to swim as well as Nicholas or Nicolau the fish could, as the story goes. He must know how to shoe a horse, and repair his saddle and bridle. And, to return to higher matters, he must be faithful to God and to his lady. He must be pure in thought, decorous in words, generous in works, valiant in deeds, patient in suffering, compassionate towards the needy, and lastly, an upholder of the truth, though its defense should cost him his life. Of all these qualities, great and small, is a true knight errant made of."
Quite a clear and detailed description of the qualities a knight errant is to possess, yes?
And so, if you were to be asked, what are the qualities that a performance measurement professional should possess, how would you reply? Please contemplate this, as I will take this up in this month's newsletter.

Thursday, January 10, 2013

Strategy vs. Tactics: Lessons from a Japanese Market Investor

Frank Sortino, PhD, occasionally tells the following story:

There was a pension plan that was looking for a Japanese market investor. They found one who had superior performance. Unfortunately, he also had a high tracking error. When their concerns were presented, the manager explained that much of his performance came from the fact that he avoided the banking sector; likewise, the high tracking error could be attributed to this decision. While this seemed to make some sense, the pension fund trustees were still concerned about the tracking error, to which the manager replied, "okay, how many bad banks would you like me to invest in?"

Question: should the benchmark have been void (i.e., ex) the banking sector?

[pause, while you ponder this question]

Answer: no! The manager's strategy includes the entire market; his tactical decision is to avoid the sector. The benchmark should not be adjusted for tactics.

Question: why not? Why shouldn't the benchmark be ex banking?

Answer: if it was, then it would be difficult to determine if this was a good decision on the manager's part. Perhaps banks soared during the period; by having the benchmark ex banks, this wouldn't be known. If, however, this was a good tactical move, then the excess return (as well as, presumably, the attribution analysis) would reflect this.

Question: what if the CLIENT requested that the manager avoid a particular sector; would the benchmark then be void the sector?

Answer: yes! Because now, the strategy excludes the sector, and the benchmark should reflect the strategy.

Question: other than buying bad Japanese banking stocks, might there have been another solution to the prospect's concerns with the high tracking error?

Answer: yes! Why not run the tracking error against the benchmark without banks? Granted, the benchmark for return comparison (and for all other statistics, including tracking error) should include banking; but for the purpose of determining whether this tactical decision was the source of the high tracking error, such an analysis would be helpful. Going forward, the manager could report both versions of tracking error to the client.
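As a sketch of this dual-tracking-error idea, assuming monthly returns and the usual annualized standard deviation of active returns (all numbers hypothetical):

```python
import statistics

def tracking_error(port, bench, periods_per_year=12):
    """Annualized tracking error: sample std dev of active returns."""
    active = [p - b for p, b in zip(port, bench)]
    return statistics.stdev(active) * periods_per_year ** 0.5

# Hypothetical monthly returns for the manager and two flavors of benchmark
port        = [0.020, -0.010, 0.030, 0.010, -0.020, 0.040]
bench_full  = [0.010,  0.010, 0.010, 0.020, -0.030, 0.020]  # full market
bench_exbnk = [0.020, -0.005, 0.025, 0.010, -0.020, 0.035]  # same index ex banks

te_full = tracking_error(port, bench_full)
te_exbnk = tracking_error(port, bench_exbnk)
# A far smaller TE versus the ex-banks index points to the sector bet
# as the main source of the headline tracking error
assert te_exbnk < te_full
```

The official statistics would still be computed against the full benchmark; the ex-banks figure is purely diagnostic, to show the client where the tracking error comes from.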

Related example: the composite description of one of our GIPS(R) verification clients reflects exposure to stocks, bonds, and commodities. The benchmark should include the strategic allocation for each asset class. However, when we look at a representative portfolio, we discover real estate (through REITs), too. Should the benchmark include a representation from this asset class?

Answer: only if real estate is, in fact, part of the strategy. If, however, it's a tactical move, to take advantage of a short-term opportunity, then we would not expect to see it represented. But if we find that, going forward, real estate is consistently present, we would argue that it has become part of the strategy (i.e., a broadening of the strategy), and would now need to be included.

Make sense? Please chime in!

Monday, January 7, 2013

An example of why we don't annualize for periods less than a year

This weekend's WSJ reports that "U.S. stocks ended the first week of 2013 up 3.8%."

Some might want to annualize this figure, to get a sense of what the year will look like. To do so, we can simply raise 1.038 to the 52nd power, and subtract one.

And when we do this we get 595 percent!

What a great year 2013 will be.

Except, of course, that this assumes the remaining 51 weeks will perform as the first week did. Which, of course, is a tad unlikely. To annualize for periods of less than a year violates the rule that past performance is no indication of future results, because we are taking past performance and applying it to the future.
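The compounding arithmetic above, for anyone who wants to reproduce it:

```python
# The arithmetic from the post: compounding one week's 3.8% over 52 weeks.
weekly = 0.038
annualized = (1 + weekly) ** 52 - 1
print(f"{annualized:.0%}")  # 595%
```

Which is precisely why the rule exists: the formula is trivially easy to apply, and trivially easy to mislead with.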

We see folks annualize non-performance figures all the time. For example, a few years back, when the Yankees' Alex Rodriguez began the year hitting an extremely high number of home runs, someone annualized the figure to show that, if he continued at that pace, he would set a new single-season record by quite a margin. He didn't.

Sports folks aren't under the same rules that performance folks are; we know the rule: don't annualize for periods less than a year.

p.s., where's my clipart? The blog software is apparently having problems, so there won't be any until it's fixed. DARN!

Thursday, January 3, 2013

Are GIPS verifiers required to search for fraud?

Over the past few years there have been at least two lawsuits filed, where an asset manager had committed fraud, either with a Ponzi scheme or some other means, and where the firm had claimed compliance with GIPS(R) (Global Investment Performance Standards). When someone suffers, financially or in other ways, it is often the case that they use a "shotgun" approach to be compensated for their loss: that is, they will sue whoever they can, even if their relationship to the crime/infraction/accident is tenuous. And so, it is perhaps not surprising that in these cases, the claimants have gone after the firms' GIPS verifiers, for failing to detect the fraud.

When this first surfaced I was asked by one publication what my thoughts were. While recognizing that fraud detection isn't part of the verifier's mandate, if a verifier stumbles upon something that looks improper or illicit, I suspect it would be reasonable to expect them to pursue the matter and report it to the appropriate authorities. Consequently, if fraud is discovered or suspected, it would be appropriate and, arguably, expected that the verifier take some action. A representative from the CFA Institute responded that verification isn't designed to detect fraud, a fairly black-and-white statement with which I do not disagree.

The defendant in one lawsuit is seeking a "summary judgment," meaning they are arguing that the plaintiff lacks enough of a case for the suit to move forward. They have raised several points, including a claim that they are not liable for "professional negligence," since they are not responsible for detecting fraud.

The plaintiff, on the other hand, argues that based on the work the verifier is to do in the course of the verification, that they should have discovered fraud. Further they state that the fact that they weren't hired to detect fraud (who would be?) doesn't excuse them from the responsibility to report it.

The legal authority reviewing these materials rejected the defendant's argument that fraud discovery was beyond the scope of the engagement, and so the defendant appears not to have been successful, at least on this point, in achieving their goal of having the case dismissed.

For verifiers this may ultimately be a "landmark case," if the courts hold that the verifier has an expected responsibility to detect fraud. Granted, this suit was filed at the state, not federal, level, and so its weight as precedent in other cases, in other states or countries, may be limited, though I suspect that future plaintiffs and their attorneys will at least reference it in their arguments.

It's time for the GIPS verifier subcommittee to address this topic outright: to state clearly what the verifier's responsibility is for fraud detection. Without such a formal statement, verifiers may have little to stand on.

I, for one, feel that verification should not be expected to detect fraud. That being said, I also feel that the verifier must be sensitive to the potential for fraud, and if they discover it, to take appropriate action. I am confident that the subcommittee can come up with language that is appropriate. To do nothing, especially given this and other cases which are currently being pursued, would be a disservice to our industry.

Wednesday, January 2, 2013

Should there be a minimum number of months to calculate standard deviation?

You're probably familiar with the expression "just because you can do something, doesn't mean you should." There have been several times where this saying has come in handy, and this post deals with one more case: standard deviation.

In performance measurement we can calculate standard deviation across accounts within a single time period, to measure dispersion. For example, when reporting your 2012 composite return in your GIPS(R) (Global Investment Performance Standards) composite presentations, if you have six or more accounts that were present for the full year, you're required to report a measure of dispersion, and standard deviation is often the measure of choice. And so, standard deviation can serve as a measure of dispersion.
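As a sketch, here is the equal-weighted version of this dispersion measure; the account returns are invented, and whether a firm uses the population or sample form is its own methodological choice:

```python
import statistics

# Hypothetical full-year returns for the six accounts in a composite
account_returns = [0.071, 0.065, 0.080, 0.058, 0.074, 0.069]

# Equal-weighted standard deviation as the internal dispersion measure
dispersion = statistics.pstdev(account_returns)  # population form
print(f"{dispersion:.2%}")  # 0.69%
```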

Standard deviation can also be used to measure volatility (or variability, if you prefer) across a time period. GIPS now requires compliant firms to report an ex post (i.e., backward looking), annualized, three-year standard deviation on an annual basis, for the composite and its benchmark, measured against the prior 36 months. If the firm does not have returns for the prior 36 months, it is exempt from reporting it. Thus, standard deviation can also serve as a measure of volatility or variability, and in this form it acts as a risk measure.
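A minimal sketch of the trailing 36-month calculation, assuming monthly returns and the common square-root-of-12 annualization:

```python
import math
import statistics

def annualized_3yr_std(monthly_returns):
    """Ex post, annualized standard deviation over the trailing 36 months."""
    if len(monthly_returns) < 36:
        raise ValueError("need at least 36 monthly returns")
    trailing = monthly_returns[-36:]
    return statistics.stdev(trailing) * math.sqrt(12)
```

The guard clause mirrors the GIPS exemption: a firm without 36 months of returns simply doesn't produce the number.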

I recently read Michael J. Mauboussin's new book, The Success Equation: Untangling Skill and Luck in Business, Sports, and Investing. He writes about the problems associated with relying on a small sample size. He cites examples which other authors, too, have referenced. For example, it is often the case that small counties in the United States will exhibit the lowest rate of incidence of certain forms of cancer, which may prompt some to think that moving to smaller counties might be best for their health. Until they also learn that the highest incidence of these forms of cancer is also found in small counties. The point: we often see outliers arrive in smaller sample sizes.

This caused me to think about standard deviation. I am sometimes asked whether it is appropriate for a firm that doesn't yet have 36 months of returns to show standard deviation for the period it does have returns for; for example, if it has 12 or 24 months. I usually say that I think it's fine to do this. But I'm now wondering: is it?

Standard deviation assumes a normal distribution, and in statistics the common rule of thumb is to have a minimum of 30 observations in the calculation. The problem with a smaller sample size (12 or 24, for example) is that the results may be misleading.
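A quick simulation illustrates how much noisier a 12-month standard deviation estimate is than a 36-month one; the return distribution here is invented purely for illustration:

```python
import random
import statistics

# How unstable is standard deviation estimated from only 12 months
# versus 36? Draw monthly returns from one invented distribution.
random.seed(1)
TRUE_SIGMA = 0.04  # "true" monthly volatility

def estimated_std(n_months):
    returns = [random.gauss(0.007, TRUE_SIGMA) for _ in range(n_months)]
    return statistics.stdev(returns)

est_12 = [estimated_std(12) for _ in range(5000)]
est_36 = [estimated_std(36) for _ in range(5000)]

# The 12-month estimates scatter far more widely around the true 4%
assert statistics.pstdev(est_12) > statistics.pstdev(est_36)
```

Two firms with identical "true" volatility could thus report quite different 12-month standard deviations through sampling noise alone, which is exactly the law-of-small-numbers trap.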

In our industry, the "general rule" is not to annualize returns for periods less than a year: this isn't because it's a small sample size (although perhaps that might actually be a valid reason, too), but because it causes you to use past performance to predict future results. Should there be a similar rule not to report standard deviation in cases where the firm doesn't have at least 30 months of returns?