Thursday, October 31, 2013

If God didn't want us to lie, He wouldn't have invented politicians

I've been wanting to use the line in today's heading, and stumbled upon a way (though it may be a stretch). I came up with it recently, and think it's clever (but I'm biased a bit). As a former politician, I am fully aware of the linkage between the art of lying and politics. But enough of that. How did I decide I could use it here?

Well, I had a conversation recently with someone from a public pension fund, who told me that she is under some pressure from one of the state's elected officials to bring their group into compliance with GIPS(R) (Global Investment Performance Standards). I thought this was excellent.

Although I haven't yet had the chance to chat with this fellow, I'm guessing that the recognition that the Standards promote ethical behavior, full disclosure, and transparency, and are widely seen as best practice, is reason enough to justify this step.

It's way too early to tell whether compliance by asset owners will catch on, but here's at least one case where it's likely.

Thursday, October 24, 2013

DON'T ONLY "show me the money"

A memorable line from the Tom Cruise/Cuba Gooding, Jr. movie, Jerry Maguire, is "show me the money," uttered multiple times by Gooding's character. As a result, it has become part of our society's broader lexicon.

This reinforces the point that we occasionally look more at the money than we ought to, as I pointed out in my post about Warren Buffett's post 2008 success.

I visited an asset manager's website recently and found that they declared the following (and I am paraphrasing a bit): we turned $1 million into roughly $20 million over the past 29 years.

Impressive, right? Wow! To turn $1 million into $20 million?

Although I don't have the exact dates, I estimated that the annualized return was 9.89 percent, which also may sound impressive. This was an equity strategy, so let's see how the S&P 500 did. I called upon my friend, Steve Campisi, CFA, who reported it was 10.10 percent! This firm's return isn't bad, but it isn't what the index did. Again, perhaps with some adjustments to the dates we may see that the manager did better. But using only dollars and not showing a benchmark is, in my view, a "no-no."
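The arithmetic behind such an estimate is simple: the annualized (geometric mean) return implied by a cumulative growth multiple. Here's a minimal sketch; note that with the round inputs of exactly 20x over exactly 29 full years the formula gives about 10.9%, so the 9.89% estimate above evidently rests on different assumed dates or amounts (which is exactly the point: the inputs matter).

```python
def annualized_return(growth_multiple: float, years: float) -> float:
    """Annualized (geometric mean) return implied by a cumulative growth multiple."""
    return growth_multiple ** (1.0 / years) - 1.0

# Sanity check: quadrupling your money in 2 years is 100% per year.
assert abs(annualized_return(4.0, 2) - 1.0) < 1e-12

# Turning $1 million into roughly $20 million over roughly 29 years:
r = annualized_return(20.0, 29)
print(f"{r:.2%}")  # about 10.9% with these round inputs
```

Either way, the annualized figure only becomes meaningful once it's placed next to a benchmark computed over the identical period.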

Monday, October 21, 2013

A webinar like no other ... kind of scary!

This Halloween (October 31) at 11:00 AM (EST), John Simpson, Jed Schneider, and I will host our monthly webinar. This month has a theme ... I wonder if you can figure it out? It's titled:



We will cover a lot of scary and interesting stuff about performance and risk measurement. It will be fun and informative! Hope you can join us.
The Spaulding Group's monthly webinar series is intended as an inexpensive way to provide quality training and education to your staff and colleagues. For many, it's become a "lunch and learn" session.
To learn more or to sign up, please contact Jaime Puerschner at 732-873-5700 or by email at JPuerschner@SpauldingGrp.com.
You'll have fun ... we guarantee it!

p.s., wearing costumes is optional for this program.
p.p.s., this webinar is free for our verification clients and members of the Performance Measurement Forum.

Sunday, October 20, 2013

Baseball and material errors

It is rare that people keep track of errors, but baseball, with its love of statistics, does just that: goof during a game and it will usually get recorded.

In last night's Detroit Tigers vs. Boston Red Sox game, we were treated to three errors, two by the same person. And, in my view, there was one more, though it was ruled a hit; we'll touch on that shortly.

In baseball, we don't distinguish among degrees of error: that is, there is no mention of one error being "material" and another "non-material." This doesn't mean that commentators, reporters, pundits, and fans won't lay the blame on someone's deeds.
 
Going into the 7th inning, Detroit was up 2-1 over Boston. In that inning, Detroit shortstop, Jose Iglesias, mishandled what might be called a "routine double-play ball," which resulted in the bases becoming loaded. The next Boston batter, Shane Victorino, hit a grand slam home run. There is little doubt that failing to "turn two" was a major factor in the game's outcome.

In the 9th inning, Detroit's Austin Jackson reached first on what was ruled an infield single; I scored it as an error, because the Boston shortstop, Stephen Drew, appeared to mishandle the ball just as Iglesias had two innings earlier. But Jackson didn't make it past second base, and Boston went on to win (5-2).

Material errors truly make a difference: they cause results to turn out differently than they would have otherwise.

p.s., It's interesting that offensive errors are not tracked. Detroit's Prince Fielder stumbled while getting back to third base and was tagged out, as the second part of a double play. This base-running error may have cost Detroit a run or two.

p.p.s., Perhaps it was fitting that Detroit's error-prone shortstop struck out for the last out of the game.

Friday, October 18, 2013

Timing & GIPS Compliance

There are probably few documents that say as much about timing as the Global Investment Performance Standards (GIPS(R)). But our focus here is more limited, and won't address everything the Standards have to say on the subject.

One of the most important questions is when should a manager become compliant? Must they wait five years (since a firm must report five years or since inception) or at least one year?

First, the issue about "five years or since inception" is often confusing: if the firm has five or more years of history, THEN they must show at least five (building to 10); however, if the firm is less than five years old, then they must report since inception (and again, build to 10 years of annual returns).

Now, to the question: ASAP! That is, as soon as possible the firm should begin to become compliant. And "why?" you might ask. Because the sooner you begin, the easier the process.
  • The firm can design its policies and procedures, and immediately begin to use them.
  • They can add accounts to composites as they are brought on.
  • They won't have to look back over history but rather will be building in "real time."
  • And, the firm can immediately take advantage of their claim of compliance, even though their history may not be extensive.
GIPS now requires firms to show "stub" periods for performance; that is, if a strategy commences during the year, the firm must report composite performance from inception through year-end. This means that firms can almost immediately have something to report. But just as I discussed earlier this week, returns for short time periods have limited value. That being said, if a firm wishes to grow its business, GIPS compliance is usually a good start. The firm can include a disclosure about the limited time being represented. To me, this would be in keeping with "full disclosure," but it isn't a requirement.

Wednesday, October 16, 2013

Timing & Risk Reporting

How long before you report risk measures as part of your performance?

Returns can be shown for a day, a few days, a week, a month, a quarter, a year, etc. Risk statistics typically rely upon a series of returns. But how many and which ones?

The standard seems to be 36. This conforms well with the usual expectation that we have at least 30 observations for standard deviation to be meaningful. But 36 what? Can we use days, for example?

Well, on the surface that would seem to make some sense, yes? Why not start reporting risk after a strategy or portfolio or composite has been managed for a month or so? We can run numbers, right?

Well, yes, we can. But we generally think of daily returns as being "too noisy." What do we mean by this? Essentially that the fluctuations are too extreme and don't necessarily reveal anything meaningful. We can see choppy days that smooth out a bit when we shift to months. If we were to find days acceptable, why not hours? We could measure hourly returns and track the prior 36; or, the prior 36 minute returns; or, to be REALLY extreme, the prior 36 seconds. This is all possible, but would serve no purpose other than to confuse.

I think we'd actually prefer to use quarters, but the 30 observation rule would mean we couldn't begin reporting until we've been at it for 7 1/2 years; that seems a bit long. Thus, we've pretty much settled on months.

Could we run our risk measures using fewer than 30 observations? Yes, of course we could; we could, for example, use just a few months. But to do so would mislead, I believe. To show someone a return for the prior quarter, for example, along with its associated standard deviation (based either on days or months) would suggest that the number has meaning, when, in reality, it doesn't; the period just isn't long enough (or, if days were used, as noted above, it's too noisy). One must guard against reporting anything that could be misconstrued; we can't mislead our clients or prospects.

What if the client insists on seeing risk measures immediately? Then, of course, do what the client wants; but also include a disclaimer regarding the shortcomings of using a short period to draw any sort of viable or valid conclusion.

GIPS(R) (Global Investment Performance Standards) now requires compliant firms to report an ex post, annualized, 36-month standard deviation. We again see the use of 36 months. Could you report for shorter periods? Again, of course you could. But, I think it's better not to, unless you include a disclaimer; something like "we are showing you a 24-month cumulative return, and so decided to show its associated 24-month annualized standard deviation, but given that this is for a relatively short period, please don't put a lot of emphasis on it, because the statistic doesn't have a whole lot of value until we reach 30 months."
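For what it's worth, the 36-month figure is commonly computed as the sample standard deviation of the 36 monthly returns, scaled by the square root of 12 to annualize. Here's a minimal sketch (the scaling convention shown is common practice, not a quotation from the Standards, and the return series is invented):

```python
import math

def annualized_std_dev(monthly_returns):
    """Annualized ex post standard deviation from a series of monthly returns,
    using the common sqrt-of-12 scaling convention."""
    n = len(monthly_returns)
    if n < 2:
        raise ValueError("need at least two observations")
    mean = sum(monthly_returns) / n
    # Sample (n-1) standard deviation of the monthly series
    monthly_sd = math.sqrt(sum((r - mean) ** 2 for r in monthly_returns) / (n - 1))
    return monthly_sd * math.sqrt(12)

# 36 months of illustrative returns alternating +2% and -1%
returns = [0.02, -0.01] * 18
print(f"{annualized_std_dev(returns):.2%}")  # about 5.27% for this series
```

Nothing stops you from feeding this 24 (or 6) months instead of 36; the code will happily comply, which is exactly why the disclaimer above matters.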

Monday, October 14, 2013

A week about timing, starting with returns

It occurred to me that we could spend some focused time on the issue of time (pun intended). Let us begin with rates of return.

One of the ironies of performance measurement is that the term "time-weighting" has really nothing to do with the weighting of time; it's a term that was carried over from the 1968 BAI (Bank Administration Institute) performance standards. But time is an important component of rates of return.

We can speak of the issue of frequency of valuations. At one time it was not uncommon for firms to value their portfolios annually. Today, that may seem quite odd, but given the lack of computer power and the absence of any performance systems, asset managers relied primarily on manual calculations. Over time (that word again), we saw valuation frequency increase from annual to quarterly, then monthly; now it's typically either (a) daily or (b) whenever a large cash flow occurs. Some occasionally speak of "real time" valuations, but I think that would take this topic to an ill-advised extreme.

When should you begin to report your performance?

But for the purpose of this discussion I am not speaking of such things. Rather, my focus is on how much time is needed before one should begin to REPORT PERFORMANCE!

Let's say that you've begun a new strategy or just opened shop and have your first client. When do you begin to report your rates of return (internally, to your client(s), or to prospective investors)?

The Global Investment Performance Standards (GIPS(R)) have, in a way, answered this, at least for prospective investors, because they now require the reporting of "stub periods." That is, returns for periods less than a year. And so, if you've begun a strategy in October, we probably expect to see returns for the end of the year, starting with November or December (depending on your timing to add an account to your composite).
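A stub-period return is simply the geometric link of the monthly returns since the strategy was funded. A minimal sketch (the two monthly returns are invented):

```python
def linked_return(period_returns):
    """Geometrically link sub-period returns into a cumulative return."""
    growth = 1.0
    for r in period_returns:
        growth *= (1.0 + r)
    return growth - 1.0

# Strategy funded in October, so only November and December contribute:
stub = linked_return([0.012, -0.004])
print(f"{stub:.4%}")  # 1.012 * 0.996 - 1, about 0.7952%
```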

It seems to be fairly common practice to report monthly returns, and so, if we have a new client we will most likely be reporting returns to them almost immediately.

As far as internal reporting, many firms report daily, weekly, and/or month-to-date returns. This is fine, as it is a way to "keep your finger on the pulse" of what is going on. What is done with the information is important to consider. That is, how much importance is placed on it, how is it being interpreted, and what actions may be taken as a result?

Short-term reporting of returns is all perfectly fine, provided we understand that this information has very little meaning. It would be wrong to draw much from just a month or even a few months of returns. If they are extremely bad, perhaps we look to determine what is going wrong; but if they are extremely good, don't start celebrating just yet. You need more time to properly assess skill.

A gambling analogy

I hope I am not disparaging our profession by bringing up gambling; it seems to fit, at least in this case.

What's the worst thing that can happen to someone the first time they visit a casino? I think it's to win a lot of money. And why? Because this may make them think that
  1. They're pretty good at gambling
  2. It's pretty easy to win
  3. They have a secret strategy that no one else ever figured out.
If they win big, they'll be back. And, eventually they will lose. But, given their earlier success, they may believe that the loss was an aberration (odd, because their win was probably the aberration), and so they will continue to gamble, knowing that another big win is just around the corner.

If they were to record their wins and losses over time, chances are they'd find that, on average, they lost. But they may not realize this unless they gamble over a period of time. The casinos know the odds; they want to keep gamblers in their casinos (thus, the typical absence of windows or clocks) and to keep them coming back (thus the "comps"), knowing that the winners will, on average, become losers. This has to be true; otherwise, where did the money for the fancy and lavish buildings come from?

How much reliance should be placed on short-term investment performance?

The same can be said for investing. Perhaps over a short time someone does extremely well with their investing. There's a reason the industry generally disallows annualizing returns for periods less than a year: a good month or two, annualized, presents a return that assumes the performance will continue, when there is no assurance that it will (thus the standard line, past performance is no indication of future results).
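To see the problem concretely, consider what annualizing does to one good month: it implicitly assumes that month will repeat eleven more times. A sketch (the 5% month is invented):

```python
def annualize(cumulative_return: float, months: float) -> float:
    """Annualize a cumulative return earned over `months` months.
    (The industry rightly discourages doing this for months < 12.)"""
    return (1.0 + cumulative_return) ** (12.0 / months) - 1.0

# A single (hypothetical) 5% month, annualized, implies nearly 80% per year:
print(f"{annualize(0.05, 1):.1%}")  # about 79.6%
```

The formula is harmless when the period is actually a year or more; applied to one month, it converts a data point into a forecast.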

If a manager has a good month, two, three, or even several more, it is probably still too early to celebrate, at least too enthusiastically. There's also a reason why institutions typically want at least five years of performance before bringing a new manager on: a short period of success may not be sustainable.

We have, on occasion, been contacted by folks who have invested their own money for a few months; they've decided they want to become GIPS compliant. And while we encourage early adoption of the Standards (we'll discuss this later this week), it may be too early for these folks to quit their day job to enter the world of professional money management.

A benchmark for timing may be the requirements for a normal distribution: in general, we want at least 30 observations. We often "round this" to 36 months, which is often the basis for risk measurement (we'll discuss this, too, this week).

In some firms, a new manager who does extremely well in a short time may be prematurely rewarded; this is partly done out of fear that this individual may go elsewhere. But will the success continue? Only time will tell!

Warning labels

Should there be a disclosure with initial short period returns? Perhaps. Something to the effect that this performance is for a short period, and may not yet reflect the true skill of the investor or the strategy; that additional time will be needed to fully gauge this success. And, that success relative to the strategy's benchmark may fluctuate over time, and that by no means should the reader expect continuous out-performance.

Time matters, even with time-weighting; it's just a matter of how much it matters.

Tuesday, October 8, 2013

Explaining what we do ... in a picture

I occasionally describe the formulas we use as a series of bifurcations, starting with time- and money-weighting. And, sometimes I begin with a graphical representation. Well, this morning, I decided to take that graphic to its nth degree, and solicited input from my colleagues, John Simpson and Jed Schneider.

This journey began with the following:

And after a few iterations, it now appears as:

 
I know ... you can't read it. But, if you click on it you can.

Is it done? No, probably not, but it will be soon. It's a series of bifurcations (and one or two trifurcations (is that a word?) tossed in), which summarizes the world of rates of return. I think it's kind of cool ... how about you?

Monday, October 7, 2013

Far be it from ME to "rain on Warren Buffett's parade," but ...

I was struck by a front page story in today's Wall Street Journal titled "Buffett's Crisis-Lending Haul Reaches $10 Billion." But it was actually the summary under "What's News" that initially got my attention: "Buffett's investments during the financial crisis have brought in $10 billion, a pre-tax return of nearly 40%."

Now, to read "40%" would get anyone's attention, except for one thing: there's something missing! And what's that?

TIME! Over what time period was this return realized? The past day, week, month, year, five years?

A return without time is worthless.

I became suspicious when I read that a loan of $4.4 billion "is expected to net Berkshire a profit of at least $680 million." Sorry, but I'm not really that impressed with these numbers.

We find the following chart included in the article

which highlights six of the companies Mr. Buffett invested in during the crisis. We see the amount invested as well as the profit (from dividends and appreciation). On the surface, to make $9.95 billion on a $25.20 billion investment seems great, but without the element of time, what's the point?

And so, I decided to do my own analysis on the statistics provided. I calculated the cumulative and annualized returns for each investment, and compared them with the S&P 500 for the same period; and what do we see?


With all due respect to the Sage of Omaha, these returns are not terribly impressive. Unless I am missing something, for each investment the S&P 500 did better; in some cases, MUCH better.

An important point regarding my numbers: they start with the month end value for the S&P prior to the month of the initial investment and end at the end of September 2013. If profits were realized much sooner, then these returns would have to be altered. But not knowing this information, I carried it through the end of last month. Are my numbers perfect? Of course not, as I am missing some key information, but they at least do something that is critically important: include the element of time.

We can never lose sight of the fact that with returns, the associated time period must be included too; otherwise, it's a meaningless statistic. Just as hearing that some baseball player has hit a certain number of home runs means zip without knowing the length of time it took!

p.s., in addition to time, a benchmark is also critically important, to fully gauge the success of one's investing.

Friday, October 4, 2013

A geometric approach to materiality (Part III)

I didn't anticipate a third posting on this topic, but I received an interesting note from a reader that I wanted to share and comment on:

I disagree with your thoughts on using "arithmetic relative" when considering material differences in portfolio returns. By reporting returns as a percentage, this is already stating a relative figure (to the value of the portfolio). Therefore, I think the "arithmetic absolute" is a better method of determining any material difference.

I like to think of it this way: why should this decision be taken on luck? If an error occurred in a month where performance was close to stale, why should this be more material than one where performance was fortunate enough to be high (or unfortunate enough to have a large negative position)?

For example:

A fund of $100,000,000 has failed to account for a $500,000 cash flow. It has reported returns of 1% in August 2013 and 15% in September 2013. For simplicity let us assume cash flows occur only on the 1st of the month.

If the error occurred in August, the actual return would be 0.49751% (modified Dietz).

If the error occurred in September, the actual return would be 14.4335% (modified Dietz).

Using "arithmetic relative," August would show 50.249% error and September a 3.777% error. So you would probably want to state a material difference if the error occurred in August but not September. However, in monetary terms the error is the same; it’s just luck which month it happened to occur in.

If using "arithmetic absolute" and having a limit of 50 bps a material difference would be stated whichever month the error occurred in. I think this is much more consistent as the error is of the same value.

As an investor, I would be more concerned with the monetary impact. Percentages are a nice way to compare between funds, but at the end of day profit or loss is where my concern would lie. Hence, I still believe "arithmetic absolute" is more appropriate in determining material differences.
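The reader's August figures can be reproduced with a quick modified Dietz sketch. I've assumed the day-one flow gets full weight for the period; the September result can differ slightly under other day-weighting conventions, which may explain small discrepancies with the reader's numbers.

```python
def modified_dietz(bmv: float, emv: float, flow: float, weight: float = 1.0) -> float:
    """Modified Dietz return: gain over average invested capital.
    `weight` is the fraction of the period the flow was invested."""
    gain = emv - bmv - flow
    return gain / (bmv + weight * flow)

# $100M fund reported 1% for August, but missed a $500K inflow on the 1st:
bmv, flow = 100_000_000, 500_000
emv = bmv * 1.01                        # ending value implied by the reported 1%
corrected = modified_dietz(bmv, emv, flow)
print(f"{corrected:.5%}")               # about 0.49751%, as the reader states

# Arithmetic relative error: (1% - 0.49751%) / 1%, about 50.2%
relative_error = (0.01 - corrected) / 0.01
```

The arithmetic absolute error is about 50 bps in both months; it's only the relative (and geometric) measures that score August and September differently.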

This reader raises a very interesting point: yes, both the mistake and the resulting magnitude of the error are identical, at least from an absolute perspective, so why wouldn't we treat them the same?

My suggestion that arithmetic relative or, better yet (it appears), geometric is better than arithmetic absolute for determining the materiality of errors has to do with utility theory. That is, does one experience the same reaction when they see an identical difference in absolute terms (e.g., an error of 1.00%) when the returns are low (e.g., 0.25% to 1.25%) versus when they're high (e.g., 27.35% to 28.35%)? I suspect not. Going from 0.25 to 1.25 is a big jump (in relative terms), while from 27.35 to 28.35 the increase does not seem to be as great.

The point, I believe, rests on what your definition of "materiality" is. Not only do the GIPS(R) standards (rightly) fail to prescribe thresholds for materiality (leaving that properly in the hands of the compliant firm), but they also lack a definition for the term. My belief is that in the context of errors, it's a change that would cause the reader to have a different perspective on the information shown. Of course, we all react differently, so it's impossible to know for sure what this would be in every case, so we base it on our own best judgment; perhaps the "prudent man rule" applies here.

As an analogy, if your child came home and they said they got an A on an exam, but later said they were mistaken, it was actually an A- or A+, would your response be significantly different? But, if they said it was actually a C? Should the policy be consistently applied based on the magnitude of the error (50 or 100 bps, for example, in absolute terms) or the likely response to the error, using our best judgment?

When I teach our firm's attribution class I occasionally address the issue of proportionality, and use weight lifting as an example. I used to regularly lift weights, so I have some familiarity with this topic. If, for example, you're engaged in a particular exercise where you typically begin with 20 lbs, then go to 30, then to 40, you are increasing each time (in absolute terms) by 10 lbs. But, if you do a different exercise where you start with 120 lbs, and go to 130, and then 140, you are again bumping up by 10 lbs each time. But, do you think that you feel the same increase between the different weights? I strongly doubt it. Going from 20 to 40, for example, is a doubling of the weight, while going from 120 to 140 is only a small percentage increase. This analogy isn't perfect, but hopefully it helps.

In reality, if you prefer arithmetic absolute, that's fine with me; most firms seem to use this approach. Plus, it is probably easier to implement.

This exercise has allowed me to devote additional time to this rather interesting topic, to provide some examples, and to craft (what I believe is) the first attempt at a geometric approach (as noted a couple days ago, I'm sure Carl Bacon is proud, and perhaps a bit envious that he didn't think of it first!). I also want to thank our reader for submitting his comments, as they've allowed me to ponder this a bit further and offer some additional perspectives.

Care to chime in? Please do!

p.s., Sadly, I had to stop lifting weights some time ago because I was often accused of using steroids.

p.p.s., A more detailed review of this topic will be presented in this month's newsletters.

Thursday, October 3, 2013

A geometric approach to materiality (Part II)

When comparing geometric and arithmetic attribution, the following are often cited as advantages of the former:
  • it's compoundable (meaning, that it links attribution effects over time without creating temporal residuals)
  • it's convertible (that is, when we convert the returns from one currency to another, the excess return will be the same)
  • it's proportionate (meaning, it is sensitive to differences in return sizes, giving greater importance to outperformance when returns are small than when they are large).
I am not looking to debate the merits of geometric attribution in this post, as I believe there are very good reasons why arithmetic rules! But the third point speaks to the issue I raised yesterday. This occurred to me recently, and it seemed to make some practical sense, but I thought it appropriate to run some numbers.

Let's consider the cases shown in this table:


To have a threshold of 100 basis points for materiality may seem high, but I know many firms who use it.

As I suggested yesterday, the difference between the original and corrected returns means more when the returns themselves are small than when they are large; but it is impossible to show this when you are constrained by an arithmetic absolute method to test materiality; that is, when you simply take the difference between the original return and the corrected one. In this table we see that in all six cases the threshold of 100 bps is reached: but the difference between 0.25% and -0.75% surely is felt to be more significant than between 27.35% and 26.35 percent.

If we choose to go with an arithmetic relative approach we see how the sense of proportionality appears. The relative difference at the lower end is huge, because the difference between the original return (0.25%) and the corrected (1.25% in one example and -0.75 in the other) is 400 percent. But when we get to higher numbers we see this drop significantly. I am compelled to think that the 400%, though mathematically correct, is a tad hyperbolic, as one would not really think that the difference warrants such a huge score.

I am unaware of anyone who is employing the geometric method, but I think it has merit, and should at least be considered. Because the returns on the left hand side of the table are small, we see the same 1% reported as we do with the arithmetic (though in reality, if it weren't for rounding they would be a bit less). We see that the other four examples fail to meet this threshold, and I think rightly so.
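The three candidate measures can be sketched side by side; the inputs below are the low-end and high-end cases from the discussion above (original 0.25% corrected to 1.25%, and original 27.35% corrected to 28.35%):

```python
def arithmetic_absolute(original: float, corrected: float) -> float:
    """Absolute difference between original and corrected returns."""
    return abs(corrected - original)

def arithmetic_relative(original: float, corrected: float) -> float:
    """Absolute difference as a fraction of the original return."""
    return abs(corrected - original) / abs(original)

def geometric(original: float, corrected: float) -> float:
    """Geometric difference between the corrected and original wealth relatives."""
    return abs((1.0 + corrected) / (1.0 + original) - 1.0)

low  = (0.0025, 0.0125)   # 0.25% corrected to 1.25%
high = (0.2735, 0.2835)   # 27.35% corrected to 28.35%

# Both cases are a 100 bp absolute error, but they score very differently:
print(arithmetic_absolute(*low), arithmetic_absolute(*high))   # 0.01 in both cases
print(arithmetic_relative(*low), arithmetic_relative(*high))   # 4.0 vs about 0.0366
print(geometric(*low), geometric(*high))                       # about 0.00998 vs 0.00785
```

Against a 100 bp threshold, the geometric measure rounds to 1% in the low-return case (a shade below, as noted above) but falls short in the high-return case, matching the intuition about proportionality.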

I have constructed additional examples and will present them, along with additional details on this idea, in this month's newsletters. In the meantime, feel free to comment if you'd like.

p.s., Regarding the acceptability of a threshold of 100 basis points: I tend to favor lower levels (e.g., 50 bps) but accept this level, especially given that industry colleagues I respect greatly have adopted such a level. In general, I would not find it acceptable for the level to be higher.

p.p.s., My use of the adjective "temporal" in regards to residuals was to distinguish the residuals that can arise from linking arithmetic effects over time, with single period residuals that arise frequently from the use of a holdings-based attribution model.

Wednesday, October 2, 2013

A geometric approach to materiality (Part I)

I think my friend Carl Bacon will be proud of me for this post (no, I haven't been completely won over; I just see some merit in this limited case!).
 
Talk is cheap: show me the numbers!

The subject of materiality came up during this week's monthly Think Tank webinar, and it caused me to reflect upon it a bit more than I have in the past.

We often opine on different approaches to defining materiality, but until you actually look at real numbers, it is difficult to judge one approach versus another. And, as for approaches to defining materiality, I believe that there are basically two in use today:
  • Arithmetic Absolute: where you simply subtract the corrected return from the original, and if the absolute value is greater than some threshold (e.g., 50 basis points), you classify the error as being "material."
  • Arithmetic Relative: where you take the absolute difference and determine the percentage change it represents relative to the originally reported return (e.g., if the arithmetic absolute is 0.50%, and the original return is 15.00%, divide 0.50 by 15.00 to get 3.33%).
I tend to prefer relative, because it is my belief that absolute differences mean more when returns are small as opposed to when they're large. For example, let's consider age. Our younger grandson, Caden, is one year old. Actually, he'll be two on November 21, so he's roughly one year and ten months old. We normally state ages this way for young children, yes? Why? Because to only use years will mislead, as there is a big difference between a one year old and an almost two year old.

I will be 63 on November 11. If I were asked my age, would I say that I'm 62 and ten months old? Only if I were truly anal, and then I'd probably be even more granular.

Caden's brother turned four on August 1. If I mistakenly told you he was three (and at my age, senior moments occur more frequently), I'd be off by one year. Given that there is a pretty big difference in maturity between a three and four year old, we'd probably agree that this was a "material" error. However, if I mistakenly (?) told you I was 61, would that one year error be material? Probably not, since I haven't matured very much since I was 61.

And so, perhaps absolute differences aren't the best approach. What about relative? I think this is clearly superior, because there would be a 33% error in the case of my mistake about Caden's brother, Brady, and a significantly smaller number when incorrectly reporting my own age.

But then after our monthly call it occurred to me: what about geometric?

Well, this post is already a bit too long, so I will save that discussion until tomorrow!

p.s., to learn about The Spaulding Group's Think Tank, please contact Patrick Fowler.