Wednesday, December 16, 2009

Interaction effect: show it or hide it?


One of the controversial topics in performance attribution has to do with interaction. This effect exists in several models, but we'll limit our discussion to its presence in the Brinson-Fachler model. Recall that there are three effects in all: allocation, selection, and interaction (the formulas are shown below).

The interaction effect represents the impact of the allocation and selection decisions interacting with one another. Several good reasons have been offered for not showing interaction; when it isn't shown, we typically change the weight in the selection formula from the benchmark weight to the portfolio weight, meaning that the selection decision is expanded to include interaction (though this is rarely stated as such).
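For reference, and in case the exhibit in the original post doesn't display, the Brinson-Fachler effects for sector i take their standard published forms, where w is a weight, R a return, the superscripts P and B denote portfolio and benchmark, and R^B is the overall benchmark return:

  Allocation:  A_i = (w_i^P - w_i^B) × (R_i^B - R^B)
  Selection:   S_i = w_i^B × (R_i^P - R_i^B)
  Interaction: I_i = (w_i^P - w_i^B) × (R_i^P - R_i^B)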



If we reflect on what the possible results can be with interaction, we conclude the following (a short numerical sketch follows the list):
  • overweighting (positive) times outperformance (positive) = positive result
  • overweighting (positive) times underperformance (negative) = negative result
  • underweighting (negative) times outperformance (positive) = negative result
  • underweighting (negative) times underperformance (negative) = positive result.
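To make the sign arithmetic concrete, here is a minimal Python sketch; the case labels and the numbers are hypothetical, and the interaction term is simply the active weight times the excess return, as in the formulas above:

```python
# Minimal sketch of the interaction sign arithmetic (hypothetical numbers).
# active_weight = portfolio weight minus benchmark weight
# excess_return = portfolio sector return minus benchmark sector return
cases = [
    ("overweight / outperform",    +0.05, +0.02),
    ("overweight / underperform",  +0.05, -0.02),
    ("underweight / outperform",   -0.05, +0.02),
    ("underweight / underperform", -0.05, -0.02),
]

for label, active_weight, excess_return in cases:
    interaction = active_weight * excess_return  # the interaction term
    sign = "positive" if interaction > 0 else "negative"
    print(f"{label:28s} interaction = {interaction:+.4f} ({sign})")
```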
One argument I posit for showing interaction is that not doing so burdens the selection effect with negative results it doesn't deserve (e.g., when we have outperformance but underweighting). The response might be "well, in the end it will all work out, because there are times when selection will get a positive interaction effect when it's undeserved." We used to hold this view almost universally regarding returns: we thought the mid-point Dietz method, for example, was perfectly acceptable, because the times we penalized the manager by treating a late-arriving flow as if it had been invested for half the period would be counterbalanced, at some point in the future, by the benefit of an early flow also being counted for only half the period. But we've wised up and now promote much more accurate methods.
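To illustrate the returns analogy with a hypothetical worked example (the numbers are made up; the formulas are the standard mid-point and Modified Dietz forms, using one common day-count convention), consider a large contribution that arrives near the end of a month:

```python
# Hypothetical example: a 500,000 contribution received on day 27 of a
# 30-day month. The mid-point assumption counts it as invested for half the
# period; the day-weighted (Modified Dietz) approach weights it by the
# fraction of the period it was actually held.
begin_value = 1_000_000.0
end_value = 1_530_000.0
flow = 500_000.0
flow_day = 27
days_in_period = 30

gain = end_value - begin_value - flow  # 30,000 of investment gain

midpoint_dietz = gain / (begin_value + 0.5 * flow)

day_weight = (days_in_period - flow_day) / days_in_period
modified_dietz = gain / (begin_value + day_weight * flow)

print(f"Mid-point Dietz return: {midpoint_dietz:.2%}")  # about 2.40% (manager penalized)
print(f"Modified Dietz return:  {modified_dietz:.2%}")  # about 2.86%
```

The late flow barely had time to earn anything, yet the mid-point assumption dilutes the return as if the money had been invested for half the month.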

During our recent Trends in Attribution (TIA) conference, one panelist pointed out the "flaw" of underweighting times underperformance: a positive result. What DOES one make of this? I think it's easy, after some recent reflection: this shows that the allocation decision was a wise one! That is, they underweighted at a time when there was underperformance (would you propose to overweight?).

In an article I wrote on this topic, I proposed that if you want to "eliminate" the interaction effect, you create a "black box" that analyzes the interaction effect when it shows up and allocates it in a conscious and methodical way. I still hold to this belief and to the value of the interaction effect, and I oppose any arbitrary assignment to selection or allocation. Space doesn't permit much more at this time, but perhaps I'll take this up again at a later date.
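As a rough sketch only (the function names, numbers, and the specific rule below are mine for illustration, not the rules from the article), the mechanics might look like this: compute the standard effects, then fold each sector's interaction term into allocation or selection according to an explicit, documented rule rather than by default. The rule shown is consistent with the example discussed in the final comment below (a poor allocation call combined with good selection charged to allocation):

```python
# A rough sketch of one possible "black box" rule (not the article's actual
# rules): compute the standard Brinson-Fachler effects, then fold the
# interaction term into allocation when the manager underweighted the sector,
# and into selection otherwise.

def brinson_fachler(wp, wb, rp, rb, rb_total):
    """Standard single-period Brinson-Fachler effects for one sector."""
    allocation = (wp - wb) * (rb - rb_total)
    selection = wb * (rp - rb)
    interaction = (wp - wb) * (rp - rb)
    return allocation, selection, interaction

def fold_interaction(wp, wb, rp, rb, rb_total):
    """Eliminate the displayed interaction effect via an explicit rule."""
    allocation, selection, interaction = brinson_fachler(wp, wb, rp, rb, rb_total)
    if wp < wb:
        # Underweight: the weighting call drove the interaction result,
        # so charge (or credit) it to allocation.
        allocation += interaction
    else:
        selection += interaction
    return allocation, selection

# Hypothetical case: a poor allocation call (underweight, 8% vs. 12%) combined
# with good selection (6% vs. a 4% sector benchmark return; 3% total benchmark).
alloc, sel = fold_interaction(wp=0.08, wb=0.12, rp=0.06, rb=0.04, rb_total=0.03)
print(f"allocation (incl. interaction): {alloc:+.4%}")  # -0.12%
print(f"selection:                      {sel:+.4%}")    # +0.24%
```

The point of the sketch is simply that the total excess return is unchanged; only the label attached to the interaction term is decided, consciously, by the rule.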

Spaulding, David. "Should the Interaction Effect be Allocated? A 'Black Box' Approach to Interaction." The Journal of Performance Measurement, Spring 2008.

12 comments:

  1. Stephen Campisi, CFA - December 16, 2009 at 9:42 AM

    David's analysis is crisp and clear, perhaps one of the best and most intuitive defenses of the interaction effect that I've seen. I was inspired to write my article "Debunking the Interaction Myth" after reviewing David's article defending the interaction effect. And he and I have been enjoying our debate on this topic over the past five years.

    In spite of his spirited defense, his premise is nonetheless flawed. In my opinion, this is really a case of circular reasoning. It begins with the assumption that the benchmark weighting is the correct weighting to use in calculating the selection effect. Unfortunately, this assumption is incorrect because it does not reflect the decision process of the investment manager. And remember that attribution is supposed to evaluate the impacts of the manager's decisions.

    The Brinson model assumes that the manager has two active decisions: sector weightings and the decision to implement these active weightings either passively or actively. Since the manager has made an active sector decision, we cannot go back and pretend that this decision was not made, which is what happens when you use a benchmark weighting to calculate the selection effect. This is like a "critical path" of decisions, and we must respect a decision that has been made, and make that decision consistent with the other decisions in the overall decision model. The interaction effect is a result of a logical error in the model because it fails to respect the allocation decision. Something cannot be itself and the contradiction of itself (thesis and antithesis). So, the sector weighting cannot simultaneously be active and passive.

    Second, the practical reality is that you must make an allocation to a particular security if you wish to invest in it. In fact, the active decision is not simply which issue to buy, but how much to buy. So, the amount of an issue held is an inseparable part of the issue selection process; it is not a separate decision - it is part of the selection decision. Fundamental indexes (which hold the same issues as their cap-weighted counterpart indices) are a good example that the active decision process involves relative issue weightings and not simply selection of the names in the portfolio.

    This approach also reconciles attribution for "stock pickers" who often have sector weights that are byproducts of the issue selection process. While they may not be making active sector bets, they do have different sector weights from the benchmark, and these differences are partially responsible for their different relative returns. By using the portfolio weight, we integrate these two factors of portfolio performance.

    After 20+ years our clients are still scratching their heads over the interaction effect. It doesn't make sense to them because it just doesn't make sense - and no amount of debate will change this. It's not that we "just don't get it." Rather, it just doesn't work.

    Again, the decision to buy an issue always includes the decision of how much to buy. That's how the investment process works. But don't take my word for it; go ask a portfolio manager. (In this case, you just did.)

  2. I am not surprised that my learned friend has chimed in on this. The Brinson model does an excellent job of separating two distinct decisions; these decisions (for a top-down manager) aren't made simultaneously, but in sequence. Including the portfolio weight in selection muddies this effect. If we're only dealing with a single manager, then there's probably not a lot of harm, other than perhaps providing some misleading information. But if we truly want to analyze the two decisions separately, we can't do that by including the interaction effect with selection. I (as I wrote in my article) am perfectly fine not showing interaction...just group it where it belongs based on some sound logic, not an arbitrary decision. The fact that clients "just don't get it" isn't a reason to give them erroneous information.

  3. David,

    Whilst I agree that clients "just don't get it" is no justification not to do something, it is a symptom that something is wrong. By the way, it's not just the clients who don't get it; asset managers don't get it either. This is no surprise: it's not a factor that managers manage - it drops out of the maths.

    I agree with everything Steve says. Interaction is not included with the stock selection effect arbitrarily - it's because that's the way most managers manage their money: they allocate money to sectors and then they pick stocks.

    Managers simply do not underweight a sector because they expect underperformance within that sector. They underweight a sector because they expect that sector to underperform other sectors. That's the decision that should be measured.

    Actually, by not allocating interaction to the proper factor you risk losing information. I have seen interaction:

    a) Not shown
    b) Allocated 50:50 between selection and allocation
    c) Allocated by proportion
    d) Allocated to selection one month and allocation the next

    Regards

    Carl

  4. Carl,

    I love your response, especially the part about "not getting it." I guess we can agree to extend this logic to the use of geometric attribution ... sorry, a completely different topic, which we'll address at some future point.

    I guess we (you, Steve & I) can agree to disagree, for I doubt any of us is prepared to waver from our rather strongly held positions.

  5. We have left one of Dave's critical comments unchallenged and we should take a moment to evaluate it. At the TIA conference I had noted that we cannot necessarily prove that something is true (because we may have simply never seen the fact or situation that contradicts that truth). However, we CAN prove that something is NOT true by finding that single fact that disproves it - and you only need one!

    This is the example I cited which I believe "once and for all" proves that interaction is a logical flaw in the attribution model and that interaction is itself a flawed concept. As Dave stated, if one were to make two rather dreadful investment mistakes (underweighting an above-average sector while simultaneously picking the worst stocks within that sector) we would clearly have two sources of underperformance. We've all been taught that "two wrongs don't make a right." No one disagrees with this logic. So, we apply these two negative impacts via the interaction effect and as if by magic we have produced a source of OUTperformance. Amazing! Following this logic, we should simply make mistakes and then expect outperformance.

    Dave's defense is that if you are selecting underperforming stocks then it's a good thing you underweighted the sector. Interesting. This certainly minimizes the harm done, but it does not produce a positive excess return. That is, it can't turn underperformance into outperformance, any more than we can turn straw into gold, no matter how skillfully we spin it.

    Logic and an understanding of the investment process have to guide the attribution process. The "maths" simply confirm the answer; they don't lead us to any truth. We need more insight and less spin.

  6. While I agree that it only takes one exception to invalidate a theory, I don't necessarily concur that this is what's happening with an underweighting-and-underperformance scenario. Underweighting in and of itself isn't a bad thing; in fact, there are times when it is a very good thing. All we know when we see underweighting is that we have a negative value; couple that with underperformance and we have two negatives resulting in a positive effect. There isn't necessarily enough information available to decide whether the underweighting was good or bad unless we look at the return of the benchmark. But in this case, what we do know is that the manager underperformed the benchmark. Had the manager overweighted instead, we would have a negative effect.

    Again, my advice is very simple: if you want to eliminate the interaction effect, that's fine with me...just don't slam it into selection. Rather, take a moment to evaluate how best to distribute it. This is a simple process that can be automated.

  7. Sorry, but the fact remains that the majority of the arguments in favor of recognizing an interaction effect are based on little more than this: the term originally called "Other" is found in a well-publicized article. Such undefined residuals are usually an indicator of a poorly defined attribution model, especially when these residuals can be the largest source of the excess return we are trying to explain. Remember that a residual is essentially an unexplained effect, so any model that generates a large residual and fails to explain the relative performance is likely to be a flawed model. Such is the case with the interaction effect (really the "Other" effect).

    We need the insight to recognize the valid parts of any model and the courage to reject the flawed parts. Otherwise, we will make no progress in our search for performance models that truly represent the investment process that they try to evaluate and explain. Let's remember that one of the authors of the Brinson-Fachler model (1985) followed this initial model with an even more flawed model the following year (Brinson, Hood and Beebower, 1986). That model took any positive-returning sector and evaluated it as a source of excess return, even if that sector produced a below-average return. Once again, by applying simple common sense one would reject this model as flawed. After all, when did overweighting a below-average return sector become a source of excess return? And yet, we have to bear the rather tortured explanations of why even this model is still valid. So, we see yet another example of a published model that is flawed. Frankly, it's a waste of good intellect and persuasion to continue to explain and defend these flawed methodologies. It's time we moved on and addressed the truly important issues facing investors.

  8. I'm taking back what I wrote to Dave several nights ago. I don't agree with everyone's opinions about attribution here. All three writers assume that the final users (clients / portfolio managers) of the attribution report understand what the numbers really mean. Yes, there are some people who do understand, but most individuals take numbers at face value or create their own assumptions about how the numbers are derived.

    Can someone quantify luck in a portfolio and show this in an attribution report? There is no flaw in the math as long as the final users understand the purpose and limitations of the model. I believe the issue resides in the perception, interpretation and knowledge (which are lacking) at the user end.

    When was the last time we heard someone tell a client, "You don't know what you're talking about"? We probably hear this type of comment more often: "I'm sorry, but could you elaborate on your comment so I may further assist in meeting your needs!"

  9. There is no question that the recipients need to understand what's going on. And to say that the users can't understand the interaction effect reflects poorly on either the provider or the recipient. I believe that an explanation CAN be provided.

  10. Sorry, Dave,
    I believe providing an interaction effect explanation would only cause more confusion. It is better to hide it than to show it. I have already experienced difficulty explaining the math part to people (shown on this blog). Using the logic you have in the paper you wrote in the past would, I believe, only cause more confusion (not sure if you got a chance to re-read my past response).

    I find it difficult to think any recipients would accept an explanation of what interaction means. I believe Stephen already defined the interaction effect as a "logical error" and as "Other" (which means model residual to me). My definition of the interaction effect is nothing more than the difference between the portfolio return and the return effects of stock selection and allocation. In other words, the interaction effect is nothing more than the portfolio's uncontrollable luck. Do you think any portfolio manager would tell this to their client? Maybe using Stephen's responses is better.

    Regardless of whether the model has flaws or doesn't reflect the portfolio manager's strategy, we should follow a process I learned many moons ago in HIGH SCHOOL (called KISS - Keep It Simple, Stupid!!). In a relative analysis, we would look at the total return of the portfolio and benchmark. Why can't we just break down the portfolio and benchmark into different groupings and then take the differences (FactSet calls this the Variation model)? I proposed this in the past. The response was, "We need something that the client would understand!"

  11. Dear David

    Could you please explain the following: in an investment team, suppose we have an asset allocation specialist, a stock picker, and a hypothetical "interaction effect" specialist, whose duties were supposedly to add value to investment performance like the other two. How would she go about doing her work? It seems as though she would have to know, a priori, how well her colleagues were going to do before adding weights to the portfolio... How would this creature go about doing her work? Or is it impossible for her to act, as whatever value she has to add would simply issue from the actions of the others... an outcome, as it were? Alex Pestana, Cape Town

  12. Alex, an interesting question. The only way such an individual could add value would be if they could know, in advance, the outcomes from the other two members of the team, and override them if a negative result would follow (e.g., if the allocator makes a poor choice while the selector is effective, which would cause a negative interaction effect). I am totally in favor of allocating the interaction effect to the other two effects, but only if it's done after some analysis to determine the proper assignment of the results. For example, in the case where a bad allocation is combined with good selection, the negative interaction effect would be assigned to the allocator. My article, which I reference in the post, explains this a bit more. Those firms who choose to ignore the effect typically assign it, in all cases, to selection, which, again using the above case, would cause the selector's contribution to drop, even though they had done a good job.

