You're probably familiar with the statistical concepts of Type I and Type II errors. One way to look at them is that one addresses the case where you think you're right, but you're wrong; the other, the case where you think you're wrong, but you're actually right.
A thought occurred to me last night that the classic Clint Eastwood movie Dirty Harry shows this in two parts (warning, graphic content):
In one case, the criminal thought that Harry had fired only five bullets, but he was wrong: Harry had fired six, and his gun was empty. In the other case, the criminal thought Harry had fired six bullets, but he was wrong there, too, as Harry had one bullet left. But which was the worse of the two errors?
In evaluating Type I and II errors, it's helpful to investigate the impact of each, to determine which, if we had to choose, we'd prefer to make.
I'm finalizing a study on attribution, where, among other things, I've found cases where holdings-based attribution can cause effects to show the wrong sign. For example, we might see a positive allocation effect, when it should be negative, or a negative allocation effect when it should be positive. Both are errors, but they mean different things.
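To make the sign-flip idea concrete, here's a minimal sketch using the standard Brinson allocation-effect formula, (w_i − W_i) × (B_i − B). The numbers are invented for illustration; the scenario assumes a manager who starts a period underweight a segment but buys into it intraperiod, so a beginning-of-period holdings snapshot and a transaction-aware average weight tell opposite stories:

```python
# Hypothetical illustration of how holdings-based attribution can flip
# the sign of the allocation effect. All figures are made up.

def allocation_effect(port_weight, bench_weight, seg_bench_return, total_bench_return):
    """Brinson allocation effect for one segment: (w_i - W_i) * (B_i - B)."""
    return (port_weight - bench_weight) * (seg_bench_return - total_bench_return)

bench_weight = 0.30   # benchmark weight in the segment
seg_return   = 0.05   # benchmark return of the segment
total_return = 0.02   # total benchmark return (segment outperformed)

# Holdings-based attribution samples weights at the period start...
start_weight = 0.10   # snapshot: manager looks underweight
# ...but intraperiod buying means the effective exposure was higher.
avg_weight   = 0.40   # transaction-aware weight: actually overweight

holdings_based    = allocation_effect(start_weight, bench_weight, seg_return, total_return)
transaction_based = allocation_effect(avg_weight, bench_weight, seg_return, total_return)

print(f"holdings-based:    {holdings_based:+.4f}")     # negative: snapshot says allocation hurt
print(f"transaction-based: {transaction_based:+.4f}")  # positive: the decision actually helped
```

With these numbers the snapshot reports −0.0060 while the transaction-aware figure is +0.0030: same decision, opposite sign, which is exactly the kind of error the study examines.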
If we mistakenly report an allocation decision that actually hurt performance as positive (i.e., contributing to performance), then we're misrepresenting our skill to the client, telling them that we did something right when we didn't.
If we report allocation as a negative, meaning it hurt performance, when in reality it was positive, contributing to the outcome, then we're hurting the manager. One might even suspect that a manager could be terminated for failing when they actually succeeded.
Again, both are errors, but which is worse? I guess it's a classic case of, it depends.