Yesterday's WSJ had an article by Brian Costa titled "The Rookie and His Pitching Bible," about "the [New York] Mets' most promising rookie," Matt Harvey, who relies on a "pitching bible" he created and maintains with much care and attention. "[Harvey] records every mechanical adjustment he makes, even if only temporary, as a reference for the future. When he pitches well, he notes what he did right. When he pitches poorly, he types in a summary of his mistakes, be they mechanical or mental."
In the world of investment performance we refer to such analysis as "attribution," identifying and classifying what worked, and what did not. And while we don't typically refer to our attribution reports as our "performance bible," perhaps such diligence, across time, wouldn't be such a bad idea.
Evaluating what worked and what didn't over the past month of July is all well and good. But how does that compare with June, May, or April? Or last July? Is there a pattern? Have skills shifted? Do certain decisions work better at some times than at others?
A temporal evaluation, such as Harvey's, builds on the single-period evaluation to provide a macro view. It clearly benefits the pitcher, and I would think it would do the same for the performance manager and his or her team.
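To make the idea concrete, here is a minimal sketch of what a multi-period "attribution log" might look like. The structure, function names, and all numbers are hypothetical, not drawn from any actual report; the point is simply that once each period's effects (here, Brinson-style allocation and selection effects, in percent) are recorded side by side, questions about patterns and shifting skills become easy to ask.

```python
# Hypothetical monthly attribution log: each period records the
# allocation and selection effects (in %). All values are invented
# for illustration.
monthly_attribution = {
    "2012-04": {"allocation": 0.15, "selection": -0.30},
    "2012-05": {"allocation": 0.10, "selection": -0.12},
    "2012-06": {"allocation": 0.22, "selection": -0.05},
    "2012-07": {"allocation": 0.18, "selection": 0.08},
}

def average_effect(log, effect):
    """Average a given effect across all recorded periods."""
    values = [period[effect] for period in log.values()]
    return sum(values) / len(values)

def improving(log, effect):
    """Crude pattern check: has the effect risen period over period?"""
    values = [log[key][effect] for key in sorted(log)]
    return all(later >= earlier for earlier, later in zip(values, values[1:]))

print(f"average allocation effect: {average_effect(monthly_attribution, 'allocation'):.4f}")
print(f"selection improving: {improving(monthly_attribution, 'selection')}")
```

In this invented series, selection has improved every month even though it was negative for most of them, which is exactly the kind of trend a single-period report would never reveal.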