We at The Spaulding Group are wrestling with a situation where a client may not have been consistent in their pre-2011 adoption of "stub periods" for their GIPS(R) (Global Investment Performance Standards) composites. Recall that until 1 January 2011, showing stub period performance in your composite materials was optional (and, some might argue, not even permitted!). And so this raises the question: must they be consistent?
The Standards expect consistency, but does that extend to everything a firm does? I would hope not. Surely asset managers should be granted some degree of flexibility, and not be castigated for an occasional, though intentional, lapse.
The Standards shouldn't be seen as constantly putting up hurdles before firms that wish to comply. Yes, compliance can be challenging and demanding, but it shouldn't be so in a silly, nonsensical, or unnecessary way.
And while I am the first to criticize verifiers who "work with their clients" by ignoring clearly articulated and defined rules (and, as a result, put their clients at risk), where the rules aren't prescriptive, the verifier shouldn't be the one to introduce new and unnecessary hurdles. Your thoughts?
I was wondering why Google's search engine is so successful. Why did they succeed where others failed? Was it because they wrote the most sophisticated code, one that can better interpret what the user is searching for, or because they somehow found the optimal path for human-machine communication? I don't think so. What I believe is the genius of their design is the understanding that you don't need to build the most technologically advanced system, but rather the simplest form of logic that produces advanced results.

It's not that the program is so sophisticated as to know what we are searching for. To me, it's the ability of the user to understand the program's logic. In other words, after doing ten searches the user has subconsciously figured out the simple structure of the search engine. The user is trying to communicate with the machine by first understanding the machine's communicative language. The simpler the concept of the operating language, the more easily humans understand it, and thus the easier it is for us to alter our commands to meet the set of rules the machine follows.

It would be impossible for a machine to become so advanced as to follow the unlimited number of rules humans do. Computers' ability to process information is limited by their inability to think beyond the realm of rules. Humans, on the other hand, are more creative and capable when presented with concepts and ideas outside the normal function of predetermined guidelines.

The user is like an amphidromous fish; the computer operating model is like a freshwater or saltwater fish that can function in only one environment or the other. In other words, computers lack what is needed to switch between freshwater and saltwater; unlike humans, they are confined to one predetermined environment. When humans are forced to think solely in a freshwater or solely in a saltwater environment, we limit the use of our brains' capabilities. Guidelines can be set to help us follow a similar set of standards, but in no way should these guidelines force us into identical interpretations. To me, rules and guidelines should always allow for amphidromous migrations between freshwater and saltwater environments.
Thanks for your interesting insights!