from the as-if-that's-possible dept
Duncan Watts has a thought-provoking writeup in the Boston Globe on the problem of systemic risk, and why no one could successfully see exactly how the various dominoes would fall, leading to our current (and still ongoing) financial crisis. Basically, his argument is that the system has become so intertwined and complex that no one can really manage the risk. This is hardly a new idea. Watts' suggestion (which, again, is not necessarily new, and has been discussed by many, including Treasury Secretary Tim Geithner) is that perhaps we need a "systemic risk manager" within the government, whose job (like that of antitrust regulators) would be to look at various companies, determine if they're too big to fail -- and then figure out how to change things so that they're no longer too big to fail.
It's a nice idea... in theory. In practice, it's a lot harder. The very reason systemic risk is such a problem is that it's so hard to even imagine the scenarios taking place. The idea that Lehman Bros. failing would have so much impact elsewhere was simply beyond the scope of what most people could have imagined -- and that would almost certainly include any "systemic risk manager." While I agree that it's a problem that we end up with companies that are "too big to fail," I tend to think that, in the long run, it's futile to try to predict ahead of time who's really "too big to fail." Instead, that question should only come up in the event of a gov't bailout: if you need to take gov't money to stay alive because you are deemed "too big to fail," then the terms of the deal should require you to work out a plan that makes you small enough to fail.
Otherwise, you end up in a situation where companies that are successful get penalized for it. The only time "too big to fail" is a problem is when such a company actually fails. We shouldn't be penalizing a company that's too big to fail if it's not going to fail.
Separately, Watts frames preventing "too big to fail" as a way of avoiding systemic risk. I'd argue he has the equation a bit twisted. Too big to fail isn't the problem. The problem is the hidden risk that leads a company that is "too big to fail" to fail in the first place. The answer to that is not breaking up successful companies -- it's increasing transparency into the actual risk. That means more openness and data sharing, rather than the status quo of quarterly reports with the real details hidden and buried beneath complexities, combined with Wall Street putting together packages whose sole purpose is to hide the actual risk. Make the real data transparent (and real-time), let anyone access and dig into it, and you get a much more accurate view of the risk -- and you avoid situations where "healthy" investments suddenly turn sour.
Watts has the right idea that systemic risk is a problem, but the wrong solution. When a company that's "too big to fail" does fail, that's a symptom of a lack of transparency into the actual risk. The answer isn't to stop companies from getting so big. It's to provide more transparency into the actual risk.
Filed Under: radical transparency, risk, systemic risk, too big to fail, transparency