Ken Orr on Malcolm Gladwell’s “Blink”
June 21, 2007
I’ve often written about Malcolm Gladwell’s work in my articles and in this blog, since so much of what he says also pertains to life in information technology. When I saw how my friend Ken Orr weighed in on Gladwell’s book “Blink”, I just had to excerpt it here. Ken’s always incredibly insightful. If you want to get more of what he has to say on a regular basis, sign up for a trial of the Cutter Trends Advisory at www.cutter.com. You’ll be “glad” you did.
Passing the Sniff Test
by Ken Orr, Fellow, Cutter Business Technology Council
I recently read the book Blink by Malcolm Gladwell. I had read pieces of his earlier book, The Tipping Point, and I also heard him on C-SPAN talking about his new book. Blink is about the instantaneous knowledge that we used to refer to as “intuition.” In his book, Gladwell takes advantage of modern research into human intelligence. He begins with an interesting case. The incident has to do with a supposedly ancient Greek statue on which the Getty Museum performed extensive scientific due diligence before purchasing it for US $10 million. Following the purchase, the museum invited a number of period experts to examine its new acquisition. One after another, the experts questioned the authenticity of the statue. The museum, though shaken, took comfort in its detailed scientific study. But in the end, the experts were right and the scientists were wrong.
Gladwell’s book has many other examples in which our unconscious (subconscious) minds make judgments about complex inputs much, much faster than our conscious minds. While these snap judgments sometimes lead to problems, Gladwell argues that we need to understand what the unconscious mind is trying to tell us and learn to use it better.
I was thinking of Blink as I was recently talking to a client who had been involved in a project review of a very large project by a series of outside experts. The group was made up of about 20 experts in nearly every area of software analysis, development, and management. At the end of the first day, the group had nearly unanimous agreement that the project was in trouble. The group also listed its reasons for its conclusion.
The review process wandered about, as did the project. A couple of years later, the project was finally canceled. During a recent conversation with the person who had been at that first meeting, this comment jumped out at me: “I wrote down the observations of the review group at the end of their first day,” he said. “I went back when the project failed and dug up my notes from that meeting. Those observations were spot on! How come our internal controls didn’t tell us the same thing?”
I mentioned this to another person who had been on the expert panel; his answer was blunt: “It was the sniff test!” he said. “When you’ve been in the business as long as the members of that team had been, you know immediately when things are not going right. The signs that things were off track were everywhere, and the signs that they were going to make it were nowhere to be seen.”
I knew exactly what he meant. Large system failures tend to look alike. Such projects typically make all their early deliverable dates with flying colors and then slowly start showing signs of stress. Eventually, the project starts missing serious due dates and top management starts looking for outside opinions.
I realized as I was reading Blink that I rely on my instantaneous knowledge when I’m doing systems reviews; unfortunately, I’m rarely wrong. I’m so convinced that this is not a unique talent that I think I would not want to employ a project manager or chief architect for a large project who didn’t have the right nose.