The main reason is that most people have far more near misses than they have crashes, which has two effects. First, it allows for much finer comparisons of driving ability. Second, because near misses are so much more frequent, the law of large numbers has more to work with: the relative noise in a count shrinks like one over the square root of its rate. This means that your number of near misses per year will likely be close to your true rate of generating near misses, while your number of accidents per year can be a long way from your true mean.
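If you want to see that numerically, here's a quick simulation sketch. The rates are made up for illustration (say 0.1 crashes and 20 near misses per driver-year, both Poisson):

```python
import numpy as np

rng = np.random.default_rng(0)
years = 10_000  # simulated driver-years

# Hypothetical true rates: crashes are rare, near misses are common.
crash_rate, near_miss_rate = 0.1, 20.0

crashes = rng.poisson(crash_rate, years)
near_misses = rng.poisson(near_miss_rate, years)

# Relative noise of a Poisson count is 1/sqrt(rate), so the common
# event tracks its true rate much more tightly than the rare one.
print(np.std(crashes) / crash_rate)          # ~3.2: yearly crash counts are mostly noise
print(np.std(near_misses) / near_miss_rate)  # ~0.22: yearly near-miss counts are a decent signal
```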
I was thinking about this the other day when I was a passenger in the Cuban Mafiosa's car as it veered towards a concrete barrier near a freeway exit.
After barely controlling my urge to swear and gesticulate, I was reminded of this old but awesome Slate interview with James Bagian. That guy kicks @$$. He implemented a similar system of analysing near misses when he was tasked with reducing medical errors at the VA. It's seriously one of the best articles I've read this year. Check out some money quotes from the article:
James Bagian on how medicine deals with errors:
Take a very simple example: A nurse gives the patient in Bed A the medicine for the patient in Bed B. What do you say? "The nurse made a mistake"? That's true, but then what's the solution? "Nurse, please be more careful"? Telling people to be careful is not effective. Humans are not reliable that way. Some are better than others, but nobody's perfect. You need a solution that's not about making people perfect.
So we ask, "Why did the nurse make this mistake?" Maybe there were two drugs that looked almost the same. That's a packaging problem; we can solve that. Maybe the nurse was expected to administer drugs to ten patients in five minutes. That's a scheduling problem; we can solve that. And these solutions can have an enormous impact.
Seven to 10 percent of all medicine administrations involve either the wrong drug, the wrong dose, the wrong patient, or the wrong route. Seven to 10 percent. But if you introduce bar coding for medication administration, the error rate drops to one tenth of one percent. That's huge.

James Bagian on what it felt like to be the substitute astronaut who was meant to be on the Challenger Space Shuttle, watching it explode from the ground:
Was I sad that it happened? Of course. Was I surprised? Not really. I knew it was going to happen sooner or later—and not that much later. At the time, the loss rate was about 4 percent, or one in 25 missions. Challenger was the 25th mission. That's not how statistics works, of course—it's not like you're guaranteed to have 24 good flights and then one bad one, it just happened that way in this case—but still, you think we're going to fly a bunch of missions with a 4 percent failure rate and not have any failures? You gotta be kidding.

I present James Bagian with the Thomas Bayes Award for really, truly understanding probability.
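His intuition checks out, too. Assuming (roughly) independent missions with a 4 percent per-flight failure rate, the chance of getting through 25 missions clean is well under half:

```python
# Chance of 25 consecutive successes at a 4% per-mission failure rate,
# treating missions as independent coin flips (a simplifying assumption).
p_fail = 0.04
missions = 25
p_all_clean = (1 - p_fail) ** missions
print(p_all_clean)      # ~0.36: about a 1-in-3 shot of no failures
print(1 - p_all_clean)  # ~0.64: more likely than not to lose one
```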
I'd quote the whole thing, but really you should just click here.
And in honour of Mr Bagian's award, you should stay out of the cars of people who rack up a lot of near misses while driving!