Intuition, simply put, is a gut feeling. It could be based on prior knowledge, pattern recognition, an unconscious reaction, or even superstition. It is useful for making quick decisions on the spot, say, when you are alone in the jungle and hear rustling in the bushes. But it is a lousy basis for important decisions.
Let’s look at an example.
Imagine a fictional Foobar disease, which is always fatal, not common but not overly rare either, with an overall occurrence of 0.1%. There is a test that is exceptionally sensitive (100%), which means that if you have the disease, this test will definitely identify it. The test also has a very low false positive rate of 1% (99% specificity).
Out of curiosity, you take the test. It turns out positive. Ouch.
Quick! Based on your gut feeling, what are the chances that you have this fatal Foobar disease?
The correct answer is around 9%. The approximate calculation is as follows (for exact calculations use Bayes’ theorem):
Out of 1000 people, only 1 will actually have the disease (0.1%). The test, with a false positive rate of 1%, is expected to incorrectly flag about 10 of the remaining 999 people, in addition to correctly identifying the 1 person who actually has the disease. So of the roughly 11 people who test positive, only 1 actually has the disease: 1/11, or about 9%.
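The back-of-the-envelope count above can be checked exactly with Bayes’ theorem. Here is a short Python sketch (the language choice is mine; all the numbers come from the hypothetical Foobar disease described above):

```python
# Exact posterior probability of having Foobar disease given a positive test,
# computed with Bayes' theorem. Numbers are from the hypothetical example:
# prevalence 0.1%, sensitivity 100%, false positive rate 1%.

prevalence = 0.001            # P(disease)
sensitivity = 1.0             # P(positive | disease)
false_positive_rate = 0.01    # P(positive | no disease)

# P(positive) = P(pos | disease) * P(disease) + P(pos | healthy) * P(healthy)
p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))

# Bayes' theorem: P(disease | positive) = P(pos | disease) * P(disease) / P(positive)
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"{p_disease_given_positive:.1%}")  # prints 9.1%
```

The exact answer (about 9.1%) matches the rough 1-in-11 count, and it makes the mechanism plain: when a disease is rare, even a small false positive rate swamps the true positives.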
Counterintuitive, but true.
Now try telling that to the people that just tested positive for Foobar and blew their entire life savings at the casino.
When the US Preventive Services Task Force changed the guidelines for mammogram screenings, it was based on scientific evidence. Same thing with prostate cancer screenings (the PSA test). The test intervals were lengthened (or the tests eliminated) because there was no evidence that they provided actual benefit in the general (non-high-risk) population. The public immediately fired back, simply because the change is highly counterintuitive: how on earth could someone oppose extra testing? Conspiracy theories surfaced, and the issue soon became political instead of a fact-based discussion.
It is unrealistic to expect everyone to look into and fully understand the underlying reasons, not because of intellectual laziness, but because those reasons often lie outside their realm of expertise. Sadly, the most vocal opinions are usually shouted by those who understand the least. And although the media often treats it otherwise, volume does not signal correctness, understanding, or genuine controversy, much less consensus. As elitist as it may sound, I believe that knowledge is not a democracy, and public policy (especially on complex scientific issues) should be debated and guided by relevant experts, not by popular vote.
Scientists are generally the least confrontational and least vocal group, and politically they have the least influence. And let’s face it, the jargon-laden, carefully crafted, highly qualified statements that spew from their facial orifices don’t exactly appeal to voters. So politically, are we doomed, in a Darwinian sense? I’ll go out on a limb and say no, because although suboptimal, thankfully and ironically, ignorance is global. Politicians everywhere are elected for popularity, not intelligence or expertise, and dictators do not rule because of oversized brains. We are no worse off if everyone else is equally bad. At least that is my intuition.
* afterword: Putting the issue of limited resources and fairness aside, I am not opposed to extra testing, provided that the person fully understands the implications, the risks, and what the test results actually mean, if anything. I do oppose unnecessary testing, which I define as any test that will not change the course of action. It makes no more sense than rearranging the deck chairs on the Titanic, disinfecting a death row inmate’s arm before the lethal injection, or ordering a Pap smear for a 90-year-old.