Introducing the Richards Scale – Your Tool for Evaluating Grounds for Science Skepticism
David Klinghoffer | @d_klinghoffer
Writing at the NPR science blog 13.7, U.C. Berkeley psychology professor Tania Lombrozo is concerned about “skepticism.”
Calling someone a “skeptic” can be a term of praise or condemnation.
That is true, and interesting. When is skepticism appropriate, and when not? Sometimes it’s an easy call. Hearing that someone doubts the moon landing, for example, and thinks it’s some sort of U.S. government conspiracy, I would rapidly lose interest in whatever he had to say after that. Where do you draw the line, though?
Lombrozo says of herself, “I might praise skepticism towards homeopathic medicine, but disdain skepticism towards human evolution.” “Disdain” is a strong word, and a telling one.
To skepticism, she prefers “truth-tracking and humility.”
Truth-tracking is about getting things right: identifying the signal amidst the noise. We don’t want to be fooled by noise (about a link between vaccines and autism, for example), but we also don’t want to miss out on signal (about the real benefits of vaccination). Truth-tracking isn’t (only) about rejecting noise, but about differentiating signal from noise.
Humility is about recognizing the possibility for error, and therefore holding beliefs tentatively (or “defeasibly”). But recognizing uncertainty doesn’t mean that all bets are off. Some bets are still much better than other bets. You don’t know who will win the next horserace, for example, but that doesn’t mean that you’d assign equal probabilities to all contenders. Similarly, we can quantify uncertainty by assigning degrees of belief to different propositions. I might think that life on other planets is unlikely, and that ESP is unlikely, yet assign a much higher probability to the former than to the latter. Similarly, I might think that rain tomorrow and human evolution are highly likely, but assign a much higher probability to the latter than to the former.
“Getting things right,” and “recognizing the possibility for error.” That’s very nice. We should all strive for both…but they’re a bit vague for practical implementation. Where would those virtues leave you, confronted with the question of, for instance, the scientific “consensus” on Darwinian evolution?
Speaking at the Heritage Foundation recently, Discovery Institute Senior Fellow Jay Richards addressed the topic, offering a much more practical set of guidelines for deciding when doubt is in order. It’s a new podcast episode of ID the Future. Listen to it here, or download it here.
Dr. Richards, who leads the excellent news site The Stream, has 12 triggers for your skepticism. Think twice about a “proposed consensus”:
“When different claims get bundled together”
“When ad hominem attacks against dissenters predominate”
“When scientists are pressured to toe the party line”
“When publishing and peer review in the discipline is cliquish”
“When dissenters are excluded from the peer-reviewed journals not because of weak evidence or bad arguments but to marginalize them”
“When the actual peer-reviewed literature is misrepresented”
“When consensus is declared before it even exists”
“When the subject matter seems, by its nature, to resist consensus”
“When ‘scientists say’ or ‘science says’ is a common locution”
“When it is being used to justify dramatic political or economic policies”
“When the ‘consensus’ is maintained by an army of water-carrying journalists who defend it with partisan zeal, and seem intent on helping certain scientists with their messaging rather than reporting on the field as fairly as possible”
“When we keep being told that there’s a scientific consensus”
He elaborates on these further, here. I love this as a practical tool for the citizen and science consumer. In fact, with a tip of the hat to Jay, I would propose a scale for quantifying grounds for skepticism – the Richards Scale. For any controversial scientific claim, you count how many of those 12 reasons for doubt apply, giving you a score from 0 to 12. An idea could register, for example, a 3 or 4 on the Richards Scale – prompting mild doubt – or an 11 or 12 – where something fishy is almost certainly going on.
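For readers who want to see the tally spelled out, here is a minimal sketch of how such a score might be computed. The shortened criterion labels and the richards_score function name are my own illustrative choices, not anything Richards or the podcast defines.

```python
# A minimal sketch of the proposed "Richards Scale" tally: each of the 12
# warning signs either applies to a claim or it doesn't, and the score is
# simply how many apply. The abbreviated labels below are illustrative
# paraphrases of Richards's 12 triggers, not his exact wording.

RICHARDS_CRITERIA = [
    "different claims bundled together",
    "ad hominem attacks on dissenters predominate",
    "scientists pressured to toe the party line",
    "cliquish publishing and peer review",
    "dissenters excluded from journals to marginalize them",
    "peer-reviewed literature misrepresented",
    "consensus declared before it exists",
    "subject matter resists consensus by its nature",
    "'scientists say' is a common locution",
    "used to justify dramatic political or economic policies",
    "defended by partisan, water-carrying journalists",
    "we keep being told there is a scientific consensus",
]

def richards_score(applies: dict) -> int:
    """Count how many of the 12 criteria apply to a given claim (0-12)."""
    return sum(1 for criterion in RICHARDS_CRITERIA if applies.get(criterion, False))

# Example: a claim that trips only two of the warning signs.
example = {criterion: False for criterion in RICHARDS_CRITERIA}
example["different claims bundled together"] = True
example["'scientists say' is a common locution"] = True

print(richards_score(example))  # -> 2, mild grounds for doubt
```

The point of the sketch is only that the scale is additive: each trigger is a yes-or-no judgment, and the total is a rough, at-a-glance measure of how much sociological pressure surrounds a claimed consensus.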
Richards observes:
There’s always this crank somewhere available immediately online that doubts any particular scientific idea, and so the fact that there’s skeptics is not itself an argument against the proposed consensus.
Right, and one does not want to be that crank. Which is why, as he says, you need to examine the sociological dynamics underlying the “consensus.” Tania Lombrozo with her “disdain” for doubts about evolution doesn’t help much with that. The Richards Scale does!