Bad Design Inferences Can Land Innocent People in Jail
Evolution News | @DiscoveryCSC
We’ve noted forensic science in previous discussions of sciences that show intelligent design in action (along with archaeology, informatics, cryptology, and others). A good forensic analyst can determine where a particular uranium ore came from in Africa, even if it is found in North Korea. Crime labs routinely piece together clues to separate natural from intelligent causes in murder cases, and calculate the probabilities that clues are not due to chance.
When there is strong motivation to find a particular outcome, however, forensics can not only yield wrong answers but also put innocent people in jail. Courtrooms have long trusted forensic analysts as expert witnesses. Highly motivated prosecuting attorneys try to wring confident assertions from their expert witnesses about DNA matches to a suspect, ammunition links to a suspect’s weapon, and the like. Often, defense attorneys lack the expertise to counter those assertions, and a jury can be swayed by what appears to be strong evidence of guilt.
In Nature, Robin Mejia argues for labeling the limits of forensic science. As a forensic scientist herself and a member of the Center for Statistics and Applications in Forensic Evidence (a consortium of four universities that aims to close holes in statistical analyses of pattern-matching evidence), she knows of many horror stories of innocent people wrongly convicted.
In 2005 I produced a documentary showcasing several cases in which flawed forensic analyses helped to get innocent people locked up. Riky Jackson went behind bars for two years because of incorrectly matched fingerprints. Jimmy Ray Bromgard spent nearly 15 years in jail, mainly because of hair comparisons that lacked scientific rigour. Now I’m a scientist who uses data analysis to promote human rights, and I’m disheartened to see these errors continue. That is why I hope that a US federal commission will vote next week to endorse practices that would transform how forensic analysts talk about evidence.
This would reduce the number of innocent people sent to prison. Consider Crystal Weimer, a single mother of three whose murder conviction was largely based on assertions that wounds on a dead man’s hand were made by Weimer’s teeth. Last June, after a multi-year, multi-lawyer saga, all charges against her were dismissed. [Emphasis added.]
Would these errors have been prevented by proper application of the Design Filter? As with criminal justice, natural causes are “innocent till proven guilty” of intelligent design. The burden of proof is on the forensic analyst to show that a given phenomenon could not have happened by chance. Only through sufficiently small probabilities can chance be eliminated. Coincidences do happen. This month, BBC News reported that a lucky couple won the lottery three times: in 1989, in 2010, and again this year.
Mejia lists what the proposals would do to tighten up loose design inferences:
The proposals that will be put to a vote on 10 April lay out how forensic analysts should testify about evidence such as shoeprints, bullet ballistics, blood spatter and glass shards. Analysts must explain how they examined evidence and what statistical analyses they chose. They must also describe inherent uncertainties in their measurements. Most importantly, experts must never claim with certainty that anything found at a crime scene is linked to a suspect, and they must always try to quantify the probability that observed similarities occurred by chance.
In forensics, that probability can be hard to calculate.
Even if scientists can objectively quantify the similarities between evidence from a crime scene and evidence from a suspect, no one knows how often such matches would occur by chance. Suppose striations on a bullet from a crime scene resemble those from a bullet test-fired from a suspect’s gun. How frequently would bullets from other guns have similar markings? Except for some types of DNA samples, just about every type of forensic comparison lacks that information.
She did not elaborate on which types of “DNA samples” are more amenable to eliminating chance when analyzing similarities, but the point is telling: some types of evidence can rule out chance with far greater confidence than others.
One major boost for certainty in a design inference is the magnitude of the improbability of chance. In their recent film Origin, Illustra Media used Biologic Institute scientist Doug Axe’s calculation of chance generating a single functional protein of 100 amino acids in length, under ideal conditions, as 1 in 10 to the 161st power. Such an inconceivable number exceeds William Dembski’s “Universal Probability Bound” (1 in 10 to the 150th power) by 11 orders of magnitude — 100 billion times less probable. Clearly, if something is so improbable it will never ever happen in the entire universe, it’s not going to happen if it is 100 billion times less probable!
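The orders-of-magnitude comparison above is easy to verify. A minimal sketch, using only the two exponents quoted in this article:

```python
# Check the order-of-magnitude comparison quoted above.
# Axe's estimate for one functional 100-residue protein: 1 in 10^161.
# Dembski's Universal Probability Bound: 1 in 10^150.
axe_exponent = 161
dembski_exponent = 150

# The difference in exponents is the number of orders of magnitude
# by which Axe's figure exceeds the bound.
orders_of_magnitude = axe_exponent - dembski_exponent
print(orders_of_magnitude)        # 11
print(10 ** orders_of_magnitude)  # 100000000000, i.e. 100 billion
```

Eleven orders of magnitude is a factor of 10^11, which is indeed 100 billion.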
A sharp defense attorney might cross-examine the forensic analyst with pointed questions: How do you know it is that improbable? How was this figure calculated? Axe would explain his methods for measuring the degree of functional space within configuration space for proteins of that length. He would explain, additionally, that the amino acids have to form peptide bonds, not just any bond. And they would have to be left-handed. Writing on the whiteboard in court, he could justify his calculation. He might even show that his value underestimates the real improbability.
But even if Axe were off by billions, or indeed trillions or quadrillions or septillions, he could still convincingly eliminate chance with auxiliary calculations. Obviously, he could tell the jury, one protein is not alive. The simplest known living cell has over 300 different proteins. Discovery fellow Paul Nelson emphasizes this point in the film. Even if against all odds the single protein assembles by chance, the improbability ramps up much further when you factor in all the other requirements for a self-replicating cell. Tim Standish rubs it in by explaining that peptide bonds do not form in water anyway, and Biologic Institute scientist Ann Gauger closes all the other loopholes that origin-of-life materialists try to use to get around the vast improbability.
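Because independent probabilities multiply, the compounding argument can be sketched numerically in log space. This is an illustration only: treating each of the 300 proteins as independent and equally improbable is a simplifying assumption introduced here, not a claim from the article.

```python
# Illustrative sketch, not a rigorous model: assume each of ~300
# distinct proteins in a minimal cell must arise independently, each
# with Axe's estimated probability of 1 in 10^161. Independence and
# uniform improbability are simplifying assumptions for this example.
p_single_log10 = -161   # log10 of the probability of one functional protein
num_proteins = 300      # proteins in the simplest known living cell

# Independent probabilities multiply, so their log10 values add.
joint_log10 = p_single_log10 * num_proteins
print(joint_log10)  # -48300, i.e. roughly 1 in 10^48300
```

Even granting generous corrections, the joint improbability dwarfs the single-protein figure, which is the point Nelson makes in the film.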
This, friends, is the level of certainty to be had in the design inference for life. Other intelligent-design sciences actually fall far short of this level of certainty. Whether in forensics, optimization, SETI, or one’s own experience, inferences to design as the best explanation will always leave some room for doubt. The jury in a murder case must presume the accused innocent until proven guilty, and convict only when the evidence is “beyond reasonable doubt.”
There is no reasonable doubt that the origin of life occurred by design. One has to believe in miracles upon miracles to say chance could surmount such enormous, unthinkable, preposterous improbabilities. Scientists don’t reject design in cases involving far, far less robust calculations. Even a hiker infers design intuitively when seeing three rocks stacked on top of each other. How much more should one recognize design when the probability of chance is so absurdly low?