Answering an Objection: “You Can’t Measure Intelligent Design”
- Casey Luskin
- Universal probability bound: 1 in 10^150 (or 498 bits)
- Galactic probability bound: 1 in 10^96 (or 319 bits)
- Solar System probability bound: 1 in 10^85 (or 282 bits)
- Earth probability bound: 1 in 10^70 (or 232 bits)
An objection to intelligent design (ID) that I’ve heard for many years claims that ID can’t be considered science because “you can’t measure intelligent design.” Eugenie Scott, former director of the anti-ID National Center for Science Education, used to say that without a hypothetical device she called a “theo-meter,” she did not know how to detect whether God was at work. Even some scientists who are sympathetic to design arguments have wondered how we can detect design if we can’t “measure design like we measure the amount of some substance in a test tube.”
The answer to these objections is that we test intelligent design in the same way that we test all historical scientific theories: by looking in nature for known effects of the cause in question (in this case, intelligent agency), and showing that this cause (again, intelligent agency) is the best explanation for the observed data. If that answer seemed a little bit technical or unclear, let me explain so that it makes more sense. We’ll see how precise quantitative measurements can in fact help us to detect design.
How Historical Sciences Work
Historical scientists who study fields like geology, evolutionary biology, cosmology, or intelligent design can’t put history into a test tube. They can’t measure what happened in the past the way we might directly measure the amount of some substance in a solution in the present. That doesn’t mean we can’t use scientific methods to study the past. It just means we have to use different methods in the historical sciences (which study what happened in the past) than we use in the empirical sciences (which study how things operate in the present). To claim that intelligent design isn’t science because we can’t directly “measure it in a test tube” is to misunderstand how the historical sciences work, and to apply an unfair standard to intelligent design.
Stephen Jay Gould observed that historical sciences “infer history from its results.” Historical sciences (like Darwinian evolution and intelligent design) rely on the principle of uniformitarianism, which holds that “the present is the key to the past.” Under this methodology, scientists study causes at work in the present-day world in order, as the famous early geologist Charles Lyell put it, to explain “the former changes of the Earth’s surface” by reference “to causes now in operation.”
Historical scientific theories thus begin by studying causes at work in the present-day natural world and understanding their known effects. They then examine the historical record as preserved in nature to find those known effects. When those known effects are found in the historical record, and those effects can only be explained by a given cause we’ve studied in the present day, then we infer that the cause was at work in the past.
An Everyday Example
Imagine that you take your 4×4 truck off-roading and it comes back covered in mud. You drop the truck off at a carwash, and an hour later you return to pick it up. How could you apply the scientific method of the historical sciences to determine whether the car was washed? Well, you could make predictions about what you’d expect to find if the car was washed, and then you could test those predictions.
For example, if the car was washed then you might predict that there will be no major chunks of mud left on the exterior. This prediction could be tested by a simple visual analysis. If you see many chunks of mud remaining, that would refute your hypothesis that the car was washed. You could also undertake a more technical analysis, predicting that if the car was washed then there should be small amounts of soap residue left on the paint surface. You could scrape material off the surface of the car and perform a chemical analysis to confirm or refute this hypothesis. If you find that there are no chunks of mud on the car, and soap residue is present on the car’s paint, you would have positive evidence that the car was washed.
As a historical scientific theory, intelligent design works in much the same way.
Detecting Design
The theory of intelligent design employs scientific methods commonly used by other historical sciences to conclude that certain features of the universe and living things are best explained by an intelligent cause, not an undirected process such as natural selection. Intelligent agency is a cause “now in operation” which can be studied in the world around us. Thus, as a historical science, ID employs the principle of uniformitarianism. It begins with present-day observations of how intelligent agents operate, and then converts those observations into positive predictions of what scientists should expect to find if a natural object arose by intelligent design.
For example, mathematician and philosopher William Dembski observes that “[t]he principal characteristic of intelligent agency is directed contingency, or what we call choice.” According to Dembski, when an intelligent agent acts, “it chooses from a range of competing possibilities” to create some complex and specified event. Thus, the type of information that reliably indicates intelligent design is called “specified complexity” or “complex and specified information,” “CSI” for short.
In brief, something is complex if it’s unlikely, and specified if it matches an independently derived pattern. In using CSI to detect design, Dembski calls ID “a theory of information” where “information becomes a reliable indicator of design as well as a proper object for scientific investigation.” ID theorists positively infer design by studying natural objects to determine if they bear the type of information that in our experience arises from an intelligent cause.
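The probability-to-bits conversion behind a CSI measurement can be sketched in a few lines of Python. This is an illustrative sketch of the arithmetic, not Dembski’s formal apparatus, and the 30-symbol message example is my own:

```python
import math

def csi_bits(probability: float) -> float:
    """Convert an event's probability into information content in bits:
    bits = -log2(probability). Less probable events carry more bits."""
    return -math.log2(probability)

# Illustration: a specified 100-character message drawn at random from a
# 30-symbol alphabet has probability (1/30)**100 of arising by chance.
p_message = (1 / 30) ** 100
print(round(csi_bits(p_message), 1))  # about 490.7 bits
```

The only design choice here is working in log space: multiplying tiny probabilities underflows quickly, while their bit values simply add.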
Human intelligence provides a large empirical dataset for studying what is produced when intelligent agents design things. For example, language, codes, and machines are all structures containing high CSI. In our experience these things always derive from an intelligent mind. By studying the actions of humans we can understand what to expect to find when an intelligent agent has been at work, allowing us to construct positive, testable predictions about what we should find if intelligent design is present in nature. High CSI thus reliably indicates the prior action of intelligence.
Ruling Out Material Causes
Finding the known effects of intelligent agency (i.e., high CSI) fulfills a testable prediction of intelligent design and shows that ID can be positively supported by scientific evidence. But to shore up our conclusion that intelligent design is the best explanation for some feature, we should also rule out other material causes and show that design alone can account for the data. ID theorists have developed methods for doing this.
One way to rule out material causes, proposed by William Dembski, is to show that the likelihood of an event happening mechanistically falls below what he calls the “universal probability bound.” Essentially, the universal probability bound is an estimate of the maximum number of events that are possible in the history of the universe given all known probabilistic resources. It grants material mechanisms the overly generous assumption that every elementary particle has been interacting at every unit of Planck time over the entire history of the universe. Dembski explains this concept with Jonathan Witt in their book Intelligent Design Uncensored:
Scientists have learned that within the known physical universe there are about 10^80 elementary particles … Scientists also have learned that a change from one state of matter to another can’t happen faster than what physicists call the Planck time. … The Planck time is 1 second divided by 10^45 (1 followed by forty-five zeroes). … Finally, scientists estimate that the universe is about fourteen billion years old, meaning the universe itself is millions of times younger than 10^25 seconds. If we now assume that any physical event in the universe requires the transition of at least one elementary particle (most events require far more, of course), then these limits on the universe suggest that the total number of events throughout cosmic history could not have exceeded 10^80 × 10^45 × 10^25 = 10^150.
This means that any specified event whose probability is less than 1 chance in 10^150 will remain improbable even if we let every corner and every moment of the universe roll the proverbial dice. The universe isn’t big enough, fast enough or old enough to roll the dice enough times to have a realistic chance of randomly generating specified events that are this improbable.
William Dembski and Jonathan Witt, Intelligent Design Uncensored: An Easy-to-Understand Guide to the Controversy, pp. 68-69 (InterVarsity Press, 2010)
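The arithmetic in the passage above is easy to check directly. A quick sketch using the three quoted figures (the exponents simply add: 80 + 45 + 25 = 150):

```python
# Multiply Dembski's quoted figures to bound the number of physical
# events possible in cosmic history:
particles = 10 ** 80             # elementary particles in the universe
planck_ticks_per_sec = 10 ** 45  # fastest possible state changes per second
age_in_seconds = 10 ** 25        # generous bound on the universe's age

max_events = particles * planck_ticks_per_sec * age_in_seconds
print(max_events == 10 ** 150)  # True
```

Python’s arbitrary-precision integers make this check exact rather than a floating-point approximation.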
Of course 10^150 represents the probability bound for the entire universe, but when we consider the number of elementary particles and the time available in different zones of the universe, we obtain the narrower probability bounds listed at the top of this article, along with the information content each represents, measured in bits.
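The bit figure attached to each bound follows from the identity log2(10^n) = n · log2(10). A quick check of the values (before rounding to whole bits):

```python
import math

# Exponent n of each probability bound (1 chance in 10^n) -> bits:
bound_bits = {name: n * math.log2(10)
              for name, n in [("universal", 150), ("galactic", 96),
                              ("Solar System", 85), ("Earth", 70)]}
for name, bits in bound_bits.items():
    print(f"{name}: {bits:.1f} bits")
# universal: 498.3, galactic: 318.9, Solar System: 282.4, Earth: 232.5
```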
Using Measurements to Detect Design
Why are these probability bounds important? Well, we can measure the probability of various natural features arising through mechanistic causes alone, and then we can convert that probability into an amount of complex and specified information measured in bits. We can then compare that result with the various probability bounds given above. By applying the proper probability bound, we can determine whether there are sufficient probabilistic resources for the structure to arise naturally. Or, to put it another way, we can determine whether the structure is likely to have originated by naturalistic means.
If the likelihood of the structure arising is below its relevant probability bound (or, to put it another way, if its CSI content exceeds the bit equivalent of that bound, which marks the limit of the information-generating power of natural processes), then a materialistic origin of that structure is effectively falsified, and we have a very good case for intelligent design.
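The comparison just described can be sketched as a small function. The function name and interface are my own, for illustration only; the logic is simply “event probability below 1 in 10^n, i.e. CSI bits above n · log2(10)”:

```python
import math

def materialistic_origin_ruled_out(event_probability: float,
                                   bound_exponent: int) -> bool:
    """Return True if the event's CSI in bits exceeds the bit equivalent
    of a probability bound of 1 chance in 10^bound_exponent."""
    csi_bits = -math.log2(event_probability)
    bound_bits = bound_exponent * math.log2(10)
    return csi_bits > bound_bits

# Example against the Earth bound (1 in 10^70):
print(materialistic_origin_ruled_out(1e-74, 70))  # True  (~245.8 > ~232.5 bits)
print(materialistic_origin_ruled_out(1e-60, 70))  # False (~199.3 < ~232.5 bits)
```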
A quick illustration: Douglas Axe’s research on proteins found that the likelihood of a random sequence of amino acids yielding a functional beta-lactamase enzyme is less than 1 in 10^74. That is equivalent to 245 bits of CSI. Since this feature had to have arisen on Earth, the Earth probability bound is the relevant threshold for understanding what natural causes can do. And the Earth probability bound is 1 in 10^70 (or 232 bits). Thus, the amount of CSI in the beta-lactamase enzyme exceeds the Earth probability bound. The best explanation is design.
So it’s true we don’t directly “measure” intelligent design in a test tube. But we can use measurements and calculations to detect design. If we calculate and measure that a structure contains more CSI than can arise by the relevant probabilistic resources available for a naturalistic origin of the structure, then we can detect design.