Rosenhouse’s Whoppers: Appealing to the Unwashed Middle
William A. Dembski
I am responding again to Jason Rosenhouse about his book The Failures of Mathematical Anti-Evolutionism. See my earlier post here.
Before leaving academia for business, I used to lecture on intelligent design at colleges and universities, and I often debated people on the Darwinian side. Michael Shermer and Michael Ruse were my most frequent debate partners. My philosophy at these debates was not to try to convince Darwinists that my views were correct. Nor was I particularly concerned about the intelligent design proponents: if they were proponents of ID, they had presumably put their necks on the chopping block and knew what was at stake, academically and culturally, in taking the side of ID. My challenge in these debates, rather, was to win the unwashed middle: those who had not made up their minds, those who didn't reside in the cloud cuckoo land of Darwinism. So this response is mainly directed at them.
Rosenhouse’s book is objectively bad. It purports to be a critique of the mathematics used by ID proponents, and of my mathematical work in particular. Yet it betrays a lack of comprehension throughout. It makes a virtue of misrepresentation. Its aim is not to understand but to kill. In my review, I called Rosenhouse on his many failures in the book. It’s clear in his reply that he simply ignored the points I was able to score, points he made it easy for me to score because he did such a hack job. Read his book and read my review, and decide for yourself.
A New Dimension of Bad
His reply, however, adds a new dimension to the debate. The reply, too, is objectively bad in the same sense as his book. But it adds a level of delusion that made my jaw drop as I read it. I’m not writing this for rhetorical effect. In the reply, he lets loose with two whoppers that make me question what planet he’s been living on. Indeed, I have to wonder seriously about the degree to which Darwinists are in their right minds if they find in Rosenhouse a voice that speaks for them.
But before getting to the two whoppers, buried in his reply are two substantive points worth addressing. They came up in my review, received comment in the reply, and deserve some additional comment here. They concern (1) the connection between irreducible and specified complexity and (2) the role of the environment in supplying information to the Darwinian process.
Irreducible versus Specified Complexity
I addressed this point in my review, but let’s have another go at it. Consider Sisyphus. For as long as you can remember, he’s been rolling a rock up a hill, only to have it roll back down before it reaches the very top, which, let’s assume, is a stable equilibrium, so that if he ever gets the rock to the very top, it will stay there (though he never does). What is the probability that Sisyphus will get the rock to the very top? As a historical or inductive probability, it is quite low. All your life, you have watched him try to get the rock up there, and somehow it never quite arrives. That historical probability for Sisyphus is the same type of probability as is inherent in Mike Behe’s assessment that Darwinian processes are unable to build irreducibly complex molecular machines. All the attempts by biologists to trace a detailed Darwinian pathway by which an irreducibly complex system might emerge from an evolutionary precursor performing a different function have failed.
Richard Lenski, for instance, has run tens of thousands of generations of E. coli and produced no novel irreducibly complex system. The record of failure by evolutionary biologists to provide detailed Darwinian pathways for irreducibly complex systems is as complete as Sisyphus’s record of failure to get the rock to the top of the hill. If you disagree, please provide an irreducibly complex system, its precursor system performing a different primary function, and the step-by-step path from one to the other. Silence? Crickets?
The Nuts and Bolts
By contrast, specified complexity gets at the nuts and bolts of the probabilistic hurdles that render an evolutionary transition intractable. To continue with the Sisyphus analogy, specified complexity would look not at Sisyphus’s record of failure so much as at the types of obstacles he faces on the way up and how those obstacles might render reaching the top improbable.
For instance, perhaps most of the path up the hill is clear and unproblematic, but at one point there’s a bump that, given his strength, he just can’t get over. Or perhaps there are multiple bumps, where he has a positive probability of getting over each bump individually, but when all these probabilities are combined, he is all but certain not to get over every one of them. Or perhaps he tires, running out of steam as he moves up the hill, so that bumps that would pose no problem lower down become a problem higher up, and his probability of getting over all of them approaches zero.
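To make the multiple-bump scenario concrete, here is a minimal sketch in Python. The per-bump probabilities and the fatigue factor are invented purely for illustration; nothing here models any actual biological system:

```python
# A minimal sketch (illustrative numbers only) of how per-bump success
# probabilities combine, assuming each attempt is independent.

def combined_success(probs):
    """Probability of clearing every bump, assuming independence."""
    total = 1.0
    for p in probs:
        total *= p
    return total

# Ten bumps, each cleared 9 times out of 10 in isolation:
print(combined_success([0.9] * 10))   # ~0.35

# Fifty such bumps:
print(combined_success([0.9] * 50))   # ~0.005

# Fatigue variant: success probability decays 5% with each successive bump.
fatigued = [0.9 * (0.95 ** i) for i in range(50)]
print(combined_success(fatigued))     # effectively zero (~3e-30)
```

Even generous per-bump odds, once multiplied across enough independent hurdles, yield a combined probability that is effectively zero.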
The point to appreciate is that such a probability analysis of Sisyphus adds to our understanding of his failure. His record of failure is enough to justify assigning a low historical probability to his being able to roll the rock to the very top of the hill. But an empirically based probability of his failure needs to look at the particularities of the probabilistic hurdles that he’s facing. The same holds for irreducible complexity. There’s a long record of failure by biologists to explain how these systems might evolve. Specified complexity attempts to understand the probabilistic particulars that could explain the record of failure.
But specified complexity is not merely a supplement to irreducible complexity. Not all biological systems are irreducibly complex. In consequence, specified complexity can assess the evolvability of biological systems that are not irreducibly complex. For instance, the beta-lactamase enzymatic system that Doug Axe examined (described at greater length in my review) is not in any clear sense irreducibly complex, but it is analyzable probabilistically and exhibits specified complexity.
Consider a Bridge
One more analogy to try to nail all this down. Again, I write for the unwashed middle and have no expectation of assuaging Rosenhouse. Consider a bridge. It has stood for 100 years, faced all kinds of weather and hardship, and remained standing through it all. And yet one day it suddenly collapses. Before its collapse, we might think that its probability of continuing to stand was quite high, and so the probability of collapse was quite low. Given its collapse, is it therefore safe to say that a highly improbable event happened?
Those versed in the use of specified complexity as a tool for disentangling the probabilities underlying various systems would say that such historical probabilities are of little interest now that the bridge has collapsed. Rather, we need engineers to examine the wreckage to see whether there were any tell-tale signs of weaknesses in the bridge that would have increased its probability of collapse. The probabilities in this case would be empirical and structural rather than historical. Specified complexity substitutes actionable, empirically and structurally based probabilities for historical probabilities.
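For readers who want the contrast in numbers, here is a minimal sketch with invented figures; the weak points and their failure probabilities are hypothetical, not engineering data, and serve only to illustrate why a structural estimate can diverge sharply from a historical one:

```python
# A minimal sketch (invented figures) contrasting a historical estimate of
# a bridge's annual collapse probability with a structural estimate.

# Historical estimate: zero collapses observed over 100 years gives a
# naive frequency of 0 -- history alone says "collapse never happens."
years_observed, collapses_observed = 100, 0
historical_rate = collapses_observed / years_observed

# Structural estimate: suppose inspection reveals three independent weak
# points, each with a hypothetical annual failure probability.
weak_points = [0.02, 0.01, 0.005]   # illustrative, not engineering data

# Probability that at least one weak point fails in a given year:
p_no_failure = 1.0
for p in weak_points:
    p_no_failure *= (1.0 - p)
structural_rate = 1.0 - p_no_failure

print(historical_rate)   # 0.0
print(structural_rate)   # ~0.035
```

The historical record alone assigns collapse a probability of zero; the structural analysis, looking at the same bridge, assigns it a small but very real probability each year. That is the difference between the two kinds of estimate.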