
Wednesday 29 June 2022

The struggle for the "empire of God"?

 <iframe width="1019" height="573" src="https://www.youtube.com/embed/vFxJLzzZHvM" title="1683 The Siege of Vienna" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

Nothing in biology is as complex as Darwinism's relationship with the truth?

 Jason Rosenhouse and Specified Complexity

William A. Dembski

I am reviewing Jason Rosenhouse’s new book, The Failures of Mathematical Anti-Evolutionism (Cambridge University Press), serially. For the full series so far, go here.


The method for inferring design laid out in my book The Design Inference amounts to determining whether an event, object, or structure exhibits specified complexity or, equivalently, specified improbability. The term specified complexity does not actually appear in The Design Inference, where the focus is on specified improbability. Specified improbability identifies things that are improbable but also suitably patterned, or specified. Specified complexity and specified improbability are the same notion. 


To see the connection between the two terms, imagine tossing a fair coin. If you toss it thirty times, you’ll witness an event of probability 1 in 2^30, or roughly 1 in a billion. At the same time, if you record those coin tosses as bits (0 for tails, 1 for heads), that will require 30 bits. The improbability of 1 in 2^30 thus corresponds precisely to the number of bits required to identify the event. The greater the improbability, the greater the complexity. Specification then refers to the right sort of pattern that, in the presence of improbability, eliminates chance. 
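The correspondence between improbability and bit length is easy to check numerically. Here is a minimal sketch in Python (the variable names are mine, purely illustrative):

```python
import math

# Probability of one particular sequence of 30 fair-coin tosses
p = (1 / 2) ** 30

# Number of bits needed to record the sequence (0 = tails, 1 = heads)
bits = math.log2(1 / p)

print(p)     # roughly 9.3e-10, about 1 in a billion
print(bits)  # 30.0
```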


An Arrow Shot at a Target

Not all patterns eliminate chance in the presence of improbability. Take an arrow shot at a target. Let’s say the target has a bullseye. If the target is fixed and the arrow is shot at it, and if the bullseye is sufficiently small so that hitting it with the arrow is extremely improbable, then chance may rightly be eliminated as an explanation for the arrow hitting the bullseye. On the other hand, if the arrow is shot at a large wall, where the probability of hitting the wall is large, and the target is then painted around the arrow sticking in the wall so that the arrow is squarely in the bullseye, then no conclusion about whether the arrow was or was not shot by chance is possible. 


Specified improbability, or specified complexity, calls on a number of interrelated concepts. Besides a way of calculating or estimating probability and a criterion for determining whether a pattern is indeed a specification, the notion requires factoring in the number of relevant events that could occur, or what are called probabilistic resources. For example, multiple arrows allowing multiple shots will make it easier to hit the bullseye by chance. Moreover, the notion requires having a coherent rationale for determining what probability bounds may legitimately be counted as small enough to eliminate chance. Also, there’s the question of factoring in other specifications that may compete with the one originally identified, such as having two fixed targets on a wall and trying to determine whether chance could be ruled out if either of them were hit with an arrow. 
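The effect of probabilistic resources can be made concrete: with n independent shots, the chance of at least one bullseye rises from p to 1 - (1 - p)^n. A quick sketch (the single-shot probability here is a made-up number for illustration):

```python
def at_least_one_hit(p: float, n: int) -> float:
    """Probability of at least one success in n independent tries."""
    return 1 - (1 - p) ** n

p = 1e-6  # hypothetical chance that a single arrow hits the bullseye

print(at_least_one_hit(p, 1))          # one arrow: still about 1e-06
print(at_least_one_hit(p, 1_000_000))  # a million arrows: about 0.63
```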


The basic theory for explaining how specified improbability/complexity is appropriately used to infer design was laid out in The Design Inference, and then refined (in some ways simplified, in some ways extended) over time. The notion was well vetted. It was the basis for my doctoral dissertation in the philosophy of science and the foundations of probability theory — this dissertation was turned into The Design Inference. I did this work in philosophy after I had already done a doctoral dissertation in mathematics focusing on probability and chaos theory (Leo Kadanoff and Patrick Billingsley were the advisors on that dissertation). 


The manuscript for The Design Inference passed a stringent review by academic editors at Cambridge University Press, headed by Brian Skyrms, a philosopher of probability at UC Irvine and one of the few philosophers in the National Academy of Sciences. When I was a postdoc at Notre Dame in 1996–97, the philosopher Phil Quinn revealed to me that he had been a reviewer, giving Cambridge an enthusiastic thumbs up. He also told me that he had especially liked The Design Inference’s treatment of complexity theory (chapter four in the book). 


But There’s More

My colleagues Winston Ewert and Robert Marks and I have given specified complexity a rigorous formulation in terms of Kolmogorov complexity and algorithmic information theory:


Winston Ewert, William Dembski, and Robert J. Marks II (2014). “Algorithmic Specified Complexity.” In J. Bartlett, D. Hemser, and J. Hall, eds., Engineering and the Ultimate: An Interdisciplinary Investigation of Order and Design in Nature and Craft (Broken Arrow, Okla.: Blyth Institute Press).

Winston Ewert, William Dembski, and Robert J. Marks II (2015). “Algorithmic Specified Complexity in the Game of Life.” IEEE Transactions on Systems, Man, and Cybernetics: Systems 45(4): 584–594.

True to form, critics of the concept refuse to acknowledge that specified complexity is a legitimate well-defined concept. Go to the Wikipedia entry on specified complexity, and you’ll find the notion dismissed as utterly bogus. Publications on specified complexity by colleagues and me, like those just listed, are ignored and left uncited. Rosenhouse is complicit in such efforts to discredit specified complexity. 


But consider: scientists must calculate, or at least estimate, probability all the time, and that’s true even of evolutionary biologists. For instance, John Maynard Smith, back in his 1958 The Theory of Evolution, concludes that flatworms, annelids, and molluscs, representing three different phyla, must nonetheless descend from a common ancestor because their common cleavage pattern in early development “seems unlikely to have arisen independently more than once.” (Maynard Smith 1958, pp. 265–266) “Unlikely” is, of course, a synonym for “improbable.” 


Improbability by itself, however, is not enough. The events to which we assign probabilities need to be identified, and that means they must match identifiable patterns (in the Maynard Smith example, it’s the common cleavage pattern that he identified). Events exhibiting no identifiable pattern are events over which we can exercise no scientific insight and about which we can draw no scientific conclusion.


Hung Up on Specification

Even so, Rosenhouse seems especially hung up on my notion of specification, which he mistakenly defines as “independently describable” (p. 133) or “describable without any reference to the object itself” (p. 141). But nowhere does he give the actual definition of specification. To motivate our understanding of specification, I’ve used such language as “independently given” or “independently identifiable.” But these are intuitive ways of setting out the concept. Specification has a precise technical definition, of which Rosenhouse seems oblivious.


In The Design Inference, I characterized specification precisely in terms of a complexity measure that “estimates the difficulty of formulating patterns.” This measure then needs to work in tandem with a complexity bound that “fixes the level of complexity at which formulating such patterns is feasible.” (TDI, p. 144) That was in 1998. By 2005, this core idea stayed unchanged, but I preferred to use the language of descriptive complexity and minimum description length to characterize specification (see my 2005 article on Specification, published in Philosophia Christi, which Rosenhouse cites but without, again, giving the actual definition of the term specification). 


Two Notions of Complexity

So, what’s the upshot of specification according to this definition? Essentially, specified complexity or specified improbability involves two notions of complexity, one probabilistic, the other linguistic or descriptive. Thus we can speak of probabilistic complexity and descriptive complexity. Events become probabilistically more complex as they become more improbable (as noted earlier, longer, more improbable sequences of coin tosses require longer bit strings to record). Descriptive complexity, by contrast, characterizes the patterns that describe events via a descriptive language: it denotes the length of the shortest description that identifies an event. The specification in specified complexity thus refers to patterns with short descriptions, and specified complexity refers to events that have high probabilistic complexity but whose identifying patterns have low descriptive complexity. 
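Kolmogorov complexity itself is uncomputable, but compressed length gives a usable upper bound on descriptive complexity. As a rough stand-in (zlib compression is my illustrative proxy here, not a measure from the texts under discussion), a patterned sequence compresses far better than a scrambled one:

```python
import random
import zlib

def description_length(s: str) -> int:
    """Compressed length in bytes: an upper bound on how hard s is to describe."""
    return len(zlib.compress(s.encode()))

random.seed(0)
patterned = "HT" * 500                                         # short description: "HT repeated 500 times"
scrambled = "".join(random.choice("HT") for _ in range(1000))  # no short description available

print(description_length(patterned), description_length(scrambled))
```

The patterned string compresses to a handful of bytes; the scrambled one cannot be compressed nearly as far.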


To appreciate how probabilistic and descriptive complexity play off each other in specified complexity, consider the following example from poker. Take the hands corresponding to “royal flush” and “any hand.” These descriptions are roughly the same length and very short. Yet “royal flush” refers to just 4 of the 2,598,960 possible poker hands and thus describes an event of probability 4/2,598,960 = 1/649,740. “Any hand,” by contrast, allows any of the 2,598,960 poker hands and thus describes an event of probability 1. Clearly, if we witnessed a royal flush, we’d be inclined, on the basis of its short description and the low-probability event to which it corresponds, to refuse to attribute it to chance. Now granted, with all the poker that’s played worldwide, the probability of 1/649,740 is not small enough to decisively rule out its chance occurrence (in the history of poker, royal flushes have appeared by chance). But certainly we’d be less inclined to ascribe a royal flush to chance than we would any hand at all.
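The poker numbers can be verified directly:

```python
from math import comb

total_hands = comb(52, 5)   # number of distinct five-card poker hands
royal_flushes = 4           # one royal flush per suit

print(total_hands)                   # 2598960
print(total_hands // royal_flushes)  # 649740, so P(royal flush) = 1/649740
```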


The general principle illustrated in this example is that large probabilistic complexity (or low probability) and small descriptive complexity combine to yield specified complexity. Specifications are then those patterns that have small descriptive complexity. Note that it can be computationally intractable to calculate minimum description length exactly, but that often we can produce an effective estimate for it by finding a short description, which, by definition, will then constitute an upper bound for the absolute minimum. As it is, actual measures of specified complexity take the form of a negative logarithm applied to the product of a descriptive complexity measure times a probability. Because a negative logarithm makes small things big and big things small, high specified complexity corresponds to small probability multiplied with small descriptive complexity. This is how I find it easiest to keep straight how to measure specified complexity. 
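That recipe can be sketched in a few lines. This is a simplified illustration of the negative-logarithm-of-a-product idea only; the published formulations include additional factors, such as probabilistic resources:

```python
import math

def specified_complexity(prob: float, desc_measure: float) -> float:
    """Negative log2 of (descriptive-complexity measure times probability).

    Simplified sketch: high values require BOTH small probability and a
    small descriptive-complexity measure.
    """
    return -math.log2(desc_measure * prob)

# Tiny probability, very short description: high specified complexity
print(specified_complexity(2 ** -100, 4))        # 98.0

# Same tiny probability, but a huge description measure: little remains
print(specified_complexity(2 ** -100, 2 ** 90))  # 10.0
```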


Rosenhouse, however, gives no evidence of grasping specification or specified complexity in his book (pp. 137–146). For instance, he rejects the claim that the flagellum is specified, arguing that it is not “describable without any reference to the object itself,” as though that were the definition of specification. (See also p. 161.) Ultimately, it’s not a question of independent describability, but of short or low-complexity describability. I happen to think that the description “bidirectional motor-driven propeller” is an independent way of describing the flagellum because humans invented bidirectional motor-driven propellers before they found them, in the form of flagella, on the backs of E. coli and other bacteria (if something has been independently identified, then it is independently identifiable). But what specifies it is that it has a short description, not that the description could or could not be identified independently of the flagellum. By contrast, a random assortment of the protein subunits that make up the flagellum would be much harder to describe. The random assortment would therefore require a much longer description, and would thus not be specified. 


The Science Literature

The mathematical, linguistic, and computer science literature is replete with complexity measures that use description length, although the specific terminology to characterize such measures varies with field of inquiry. For instance, the abbreviation MDL, or minimum description length, has wide currency; it arises in information theory and merits its own Wikipedia entry. Likewise AIT, or algorithmic information theory, has wide currency, where the focus is on compressibility of computer programs, so that highly compressible programs are the ones with shorter descriptions. In any case, specification and specified complexity are well defined mathematical notions. Moreover, the case for specified complexity strongly implicating design when probabilistic complexity is high and descriptive complexity is low is solid. I’m happy to dispute these ideas with anyone. But in such a dispute, it will have to be these actual ideas that are under dispute. Rosenhouse, by contrast, is unengaged with these actual ideas, attributing to me a design inferential apparatus that I do not recognize, and then offering a refutation of it that is misleading and irrelevant. 


As a practical matter, it’s worth noting that most Darwinian thinkers, when confronted with the claim that various biological systems exhibit specified complexity, don’t challenge that the systems in question (like the flagellum) are specified (Dawkins in The Blind Watchmaker, for instance, never challenges specification). In fact, they are typically happy to grant that these systems are specified. The reason they give for not feeling the force of specified complexity in triggering a design inference is that, as far as they’re concerned, the probabilities aren’t small enough. And that’s because natural selection is supposed to wash away any nagging improbabilities. 


A Coin-Tossing Analogy

In a companion essay to his book for Skeptical Inquirer, Rosenhouse offers the following coin-tossing analogy to illustrate the power of Darwinian processes in overcoming apparent improbabilities:


[Creationists argue that] genes and proteins evolve through a process analogous to tossing a coin multiple times. This is untrue because there is nothing analogous to natural selection when you are tossing coins. Natural selection is a non-random process, and this fundamentally affects the probability of evolving a particular gene. To see why, suppose we toss 100 coins in the hopes of obtaining 100 heads. One approach is to throw all 100 coins at once, repeatedly, until all 100 happen to land heads at the same time. Of course, this is exceedingly unlikely to occur. An alternative approach is to flip all 100 coins, leave the ones that landed heads as they are, and then toss again only those that landed tails. We continue in this manner until all 100 coins show heads, which, under this procedure, will happen before too long. 


The latter approach to coin tossing, which retosses only the coins that landed tails, corresponds, for Rosenhouse, to Darwinian natural selection making probable for evolution what at first blush might seem improbable. Of course, the real issue here is to form reliable estimates of what the actual probabilities are even when natural selection is thrown into the mix. The work of Mike Behe and Doug Axe argues that for some biological systems (such as molecular machines and individual enzymes), natural selection does nothing to mitigate what, without it, are vast improbabilities. Some improbabilities remain extreme despite natural selection. 
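Rosenhouse’s two coin-tossing procedures are easy to simulate. In the cumulative version, the number of tails roughly halves each round, so all 100 coins show heads after only a handful of rounds, whereas tossing all 100 at once would take on the order of 2^100 attempts. A minimal sketch:

```python
import random

def rounds_with_selection(n_coins: int, rng: random.Random) -> int:
    """Rounds needed when heads are kept and only the tails are retossed."""
    rounds, tails = 0, n_coins
    while tails > 0:
        tails = sum(rng.random() < 0.5 for _ in range(tails))  # retoss only the tails
        rounds += 1
    return rounds

rng = random.Random(42)
print(rounds_with_selection(100, rng))  # typically around 7 to 10 rounds
```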


One final note before leaving specification and specified complexity. Rosenhouse suggests that in defining specified complexity as I did, I took a pre-theoretic notion as developed by origin-of-life researcher Leslie Orgel, Paul Davies, and others, and then “claim[ed] to have developed a mathematically rigorous form of the concept.” In other words, he suggests that I took a track 1 notion and claimed to turn it into a track 2 notion. Most of the time, Rosenhouse gives the impression that moving mathematical ideas from track 1 to track 2 is a good thing. But not in this case. Instead, Rosenhouse faults me for claiming that “this work constitutes a genuine contribution to science, and that [ID proponents] can use [this] work to prove that organisms are the result of intelligent design.” For Rosenhouse, “It is these claims that are problematic, to put it politely, for reasons we have already discussed.” (p. 161) 


The irony here is rich. Politeness aside, Rosenhouse’s critique of specified complexity is off the mark because he has mischaracterized its central concept, namely, specification. But what makes this passage particularly cringeworthy is that Leslie Orgel, Paul Davies, Francis Crick, and Richard Dawkins have all enthusiastically endorsed specified complexity, in one form or another, sometimes using the very term, at other times using the terms complexity and specification (or specificity) in the same breath. All of them have stressed the centrality of this concept for biology and, in particular, for understanding biological origins. 


Yet according to Rosenhouse, “These authors were all using ‘specified complexity’ in a track one sense. As a casual saying that living things are not just complex, but also embody independently-specifiable patterns, there is nothing wrong with the concept.” (p. 161) But in fact, there’s plenty wrong if this concept must forever remain merely at a pre-theoretic, or track 1, level. That’s because those who introduced the term “specified complexity” imply that the underlying concept can do a lot of heavy lifting in biology, getting at the heart of biological innovation and origins. So, if specified complexity stays forcibly confined to a pre-theoretic, or track 1, level, it becomes a stillborn concept — suggestive but ultimately fruitless. Yet given its apparent importance, the concept calls for a theoretic, or track 2, level of meaning and development. According to Rosenhouse, however, track 2 has no place for the concept. What a bizarre, unscientific attitude. 


Consider Davies from The Fifth Miracle (1999, p. 112): “Living organisms are mysterious not for their complexity per se, but for their tightly specified complexity.” Or consider Richard Dawkins in The Blind Watchmaker (1986, pp. 15–16): “We were looking for a precise way to express what we mean when we refer to something as complicated. We were trying to put a finger on what it is that humans and moles and earthworms and airliners and watches have in common with each other, but not with blancmange, or Mont Blanc, or the moon. The answer we have arrived at is that complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by random chance alone.” How can any scientist who takes such remarks seriously be content to leave specified complexity at a track 1 level?


You’re Welcome, Rosenhouse

Frankly, Rosenhouse should thank me for taking specified complexity from a track 1 concept and putting it on solid footing as a track 2 concept, clarifying what was vague and fuzzy in the pronouncements of Orgel and others about specified complexity, thereby empowering specified complexity to become a precise tool for scientific inquiry. But I suspect in waiting for such thanks, I would be waiting for the occurrence of a very small probability event. And who in their right mind does that? Well, Darwinists for one. But I’m not a Darwinist.


Next, “Evolution With and Without Multiple Simultaneous Changes.”


Editor’s note: This review is cross-posted with permission of the author from BillDembski.com.

Yet more on what 'unbelievers' need to believe.

 More on Self-Replicating Machines

Granville Sewell


In a post earlier this month, I outlined “Three Realities Chance Can’t Explain That Intelligent Design Can.” The post showed some of the problems with materialist explanations for how the four fundamental, unintelligent forces of physics alone could have rearranged the fundamental particles of physics on Earth into computers and science texts and smart phones. I drew a comparison to self-replicating machines:


[I]magine that we did somehow manage to design, say, a fleet of cars with fully automated car-building factories inside, able to produce new cars — and not just normal new cars, but new cars with fully automated car-building factories inside them. Who could seriously believe that if we left these cars alone for a long time, the accumulation of duplication errors made as they reproduced themselves would result in anything other than devolution, much less that those duplication errors could eventually be organized by selective forces into more advanced automobile models?


A More Careful Look

But I don’t think this makes sufficiently clear what a difficult task it would be to create truly self-replicating cars. So let’s look at this more carefully. We know how to build a simple Ford Model T car. Now let’s build a factory inside this car, so that it can produce Model T cars automatically. We’ll call the new car, with the Model T factory inside, a “Model U.” A car with an entire automobile factory inside, which never requires any human intervention, is far beyond our current technology, but it doesn’t seem impossible that future generations might be able to build a Model U. 


Of course, the Model U cars are not self-replicators, because they can only construct simple Model T’s. So let’s add more technology to this car so that it can build Model U’s, that is, Model T’s with car-building factories inside. This new “Model V” car, with a fully automated factory inside capable of producing Model U’s (which are themselves far beyond our current technology), would be unthinkably complex. But is this new Model V now a self-replicator? No, because it only builds the much simpler Model U. The Model V species will become extinct after two generations, because their children will be Model U’s, and their grandchildren will be infertile Model T’s! 


So Back to Work 

Each time we add technology to this car, to move it closer to the goal of reproduction, we only move the goalposts, because now we have a more complicated car to reproduce. It seems that the new models would grow exponentially in complexity, and one begins to wonder if it is even theoretically possible to create self-replicating machines. Yet we see such machines all around us in the living world. You and I are two examples. And here we have ignored the very difficult question of where these cars get the metals and rubber and other raw materials they need to supply their factories.


Of course, materialists will say that evolution didn’t create advanced self-replicating machines directly. Instead, it only took a first simple self-replicator and gradually evolved it into more and more advanced self-replicators. But besides the fact that human engineers still have no idea how to create any “simple” self-replicating machine, the point is that evolutionists are attributing to natural causes the ability to create things much more advanced than self-replicating cars (for example, self-replicating humans), which seem impossible, or virtually impossible, to design. I conceded in my earlier post (and in my video “A Summary of the Evidence for Intelligent Design”) that human engineers might someday construct a self-replicating machine. But even if they do, that will not show that life could have arisen through natural processes. It will only have shown that it could have arisen through design. 


Design by Duplication Errors

Anyway, as I wrote there, even if we could create self-replicating cars, who could seriously believe that the duplication errors made as they reproduced themselves could ever lead to major advances, or even, eventually, to intelligent, conscious machines? Surely an unimaginably complex machine like a self-replicating car could only be damaged by such errors, even when filtered through natural selection. We are so used to seeing animals and plants reproduce themselves with minimal degradation from generation to generation that we don’t realize how astonishing this really is. We really have no idea how living things are able to pass their current complex structures on to their descendants, much less how they could evolve even more complex structures.


When mathematicians have a simple, clear proof of a theorem, and a long, complicated counterargument, full of unproven assumptions and questionable arguments, we accept the simple proof, even before we find the errors in the complicated counterargument. The argument for intelligent design could not be simpler or clearer: unintelligent forces alone cannot rearrange atoms into computers and airplanes and nuclear power plants and smart phones, and any attempt to explain how they can must fail somewhere because they obviously can’t. Since many scientists are not impressed by such simple arguments, my post was an attempt to point out some of the errors in the materialist’s three-step explanation for how they could. And to say that all three steps are full of unproven assumptions and questionable arguments is quite an understatement. 


At the least, it should now be clear that while science may be able to explain everything that has happened on other planets by appealing only to the unintelligent forces of nature, trying to explain the origin and evolution of life on Earth is a much more difficult problem, and intelligent design should at least be counted among the views that are allowed to be heard. Indeed, this is already starting to happen. 

Yet another strawman bully?

 Jason Rosenhouse and “Mathematical Proof”

William A. Dembski


I am reviewing Jason Rosenhouse’s new book, The Failures of Mathematical Anti-Evolutionism (Cambridge University Press), serially. For the full series so far, go here.


A common rhetorical ploy is to overstate an opponent’s position so much that it becomes untenable and even ridiculous. Jason Rosenhouse deploys this tactic repeatedly throughout his book. Design theorists, for instance, argue that there’s good evidence to think that the bacterial flagellum is designed, and they see mathematics as relevant to making such an evidential case. Yet with reference to the flagellum, Rosenhouse writes, “Anti-evolutionists make bold, sweeping claims that some complex system [here, the flagellum] could not have arisen through evolution. They tell the world they have conclusive mathematical proof of this.” (p. 152) I am among those who have made a mathematical argument for the design of the flagellum. And so, Rosenhouse levels that charge specifically against me: “Dembski claims his methods allow him to prove mathematically that evolution has been refuted …” (p. 136)


Rosenhouse, as a mathematician, must at some level realize that he’s prevaricating. It’s one thing to use mathematics in an argument. It’s quite another to say that one is offering a mathematical proof. The latter is much, much stronger than the former, and Rosenhouse knows the difference. I’ve never said that I’m offering a mathematical proof that systems like the flagellum are designed. Mathematical proofs leave no room for fallibility or error. Intelligent design arguments use mathematics, but like all empirical arguments they fall short of the deductive certainty of mathematical proof. I can prove mathematically that 6 is a composite number by pointing to 2 and 3 as factors. I can prove mathematically that 7 is a prime number by running through all the numbers greater than 1 and less than 7, showing that none of them divide it. But no mathematical proof that the flagellum is designed exists, and no design theorist that I know has ever suggested otherwise.
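The primality example works because the check is exhaustive, leaving no room for error, which is exactly what an empirical argument cannot achieve. In code (a direct transcription of the proof sketch, not an efficient algorithm):

```python
def is_prime(n: int) -> bool:
    """Check every candidate divisor from 2 to n - 1, as in the proof sketch."""
    return n > 1 and all(n % d != 0 for d in range(2, n))

print(is_prime(7))  # True: none of 2..6 divides 7
print(is_prime(6))  # False: 2 and 3 are factors, so 6 is composite
```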


Rosenhouse’s Agenda

So, how did Rosenhouse arrive at the conclusion that I’m offering a mathematical proof of the flagellum’s design? I suspect the problem is Rosenhouse’s agenda, which is to discredit my work on intelligent design irrespective of its merit. Rosenhouse has no incentive to read my work carefully or to portray it accurately. For instance, he seizes on a probabilistic argument that I make for the flagellum’s design in my 2002 book No Free Lunch, characterizing it as a mathematical proof, and a failed one at that. But he has no possible justification for calling what I do there a mathematical proof. Note how I wrap up that argument — the very language used is as far from a mathematical proof as one can find (and I’ve proved my share of mathematical theorems, so I know):


Although it may seem as though I have cooked these numbers, in fact I have tried to be conservative with all my estimates. To be sure, there is plenty of biological work here to be done. The big challenge is to firm up these numbers and make sure they do not cheat in anybody’s favor. Getting solid, well-confirmed estimates for perturbation tolerance and perturbation identity factors [used to estimate probabilities gauging evolvability] will require careful scientific investigation. Such estimates, however, are not intractable. Perturbation tolerance factors can be assessed empirically by random substitution experiments where one, two, or a few substitutions are made. 


NO FREE LUNCH, PP. 301–302

Obviously, I’ve used mathematics here to make an argument. But equally obviously, I’m not claiming to have provided a mathematical proof. In the section where this quote appears, I’m laying out various mathematical and probabilistic techniques that can be used to make an evidential case for the flagellum’s design. It’s not a mathematical proof but an evidential argument, and not even a full-fledged evidential argument so much as a template for such an argument. In other words, I’m laying out what such an argument would look like if one filled in the biological and probabilistic details. 


All or Nothing

As such, the argument falls short of deductive certainty. Mathematical proof is all or nothing. Evidential support comes in degrees. The point of evidential arguments is to increase the degree of support for a claim, in this case for the claim that the flagellum is intelligently designed. A dispassionate reader would regard my conclusion here as measured and modest. Rosenhouse’s refutation, by contrast, is to set up a strawman, so overstating the argument that it can’t have any merit.


The reference to perturbation tolerance and perturbation identity factors here refers to the types of neighborhoods that are relevant to evolutionary pathways. Such neighborhoods and pathways were the subject of the two previous posts in this review series. These perturbation factors are probabilistic tools for investigating the evolvability of systems like the flagellum. They presuppose some technical sophistication, but their point is to try honestly to come to terms with the probabilities that are actually involved with real biological systems. 


At this point, Rosenhouse might feign shock, suggesting that I give the impression of presenting a bulletproof argument for the design of the flagellum, but that I’m now backpedaling, only to admit that the probabilistic evidence for the design of the flagellum is tentative. But here’s what’s actually happening. Mike Behe, in defining irreducible complexity, has identified a class of biological systems (those that are irreducibly complex) that resist Darwinian explanations and that implicate design. At the same time, there’s also this method for inferring design developed by Dembski. What happens if that method is applied to irreducibly complex systems? Can it infer design for such systems? That’s the question I’m trying to answer, and specifically for the flagellum.


Begging the Question?

Since the design inference, as a method, infers design by identifying what’s called specified complexity (more on this is coming up), Rosenhouse claims that my argument begs the question. Thus, I’m supposed to be presupposing that irreducible complexity makes it impossible for a system to evolve by Darwinian means. And from there I’m supposed to conclude that it must be highly improbable that it could evolve by Darwinian means (if it’s impossible, then it’s improbable). But that’s not what I’m doing. Instead, I’m using irreducible complexity as a signpost of where to look for biological improbability. Specifically, I’m using particular features of an irreducibly complex system like the bacterial flagellum to estimate probabilities related to its evolvability. I conclude, in the case of the flagellum, that those probabilities seem low and warrant a design inference. 


Now I might be wrong (that's why I say the numbers need to be firmed up and we need to make sure no one is cheating). To this day, I'm not totally happy with the actual numbers in the probability calculation for the bacterial flagellum as presented in my book No Free Lunch. But that's no reason for Rosenhouse and his fellow Darwinists to celebrate. The fact is that they have no probability estimates at all for the evolution of these systems. Worse yet, because they are so convinced that these systems evolved by Darwinian means, they know in advance, simply from their armchairs, that the probabilities must be high. The point of that section in No Free Lunch was less to perform a definitive calculation for the flagellum than to lay out the techniques for calculating probabilities in such cases (such as the perturbation probabilities). 
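To give the neighborhood idea a concrete, toy form (all parameters below are invented for illustration and are not figures from No Free Lunch): for binary strings, the fraction of sequences falling within Hamming distance k of a target can be counted exactly, and that fraction plays the role a perturbation-tolerance probability plays in the flagellum discussion.

```python
from math import comb, log2

# Toy illustration only. For length-n binary strings, the "neighborhood"
# of a target within Hamming distance k contains sum(C(n, i), i=0..k)
# strings, so a uniformly random string lands in it with that fraction.

def neighborhood_fraction(n: int, k: int) -> float:
    """Fraction of n-bit strings within Hamming distance k of a target."""
    return sum(comb(n, i) for i in range(k + 1)) / 2**n

p = neighborhood_fraction(20, 5)
print(p)          # about 0.0207: a random 20-bit string rarely lands here
print(-log2(p))   # roughly 5.6 bits of improbability
```

Real biological cases replace bit strings with amino-acid sequences and "within distance k" with "still functional," which is where the hard empirical work lies; the arithmetic pattern, though, is the same.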


In his book, Rosenhouse claims that I have “only once tried to apply [my] method to an actual biological system” (p. 137), that being to the flagellum in No Free Lunch. And, obviously, he thinks I failed in that regard. But as it is, I have applied the method elsewhere, and with more convincing numbers. See, for instance, my analysis of Doug Axe’s investigation into the evolvability of enzyme folds in my 2008 book The Design of Life (co-authored with Jonathan Wells; see chapter seven). My design inferential method yields much firmer conclusions there than for the flagellum for two reasons: (1) the numbers come from the biology as calculated by biologists (in this case, the biologist is Axe), and (2) the systems in question (small enzymatic proteins with 150 or so amino acids) are much easier to analyze than big molecular machines like the flagellum, which have tens of thousands of protein subunits. 


Hiding Behind Complexities

Darwinists have always hidden behind the complexities of biological systems. Instead of coming to terms with the complexities, they turn the tables and say: “Prove us wrong and show that these systems didn’t evolve by Darwinian means.” As always, they assume no burden of proof. Given the slipperiness of the Darwinian mechanism, in which all interesting evolution happens by co-option and coevolution, where structures and functions must both change in concert and crucial evolutionary intermediates never quite get explicitly identified, Darwinists have essentially insulated their theory from challenge. So the trick for design theorists looking to apply the design inferential method to actual biological systems is to find a Goldilocks zone in which a system is complex enough to yield design if the probabilities can be calculated and yet simple enough for the probabilities actually to be calculated. Doug Axe’s work is, in my view, the best in this respect. We’ll return to it since Axe also comes in for criticism from Rosenhouse.


Next, “Jason Rosenhouse and Specified Complexity.”


Editor’s note: This review is cross-posted with permission of the author from BillDembski.com.

Peace: JEHOVAH'S gift to his people.

Malachi 3:18 NIV "And you will again see the distinction between the righteous and the wicked, between those who serve God and those who do not."

1 John 3:10 NIV "This is how we know who the children of God are and who the children of the devil are: Anyone who does not do what is right is not God’s child, nor is anyone who does not love their brother and sister."

Micah 4:1-3 ASV "But in the latter days it shall come to pass, that the mountain of Jehovah's house shall be established on the top of the mountains, and it shall be exalted above the hills; and peoples shall flow unto it.


2 And many nations shall go and say, Come ye, and let us go up to the mountain of Jehovah, and to the house of the God of Jacob; and he will teach us of his ways, and we will walk in his paths. For out of Zion shall go forth the law, and the word of Jehovah from Jerusalem;


3 And he will judge between many peoples, and will decide concerning strong nations afar off: and they shall beat their swords into plowshares, and their spears into pruning-hooks; nation shall not lift up sword against nation, neither shall they learn war any more."

Peace is the metric by which a distinction is to be made not merely between the individual who has truly dedicated himself to JEHOVAH'S service and the one whose profession of such a dedication is questionable, but also between the people who are truly in a covenant relationship with the one true God and the churches whose profession of such a relationship cannot withstand unbiased scrutiny.