
Wednesday, 29 June 2022

Nothing in biology is as complex as Darwinism's relationship with the truth?

 Jason Rosenhouse and Specified Complexity

William A. Dembski

I am reviewing Jason Rosenhouse’s new book, The Failures of Mathematical Anti-Evolutionism (Cambridge University Press), serially. For the full series so far, go here.


The method for inferring design laid out in my book The Design Inference amounts to determining whether an event, object, or structure exhibits specified complexity or, equivalently, specified improbability. The term specified complexity does not actually appear in The Design Inference, where the focus is on specified improbability. Specified improbability identifies things that are improbable but also suitably patterned, or specified. Specified complexity and specified improbability are the same notion. 


To see the connection between the two terms, imagine tossing a fair coin. If you toss it thirty times, you’ll witness an event of probability 1 in 2^30, or roughly 1 in a billion. At the same time, if you record those coin tosses as bits (0 for tails, 1 for heads), that will require 30 bits. The improbability of 1 in 2^30 thus corresponds precisely to the number of bits required to identify the event. The greater the improbability, the greater the complexity. Specification then refers to the right sort of pattern that, in the presence of improbability, eliminates chance. 
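A minimal sketch in Python, using just the numbers above, makes the correspondence concrete:

```python
import math

# Probability of one particular sequence of 30 fair-coin tosses.
p = 0.5 ** 30

# Probabilistic complexity in bits: the negative logarithm, base 2.
bits = -math.log2(p)

print(p)     # ~9.3e-10, roughly 1 in a billion
print(bits)  # 30.0 -- one bit per toss
```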


An Arrow Shot at a Target

Not all patterns eliminate chance in the presence of improbability. Take an arrow shot at a target. Let’s say the target has a bullseye. If the target is fixed and the arrow is shot at it, and if the bullseye is sufficiently small so that hitting it with the arrow is extremely improbable, then chance may rightly be eliminated as an explanation for the arrow hitting the bullseye. On the other hand, if the arrow is shot at a large wall, where the probability of hitting the wall is large, and the target is then painted around the arrow sticking in the wall so that the arrow is squarely in the bullseye, then no conclusion about whether the arrow was or was not shot by chance is possible. 


Specified improbability, or specified complexity, calls on a number of interrelated concepts. Besides a way of calculating or estimating probability and a criterion for determining whether a pattern is indeed a specification, the notion requires factoring in the number of relevant events that could occur, or what are called probabilistic resources. For example, multiple arrows allowing multiple shots will make it easier to hit the bullseye by chance. Moreover, the notion requires having a coherent rationale for determining what probability bounds may legitimately be counted as small enough to eliminate chance. Also, there’s the question of factoring in other specifications that may compete with the one originally identified, such as having two fixed targets on a wall and trying to determine whether chance could be ruled out if either of them were hit with an arrow. 
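To see how probabilistic resources bear on the arithmetic: if a single arrow hits the bullseye with probability p, then n independent shots hit it at least once with probability 1 - (1 - p)^n. A minimal sketch, with purely illustrative numbers:

```python
p = 1e-6      # chance that one arrow hits the bullseye (illustrative)
n = 100_000   # number of arrows available -- the probabilistic resources

# Probability that at least one of the n shots hits the bullseye.
p_at_least_one = 1 - (1 - p) ** n
print(p_at_least_one)  # ~0.095, vastly larger than a single shot's 1e-6
```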


The basic theory for explaining how specified improbability/complexity is appropriately used to infer design was laid out in The Design Inference, and then refined (in some ways simplified, in some ways extended) over time. The notion was well vetted. It was the basis for my doctoral dissertation in the philosophy of science and the foundations of probability theory — this dissertation was turned into The Design Inference. I did this work in philosophy after I had already done a doctoral dissertation in mathematics focusing on probability and chaos theory (Leo Kadanoff and Patrick Billingsley were the advisors on that dissertation). 


The manuscript for The Design Inference passed a stringent review by academic editors at Cambridge University Press, headed by Brian Skyrms, a philosopher of probability at UC Irvine, and one of the few philosophers to be in the National Academy of Sciences. When I was a postdoc at Notre Dame in 1996–97, the philosopher Phil Quinn revealed to me that he had been a reviewer, giving Cambridge an enthusiastic thumbs up. He also told me that he had especially liked The Design Inference’s treatment of complexity theory (chapter four in the book). 


But There’s More

My colleagues Winston Ewert and Robert Marks and I have given specified complexity a rigorous formulation in terms of Kolmogorov complexity and algorithmic information theory:


Winston Ewert, William A. Dembski, and Robert J. Marks II (2014). “Algorithmic Specified Complexity.” In J. Bartlett, D. Halsmer, and J. Hall, eds., Engineering and the Ultimate: An Interdisciplinary Investigation of Order and Design in Nature and Craft (Broken Arrow, Okla.: Blyth Institute Press).

Winston Ewert, William A. Dembski, and Robert J. Marks II (2015). “Algorithmic Specified Complexity in the Game of Life.” IEEE Transactions on Systems, Man, and Cybernetics: Systems 45(4), 584–594.

True to form, critics of the concept refuse to acknowledge that specified complexity is a legitimate, well-defined concept. Go to the Wikipedia entry on specified complexity, and you’ll find the notion dismissed as utterly bogus. Publications on specified complexity by my colleagues and me, like those just listed, are ignored and left uncited. Rosenhouse is complicit in such efforts to discredit specified complexity. 


But consider: scientists must calculate, or at least estimate, probability all the time, and that’s true even of evolutionary biologists. For instance, John Maynard Smith, back in his 1958 book The Theory of Evolution, concludes that flatworms, annelids, and molluscs, representing three different phyla, must nonetheless descend from a common ancestor because their common cleavage pattern in early development “seems unlikely to have arisen independently more than once.” (Maynard Smith, pp. 265–266) “Unlikely” is, of course, a synonym for “improbable.” 


Improbability by itself, however, is not enough. The events to which we assign probabilities need to be identified, and that means they must match identifiable patterns (in the Maynard Smith example, it’s the common cleavage pattern that he identified). Events exhibiting no identifiable pattern are events over which we can exercise no scientific insight and about which we can draw no scientific conclusion.


Hung Up on Specification

Even so, Rosenhouse seems especially hung up on my notion of specification, which he mistakenly defines as “independently describable” (p. 133) or “describable without any reference to the object itself” (p. 141). But nowhere does he give the actual definition of specification. To motivate our understanding of specification, I’ve used such language as “independently given” or “independently identifiable.” But these are intuitive ways of setting out the concept. Specification has a precise technical definition, of which Rosenhouse seems oblivious.


In The Design Inference, I characterized specification precisely in terms of a complexity measure that “estimates the difficulty of formulating patterns.” This measure then needs to work in tandem with a complexity bound that “fixes the level of complexity at which formulating such patterns is feasible.” (TDI, p. 144) That was in 1998. By 2005, this core idea stayed unchanged, but I preferred to use the language of descriptive complexity and minimum description length to characterize specification (see my 2005 article on Specification, published in Philosophia Christi, which Rosenhouse cites but without, again, giving the actual definition of the term specification). 


Two Notions of Complexity

So, what’s the upshot of specification according to this definition? Essentially, specified complexity or specified improbability involves two notions of complexity, one probabilistic, the other linguistic or descriptive. Thus we can speak of probabilistic complexity and descriptive complexity. Events become probabilistically more complex as they become more improbable (consistent with the earlier point that longer, more improbable sequences of coin tosses require longer bit strings to be recorded). Descriptive complexity, by contrast, characterizes the patterns that describe events via a descriptive language: it is the length of the shortest description that identifies an event. The specification in specified complexity thus refers to patterns with short descriptions, and specified complexity refers to events that have high probabilistic complexity but whose identifying patterns have low descriptive complexity. 


To appreciate how probabilistic and descriptive complexity play off each other in specified complexity, consider the following example from poker. Take the hands corresponding to “royal flush” and “any hand.” These descriptions are roughly the same length and very short. Yet “royal flush” picks out only 4 of the 2,598,960 possible poker hands and thus describes an event of probability 4/2,598,960 = 1/649,740. “Any hand,” by contrast, allows any of the 2,598,960 possible hands, and thus describes an event of probability 1. Clearly, if we witnessed a royal flush, we’d be inclined, on the basis of its short description and the low-probability event to which it corresponds, to refuse to attribute it to chance. Now granted, with all the poker that’s played worldwide, the probability of 1/649,740 is not small enough to decisively rule out its chance occurrence (in the history of poker, royal flushes have appeared by chance). But certainly we’d be less inclined to ascribe a royal flush to chance than we would any hand at all.
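The arithmetic here is easy to check; a minimal sketch using Python’s standard library:

```python
from math import comb

total_hands = comb(52, 5)   # all five-card poker hands: 2,598,960
royal_flushes = 4           # one royal flush per suit

print(total_hands)                   # 2598960
print(royal_flushes / total_hands)   # 1/649,740, about 1.54e-06
```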


The general principle illustrated in this example is that large probabilistic complexity (or low probability) and small descriptive complexity combine to yield specified complexity. Specifications are then those patterns that have small descriptive complexity. Note that it can be computationally intractable to calculate minimum description length exactly, but often we can produce an effective estimate for it by finding a short description, which, by definition, will then constitute an upper bound for the absolute minimum. As it is, actual measures of specified complexity take the form of a negative logarithm applied to the product of a descriptive complexity measure and a probability. Because a negative logarithm makes small things big and big things small, high specified complexity corresponds to a small probability multiplied by a small descriptive complexity. This is how I find it easiest to keep straight how to measure specified complexity. 
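As a rough sketch of this measure (simplified: descriptive complexity is modeled here as 2 raised to the description length in bits, an assumption made only for illustration, not the exact technical formula):

```python
import math

def specified_complexity(prob, description_bits):
    """-log2(descriptive complexity x probability), with descriptive
    complexity modeled as 2**description_bits for illustration."""
    return -math.log2((2 ** description_bits) * prob)

# "Royal flush": short description, low probability -> high value.
print(specified_complexity(1 / 649_740, 10))  # ~9.3 bits

# "Any hand": equally short description, probability 1 -> negative value.
print(specified_complexity(1.0, 10))          # -10.0
```

On this toy scale, only events whose improbability outruns their description length score positively.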


Rosenhouse, however, gives no evidence of grasping specification or specified complexity in his book (pp. 137–146). For instance, he rejects the claim that the flagellum is specified because it is not “describable without any reference to the object itself,” as though that were the definition of specification. (See also p. 161.) Ultimately, it’s not a question of independent describability, but of short or low-complexity describability. I happen to think that the description “bidirectional motor-driven propeller” is an independent way of describing the flagellum because humans invented bidirectional motor-driven propellers before they found them, in the form of flagella, on the backs of E. coli and other bacteria (if something has been independently identified, then it is independently identifiable). But what specifies the flagellum is that it has a short description, not that the description could or could not be identified independently of the flagellum. By contrast, a random assortment of the protein subunits that make up the flagellum would be much harder to describe. The random assortment would therefore require a much longer description, and would thus not be specified. 


The Science Literature

The mathematical, linguistic, and computer science literature is replete with complexity measures that use description length, although the specific terminology for such measures varies with the field of inquiry. For instance, the abbreviation MDL, or minimum description length, has wide currency; it arises in information theory and merits its own Wikipedia entry. Likewise AIT, or algorithmic information theory, has wide currency; there the focus is on the compressibility of computer programs, so that highly compressible programs are the ones with shorter descriptions. In any case, specification and specified complexity are well-defined mathematical notions. Moreover, the case that specified complexity strongly implicates design when probabilistic complexity is high and descriptive complexity is low is solid. I’m happy to dispute these ideas with anyone. But in such a dispute, it will have to be these actual ideas that are under dispute. Rosenhouse, by contrast, is unengaged with these actual ideas, attributing to me a design-inferential apparatus that I do not recognize, and then offering a refutation of it that is misleading and irrelevant. 


As a practical matter, it’s worth noting that most Darwinian thinkers, when confronted with the claim that various biological systems exhibit specified complexity, don’t challenge that the systems in question (like the flagellum) are specified (Dawkins in The Blind Watchmaker, for instance, never challenges specification). In fact, they are typically happy to grant that these systems are specified. The reason they give for not feeling the force of specified complexity in triggering a design inference is that, as far as they’re concerned, the probabilities aren’t small enough. And that’s because natural selection is supposed to wash away any nagging improbabilities. 


A Coin-Tossing Analogy

In a companion essay to his book for Skeptical Inquirer, Rosenhouse offers the following coin-tossing analogy to illustrate the power of Darwinian processes in overcoming apparent improbabilities:


[Creationists argue that] genes and proteins evolve through a process analogous to tossing a coin multiple times. This is untrue because there is nothing analogous to natural selection when you are tossing coins. Natural selection is a non-random process, and this fundamentally affects the probability of evolving a particular gene. To see why, suppose we toss 100 coins in the hopes of obtaining 100 heads. One approach is to throw all 100 coins at once, repeatedly, until all 100 happen to land heads at the same time. Of course, this is exceedingly unlikely to occur. An alternative approach is to flip all 100 coins, leave the ones that landed heads as they are, and then toss again only those that landed tails. We continue in this manner until all 100 coins show heads, which, under this procedure, will happen before too long. 


The latter approach to coin tossing, which retosses only the coins that landed tails, corresponds, for Rosenhouse, to Darwinian natural selection making probable for evolution what at first blush might seem improbable. Of course, the real issue here is to form reliable estimates of what the actual probabilities are even when natural selection is thrown into the mix. The work of Mike Behe and Doug Axe argues that for some biological systems (such as molecular machines and individual enzymes), natural selection does nothing to mitigate what, without it, are vast improbabilities. Some improbabilities remain extreme despite natural selection. 
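For what it’s worth, the quoted coin procedure is easy to simulate; here is a minimal sketch (it illustrates the mathematics of the analogy itself, not its biological relevance, which is precisely what’s in dispute):

```python
import random

def rounds_until_all_heads(n=100):
    """Retoss only the coins showing tails, per the quoted procedure."""
    coins = [False] * n   # False means tails
    rounds = 0
    while not all(coins):
        coins = [c or random.random() < 0.5 for c in coins]
        rounds += 1
    return rounds

print(rounds_until_all_heads())  # typically under 10 rounds

# By contrast, retossing all 100 coins at once and waiting for
# simultaneous heads would take about 2**100 rounds on average.
```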


One final note before leaving specification and specified complexity. Rosenhouse suggests that in defining specified complexity as I did, I took a pre-theoretic notion as developed by origin-of-life researcher Leslie Orgel, Paul Davies, and others, and then “claim[ed] to have developed a mathematically rigorous form of the concept.” In other words, he suggests that I took a track 1 notion and claimed to turn it into a track 2 notion. Most of the time, Rosenhouse gives the impression that moving mathematical ideas from track 1 to track 2 is a good thing. But not in this case. Instead, Rosenhouse faults me for claiming that “this work constitutes a genuine contribution to science, and that [ID proponents] can use [this] work to prove that organisms are the result of intelligent design.” For Rosenhouse, “It is these claims that are problematic, to put it politely, for reasons we have already discussed.” (p. 161) 


The irony here is rich. Politeness aside, Rosenhouse’s critique of specified complexity is off the mark because he has mischaracterized its central concept, namely, specification. But what makes this passage particularly cringeworthy is that Leslie Orgel, Paul Davies, Francis Crick, and Richard Dawkins have all enthusiastically endorsed specified complexity, in one form or another, sometimes using the very term, at other times using the terms complexity and specification (or specificity) in the same breath. All of them have stressed the centrality of this concept for biology and, in particular, for understanding biological origins. 


Yet according to Rosenhouse, “These authors were all using ‘specified complexity’ in a track one sense. As a casual saying that living things are not just complex, but also embody independently-specifiable patterns, there is nothing wrong with the concept.” (p. 161) But in fact, there’s plenty wrong if this concept must forever remain merely at a pre-theoretic, or track 1, level. That’s because those who introduced the term “specified complexity” imply that the underlying concept can do a lot of heavy lifting in biology, getting at the heart of biological innovation and origins. So, if specified complexity stays forcibly confined to a pre-theoretic, or track 1, level, it becomes a stillborn concept — suggestive but ultimately fruitless. Yet given its apparent importance, the concept calls for a theoretic, or track 2, level of meaning and development. According to Rosenhouse, however, track 2 has no place for the concept. What a bizarre, unscientific attitude. 


Consider Davies from The Fifth Miracle (1999, p. 112): “Living organisms are mysterious not for their complexity per se, but for their tightly specified complexity.” Or consider Richard Dawkins in The Blind Watchmaker (1986, pp. 15–16): “We were looking for a precise way to express what we mean when we refer to something as complicated. We were trying to put a finger on what it is that humans and moles and earthworms and airliners and watches have in common with each other, but not with blancmange, or Mont Blanc, or the moon. The answer we have arrived at is that complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by random chance alone.” How can any scientist who takes such remarks seriously be content to leave specified complexity at a track 1 level?


You’re Welcome, Rosenhouse

Frankly, Rosenhouse should thank me for taking specified complexity from a track 1 concept and putting it on solid footing as a track 2 concept, clarifying what was vague and fuzzy in the pronouncements of Orgel and others about specified complexity, thereby empowering specified complexity to become a precise tool for scientific inquiry. But I suspect in waiting for such thanks, I would be waiting for the occurrence of a very small probability event. And who in their right mind does that? Well, Darwinists for one. But I’m not a Darwinist.


Next, “Evolution With and Without Multiple Simultaneous Changes.”


Editor’s note: This review is cross-posted with permission of the author from BillDembski.com.

Yet more on what 'unbelievers' need to believe.

 More on Self-Replicating Machines

Granville Sewell


In a post earlier this month, I outlined “Three Realities Chance Can’t Explain That Intelligent Design Can.” The post showed some of the problems with materialist explanations for how the four fundamental, unintelligent forces of physics alone could have rearranged the fundamental particles of physics on Earth into computers and science texts and smart phones. I drew a comparison to self-replicating machines:


[I]magine that we did somehow manage to design, say, a fleet of cars with fully automated car-building factories inside, able to produce new cars — and not just normal new cars, but new cars with fully automated car-building factories inside them. Who could seriously believe that if we left these cars alone for a long time, the accumulation of duplication errors made as they reproduced themselves would result in anything other than devolution, much less that these errors could eventually be organized by selective forces into more advanced automobile models?


A More Careful Look

But I don’t think this makes sufficiently clear what a difficult task it would be to create truly self-replicating cars. So let’s look at this more carefully. We know how to build a simple Ford Model T car. Now let’s build a factory inside this car, so that it can produce Model T cars automatically. We’ll call the new car, with the Model T factory inside, a “Model U.” A car with an entire automobile factory inside, which never requires any human intervention, is far beyond our current technology, but it doesn’t seem impossible that future generations might be able to build a Model U. 


Of course, the Model U cars are not self-replicators, because they can only construct simple Model T’s. So let’s add more technology to this car so that it can build Model U’s, that is, Model T’s with car-building factories inside. This new “Model V” car, with a fully automated factory inside capable of producing Model U’s (which are themselves far beyond our current technology), would be unthinkably complex. But is this new Model V now a self-replicator? No, because it only builds the much simpler Model U. The Model V species will become extinct after two generations, because their children will be Model U’s, and their grandchildren will be infertile Model T’s! 


So Back to Work 

Each time we add technology to this car, to move it closer to the goal of reproduction, we only move the goalposts, because now we have a more complicated car to reproduce. It seems that the new models would grow exponentially in complexity, and one begins to wonder if it is even theoretically possible to create self-replicating machines. Yet we see such machines all around us in the living world. You and I are two examples. And here we have ignored the very difficult question of where these cars get the metals and rubber and other raw materials they need to supply their factories.


Of course, materialists will say that evolution didn’t create advanced self-replicating machines directly. Instead, it only took a first simple self-replicator and gradually evolved it into more and more advanced self-replicators. But besides the fact that human engineers still have no idea how to create any “simple” self-replicating machine, the point is that evolutionists are attributing to natural causes the ability to create things much more advanced than self-replicating cars (for example, self-replicating humans), which seem impossible, or virtually impossible, to design. I conceded in my earlier post (and in my video “A Summary of the Evidence for Intelligent Design”) that human engineers might someday construct a self-replicating machine. But even if they do, that will not show that life could have arisen through natural processes. It will only have shown that it could have arisen through design. 


Design by Duplication Errors

Anyway, as I wrote there, even if we could create self-replicating cars, who could seriously believe that the duplication errors made as they reproduced themselves could ever lead to major advances, or eventually even to intelligent, conscious machines? Surely an unimaginably complex machine like a self-replicating car could only be damaged by such errors, even when filtered through natural selection. We are so used to seeing animals and plants reproduce themselves with minimal degradation from generation to generation that we don’t realize how astonishing this really is. We really have no idea how living things are able to pass their current complex structures on to their descendants, much less how they could evolve even more complex structures.


When mathematicians have a simple, clear proof of a theorem and a long, complicated counterargument full of unproven assumptions and questionable steps, they accept the simple proof, even before finding the errors in the complicated counterargument. The argument for intelligent design could not be simpler or clearer: unintelligent forces alone cannot rearrange atoms into computers and airplanes and nuclear power plants and smart phones, and any attempt to explain how they can must fail somewhere because they obviously can’t. Since many scientists are not impressed by such simple arguments, my post was an attempt to point out some of the errors in the materialist’s three-step explanation for how they could. And to say that all three steps are full of unproven assumptions and questionable arguments is quite an understatement. 


At the least, it should now be clear that while science may be able to explain everything that has happened on other planets by appealing only to the unintelligent forces of nature, trying to explain the origin and evolution of life on Earth is a much more difficult problem, and intelligent design should at least be counted among the views that are allowed to be heard. Indeed, this is already starting to happen. 

Yet another strawman bully?

 Jason Rosenhouse and “Mathematical Proof”

William A. Dembski


I am reviewing Jason Rosenhouse’s new book, The Failures of Mathematical Anti-Evolutionism (Cambridge University Press), serially. For the full series so far, go here.


A common rhetorical ploy is to overstate an opponent’s position so much that it becomes untenable and even ridiculous. Jason Rosenhouse deploys this tactic repeatedly throughout his book. Design theorists, for instance, argue that there’s good evidence to think that the bacterial flagellum is designed, and they see mathematics as relevant to making such an evidential case. Yet with reference to the flagellum, Rosenhouse writes, “Anti-evolutionists make bold, sweeping claims that some complex system [here, the flagellum] could not have arisen through evolution. They tell the world they have conclusive mathematical proof of this.” (p. 152) I am among those who have made a mathematical argument for the design of the flagellum. And so, Rosenhouse levels that charge specifically against me: “Dembski claims his methods allow him to prove mathematically that evolution has been refuted …” (p. 136)


Rosenhouse, as a mathematician, must at some level realize that he’s prevaricating. It’s one thing to use mathematics in an argument. It’s quite another to say that one is offering a mathematical proof. The latter is much, much stronger than the former, and Rosenhouse knows the difference. I’ve never said that I’m offering a mathematical proof that systems like the flagellum are designed. Mathematical proofs leave no room for fallibility or error. Intelligent design arguments use mathematics, but like all empirical arguments they fall short of the deductive certainty of mathematical proof. I can prove mathematically that 6 is a composite number by pointing to 2 and 3 as factors. I can prove mathematically that 7 is a prime number by running through all the numbers greater than 1 and less than 7, showing that none of them divide it. But no mathematical proof that the flagellum is designed exists, and no design theorist that I know has ever suggested otherwise.


Rosenhouse’s Agenda

So, how did Rosenhouse arrive at the conclusion that I’m offering a mathematical proof of the flagellum’s design? I suspect the problem is Rosenhouse’s agenda, which is to discredit my work on intelligent design irrespective of its merit. Rosenhouse has no incentive to read my work carefully or to portray it accurately. For instance, he seizes on a probabilistic argument that I make for the flagellum’s design in my 2002 book No Free Lunch, characterizing it as a mathematical proof, and a failed one at that. But he has no possible justification for calling what I do there a mathematical proof. Note how I wrap up that argument — the very language used is as far from a mathematical proof as one can find (and I’ve proved my share of mathematical theorems, so I know):


Although it may seem as though I have cooked these numbers, in fact I have tried to be conservative with all my estimates. To be sure, there is plenty of biological work here to be done. The big challenge is to firm up these numbers and make sure they do not cheat in anybody’s favor. Getting solid, well-confirmed estimates for perturbation tolerance and perturbation identity factors [used to estimate probabilities gauging evolvability] will require careful scientific investigation. Such estimates, however, are not intractable. Perturbation tolerance factors can be assessed empirically by random substitution experiments where one, two, or a few substitutions are made. 


NO FREE LUNCH, PP. 301–302

Obviously, I’ve used mathematics here to make an argument. But equally obviously, I’m not claiming to have provided a mathematical proof. In the section where this quote appears, I’m laying out various mathematical and probabilistic techniques that can be used to make an evidential case for the flagellum’s design. It’s not a mathematical proof but an evidential argument, and not even a full-fledged evidential argument so much as a template for such an argument. In other words, I’m laying out what such an argument would look like if one filled in the biological and probabilistic details. 


All or Nothing

As such, the argument falls short of deductive certainty. Mathematical proof is all or nothing. Evidential support comes in degrees. The point of evidential arguments is to increase the degree of support for a claim, in this case for the claim that the flagellum is intelligently designed. A dispassionate reader would regard my conclusion here as measured and modest. Rosenhouse’s refutation, by contrast, is to set up a strawman, so overstating the argument that it can’t have any merit.


The reference to perturbation tolerance and perturbation identity factors here refers to the types of neighborhoods that are relevant to evolutionary pathways. Such neighborhoods and pathways were the subject of the two previous posts in this review series. These perturbation factors are probabilistic tools for investigating the evolvability of systems like the flagellum. They presuppose some technical sophistication, but their point is to try honestly to come to terms with the probabilities that are actually involved with real biological systems. 


At this point, Rosenhouse might feign shock, suggesting that I give the impression of presenting a bulletproof argument for the design of the flagellum, but that I’m now backpedaling, only to admit that the probabilistic evidence for the design of the flagellum is tentative. But here’s what’s actually happening. Mike Behe, in defining irreducible complexity, has identified a class of biological systems (those that are irreducibly complex) that resist Darwinian explanations and that implicate design. At the same time, there’s also this method for inferring design developed by Dembski. What happens if that method is applied to irreducibly complex systems? Can it infer design for such systems? That’s the question I’m trying to answer, and specifically for the flagellum.


Begging the Question?

Since the design inference, as a method, infers design by identifying what’s called specified complexity (more on this is coming up), Rosenhouse claims that my argument begs the question. Thus, I’m supposed to be presupposing that irreducible complexity makes it impossible for a system to evolve by Darwinian means. And from there I’m supposed to conclude that it must be highly improbable that it could evolve by Darwinian means (if it’s impossible, then it’s improbable). But that’s not what I’m doing. Instead, I’m using irreducible complexity as a signpost of where to look for biological improbability. Specifically, I’m using particular features of an irreducibly complex system like the bacterial flagellum to estimate probabilities related to its evolvability. I conclude, in the case of the flagellum, that those probabilities seem low and warrant a design inference. 


Now I might be wrong (that’s why I say the numbers need to be firmed up and we need to make sure no one is cheating). To this day, I’m not totally happy with the actual numbers in the probability calculation for the bacterial flagellum as presented in my book No Free Lunch. But that’s no reason for Rosenhouse and his fellow Darwinists to celebrate. The fact is that they have no probability estimates at all for the evolution of these systems. Worse yet, because they are so convinced that these systems evolved by Darwinian means, they know in advance, simply from their armchairs, that the probabilities must be high. The point of that section in No Free Lunch was less to do a definitive calculation for the flagellum than to lay out the techniques for calculating probabilities in such cases (such as the perturbation probabilities). 


In his book, Rosenhouse claims that I have “only once tried to apply [my] method to an actual biological system” (p. 137), that being to the flagellum in No Free Lunch. And, obviously, he thinks I failed in that regard. But as it is, I have applied the method elsewhere, and with more convincing numbers. See, for instance, my analysis of Doug Axe’s investigation into the evolvability of enzyme folds in my 2008 book The Design of Life (co-authored with Jonathan Wells; see chapter seven). My design inferential method yields much firmer conclusions there than for the flagellum for two reasons: (1) the numbers come from the biology as calculated by biologists (in this case, the biologist is Axe), and (2) the systems in question (small enzymatic proteins with 150 or so amino acids) are much easier to analyze than big molecular machines like the flagellum, which have tens of thousands of protein subunits. 


Hiding Behind Complexities

Darwinists have always hidden behind the complexities of biological systems. Instead of coming to terms with the complexities, they turn the tables and say: “Prove us wrong and show that these systems didn’t evolve by Darwinian means.” As always, they assume no burden of proof. Given the slipperiness of the Darwinian mechanism, in which all interesting evolution happens by co-option and coevolution, where structures and functions must both change in concert and crucial evolutionary intermediates never quite get explicitly identified, Darwinists have essentially insulated their theory from challenge. So the trick for design theorists looking to apply the design inferential method to actual biological systems is to find a Goldilocks zone in which a system is complex enough to yield design if the probabilities can be calculated and yet simple enough for the probabilities actually to be calculated. Doug Axe’s work is, in my view, the best in this respect. We’ll return to it since Axe also comes in for criticism from Rosenhouse.


Next, “Jason Rosenhouse and Specified Complexity.”


Editor’s note: This review is cross-posted with permission of the author from BillDembski.com.

Peace: JEHOVAH'S gift to his people.

Malachi 3:18 NIV "And you will again see the distinction between the righteous and the wicked, between those who serve God and those who do not."

1 John 3:10 NIV "This is how we know who the children of God are and who the children of the devil are: Anyone who does not do what is right is not God's child, nor is anyone who does not love their brother and sister."

Micah 4:1-3 ASV "But in the latter days it shall come to pass, that the mountain of Jehovah's house shall be established on the top of the mountains, and it shall be exalted above the hills; and peoples shall flow unto it.

2 And many nations shall go and say, Come ye, and let us go up to the mountain of Jehovah, and to the house of the God of Jacob; and he will teach us of his ways, and we will walk in his paths. For out of Zion shall go forth the law, and the word of Jehovah from Jerusalem;

3 and he will judge between many peoples, and will decide concerning strong nations afar off: and they shall beat their swords into plowshares, and their spears into pruning-hooks; nation shall not lift up sword against nation, neither shall they learn war any more."

Peace is the metric by which a distinction is to be made, not merely between the individual who has truly dedicated himself to JEHOVAH'S service and the one whose profession of such a dedication is questionable, but also between the people who are truly in a covenant relationship with the one true God and the churches whose profession of such a relationship cannot withstand unbiased scrutiny. 

Wednesday, 22 June 2022

Darwinism's deafening silence on a plausible path to new organs.

 The Silence of the Evolutionary Biologists

William A. Dembski

I am reviewing Jason Rosenhouse’s new book, The Failures of Mathematical Anti-Evolutionism (Cambridge University Press), serially. For the full series so far, go here.


The Darwinian community has been strikingly unsuccessful in showing how complex biological adaptations evolved, or even how they might have evolved, in terms of detailed step-by-step pathways between different structures performing different functions (pathways that must exist if Darwinian evolution holds). Jason Rosenhouse admits the problem when he says that Darwinians lack “direct evidence” of evolution and must instead depend on “circumstantial evidence.” (pp. 47–48) He elaborates: “As compelling as the circumstantial evidence for evolution is, it would be better to have direct experimental confirmation. Sadly, that is impossible. We have only the one run of evolution on this planet to study, and most of the really cool stuff happened long ago.” (p. 208) How very convenient. 


Design theorists see the lack of direct evidence for Darwinian processes creating all that “cool stuff” — in the ancient past no less — as a problem for Darwinism. Moreover, they are unimpressed with the circumstantial evidence that convinces Darwinists that Darwin got it right. Rosenhouse, for instance, smugly informs his readers that “eye evolution is no longer considered to be especially mysterious.” (p. 54) But the human eye, and the visual cortex with which it is integrated, are not even remotely well enough understood to underwrite a realistic model of how the human eye might have evolved. The details of eye evolution, if such details even exist, remain utterly mysterious.


A Crude Similarity Metric

Instead, Rosenhouse does the only thing that Darwinists can do when confronted with the eye: point out that eyes of many different complexities exist in nature, relate them according to some crude similarity metric (whether structurally or genetically), and then simply posit that gradual step-by-step evolutionary paths connecting them exist (perhaps by drawing arrows to connect similar eyes). Sure, Darwinists can produce endearing computer models of eye evolution (what two virtual objects can’t be made to evolve into each other on a computer?). And they can look for homologous genes and proteins among differing eyes (big surprise that similar structures may use similar proteins). But eyes have to be built in embryological development, and eyes evolving by Darwinian means need a step-by-step path to get from one to the other. No such details are ever forthcoming. Credulity is the sin of Darwinists.


Intelligent design’s scientific program can thus, at least in part, be viewed as an attempt to unmask Darwinist credulity. The task, accordingly, is to find complex biological systems that convincingly resist a gradual step-by-step evolution. Alternatively, it is to find systems that strongly implicate evolutionary discontinuity with respect to the Darwinian mechanism because their evolution can be seen to require multiple coordinated mutations that cannot be reduced to small mutational steps. Michael Behe’s irreducibly complex molecular machines, such as the bacterial flagellum, described in his 1996 book Darwin’s Black Box, provided a rich set of examples of such evolutionary discontinuity. By definition, a system is irreducibly complex if it has core components such that removing any one of them causes the system to lose its original function.


No Plausible Pathways

Interestingly, in the two and a half decades since Behe published that book, no convincing, or even plausible, detailed Darwinian pathways have been put forward to explain the evolution of these irreducibly complex systems. The silence of evolutionary biologists in laying out such pathways is complete. Which is not to say that they are silent on this topic. Darwinian biologists continue to proclaim that irreducibly complex biochemical systems like the bacterial flagellum have evolved and that intelligent design is wrong to regard them as designed. But such talk lacks scientific substance.


Next, “From Darwinists, a Shift in Tone on Nanomachines.”


Editor’s note: This review is cross-posted with permission of the author from BillDembski.com.

For Darwinism humor is no laughing matter.

 There’s Nothing Funny About Evolution

Geoffrey Simmons


Much like the genetic blueprints given to each of us at conception (blueprints for pumping blood, exchanging carbon dioxide for oxygen, digesting food, eliminating waste, and retaining memories), we come with a built-in sense of humor. Could our sense of humor have evolved, meaning come about by millions of tiny, modifying, successive steps over millions of years? Or did it arrive in one lump sum, by design? There are good reasons to suspect the latter. But first some background musings.


For one thing, genetic studies suggest that people with a better sense of humor tend to carry the shorter allele of 5-HTTLPR, a variable region of the serotonin transporter gene. In addition, we know there are many physiological benefits to laughter. Oxygenation is increased, cardiac function is improved, stress hormones, such as cortisol and adrenaline, are reduced, the immune system is charged up, and the dopaminergic system, which fights depression, is strengthened.


Norman Cousins, a past adjunct professor at UCLA, wrote in his book Anatomy of an Illness as Perceived by the Patient, and in an article in The New England Journal of Medicine, about how he lowered his pain levels from ankylosing spondylitis from a 10 to a 2. Ten minutes of laughter gave him two hours of pain-free sleep. Much of this laughter came from watching TV. Nowadays, if one is over 13 years old, one might need to find a different medium.


We’re told that laughing 100 times is equal to 10 minutes on a rowing machine or 15 minutes on an exercise bike. Perhaps one could frequent a comedy club nightly and skip those painful, daily exercises. Humor helps us when times are stressful, when we’re courting, and when we’re depressed. Students enjoy their teachers, pay more attention, and remember more information when humor is added to classroom instruction. Humor promotes better bonding between student and teacher, and between most couples. It also helps with hostage negotiations.


A Darwinian Scenario

If our sense of humor came about by tiny steps, like other functions, as proposed by Charles Darwin, scientists have yet to find proof of it. Think of it: can hearing the beginning words of a joke even be funny? Is there any benefit to survival in one-word jokes that eventually become two- and three-word jokes? I doubt it, but that’s just my personal opinion. 


Fish talk by means of gestures, electrical impulses, bioluminescence, and sounds like hard-to-hear purrs, croaks, and pops. But did they (or could they) bring their jokes ashore millions of years ago? Of course, there’s no evidence of that. Yet? Just maybe, one might envision the fish remaining in the water teasing the more adventuresome fish about their ooohs and aahs, issued while walking across burning-hot sands. 


Tickling a Rat

Laughing while being tickled is not the same as having a sense of humor. The response to someone reaching into one’s armpit is a neurological and physiological reaction to being touched. For some, tickling is torture. I had one rather serious female patient, who, when undressed and covered with a sheet, was ticklish from her neck to her toes. She was nearly impossible to examine. Sometimes she would start laughing as I approached her.


One can tickle a rat and, given the right equipment, record odd utterances that might be laughter. But they might easily be profanity. Some say one can tickle a stingray, but others say the animal is suffocating. Attempts to tickle crocodiles and other wild animals have not been conducted, as far as I’m aware, in any depth. Also, such attempts are not recommended.


Laughing is clearly part of the human package, part of our design. As I see it, there can only be two possible origins. Humor evolved very, very slowly, or it came about more quickly by intelligent design. Negative feedback loops might argue against the slow development. Some fringe thinkers might speculate that extraterrestrials passed on their sense of humor to us, millions of years ago, but, if so, jokes about the folks in the Andromeda galaxy are on a different wavelength. Jokes about Uranus, of course, are local.


Sorry About that Last One, Folks

A sense of humor varies from person to person, much like height, weight, and abdominal girth. Plus, there are gender differences. Women like men who make them laugh; men like women who laugh at their jokes. Comedians say a sense of humor is a mating signal indicating high intelligence. People on Internet dating sites often ask each other about their sense of humor. Of course, we all have great senses of humor. Just ask anyone.


A sense of humor is often highly valued. Couples get along better when they have similar senses of humor. Mutation is more likely to ruin a good joke than help it. A serious mutation might take out the entire punchline. Jokes about a partner’s looks or clothes are to be avoided. They might lead to domestic abuse. Happy tears are chemically different from sad tears. Both are different from the tears that cleanse the eye with each blink or react to infections. Can anyone explain that? Could specific tears have come about by accident?


We know laughing is a normal human activity. Some days are better than others. Human babies often smile and giggle before they are two months old, years before they will understand a good riddle. Deaf and blind babies smile and giggle at virtually the same age. Is that present to make them more lovable? Children laugh up to 400 times a day, adults only 15. This could mean we need to hear many more jokes on a daily basis.


What Humor Means

 We all think we know what humor means, but because it can vary among people, we really don’t. An amusing joke told man-to-man might be a nasty joke if told man-to-woman. Or, the other way around. Humor tends to be intangible. It’s somewhat like certain foods tasting good to you, but maybe not to me. Too salty versus needs more salt? Or sweetener? I once told my medical partner that my wife and I had just seen the funniest movie we had ever seen. He and his wife went out that very night to see it and didn’t find anything in it funny. Nothing at all! Not even the funniest scene I have ever seen in a movie. Go figure. 


What does having a good sense of humor mean? Might it be reciting a lot of relevant jokes from a repository, making up funny quips during conversations, or laughing a lot at most anything except someone else’s pain? Or a mix?


There’s a laughter-like sound that is made by chimps, bonobos, and gorillas while playing. But does it mean there’s a sense of humor at work, or monkey profanity? They might be calling each other bad names. Octopuses play but don’t smile or laugh, we think. Dolphins “giggle” using different combinations of whistles and clicks. It does seem like they are laughing at times, but nobody knows for sure. Maybe it’s just a case of anthropomorphizing. The dolphin family has been around approximately 11 million years, and the area of their brain that processes language is much larger than ours. They’ve had plenty of time to come up with several good ones.


Koko the Humorous Gorilla

Perhaps the most interesting case was Koko the gorilla, who was taught to sign. She recently died at the age of 46. Her vocabulary was at least 1,000 signs, and she understood another 2,000 spoken words. Some say she was a jokester. She loved Robin Williams. Maybe adored him. The two would play together for hours. Koko seemed to make up jokes. She once tore the sink out of the wall in her cage; when asked about it, she signed that her pet cat did it. However, the cat wasn’t tall enough.


 So I ask again, could a sense of humor have come about by numerous, successive, slight modifications, a Darwinian requirement? If humor fails that test, might humor be the elusive coup de grace for naturalism? Since irreducible complexity, specified complexity, and topoisomerases haven’t landed the KO to Darwin’s weakening theories, might the answer just be as simple as laughing at them?


If a sense of humor were just a variation on tickling, my guess is that comedians would come off the stage or hire teenagers to walk among their audiences to tickle everyone. Imagine being dressed up for the night, maybe eating a fancy meal or drinking expensive champagne, and some grubby kid, who’s paid minimum wage, is reaching into your armpits.


Why Laugh at All? 

Is a sense of humor a byproduct, an accident, or was it installed on purpose? For better health? There definitely seems to be a purpose. Could it be a coping mechanism? Is it the way to meet the right mate? Surely, that must be part of it.


The only evolution-related quip I could think of sums up this discussion rather well:


A little girl asked her mother, “How did the human race come about?”


The mother answered, “God made Adam and Eve. They had children, and so all mankind was made.”


A few days later, the little girl asked her father the same question. The father answered, “Many years ago there were apelike creatures, and we developed from them.”


The confused girl returned to her mother and said, “Mom, how is it possible that you told me that the human race was created by God, and Papa says we developed from ‘apelike creatures’?”


The mother answered, “Well, dear, it is very simple. I told you about the origin of my side of the family, and your father told you about his.”

Man does not compute?

 The Non-Computable Human

Robert J. Marks II


Editor’s note: We are delighted to present an excerpt from Chapter 1 of the new book Non-Computable You: What You Do that Artificial Intelligence Never Will, by computer engineer Robert J. Marks, director of Discovery Institute’s Bradley Center for Natural and Artificial Intelligence.


If you memorized all of Wikipedia, would you be more intelligent? It depends on how you define intelligence. 


Consider John Jay Osborn Jr.’s 1971 novel The Paper Chase. In this semi-autobiographical story about Harvard Law School, students are deathly afraid of Professor Kingsfield’s course on contract law. Kingsfield’s classroom presence elicits both awe and fear. He is the all-knowing professor with the power to make or break every student. He is demanding, uncompromising, and scary smart. In the iconic film adaptation, Kingsfield walks into the room on the first day of class, puts his notes down, turns toward his students, and looms threateningly.


“You come in here with a skull full of mush,” he says. “You leave thinking like a lawyer.” Kingsfield is promising to teach his students to be intelligent like he is. 


One of the law students in Kingsfield’s class, Kevin Brooks, is gifted with a photographic memory. He can read complicated case law and, after one reading, recite it word for word. Quite an asset, right?


Not necessarily. Brooks has a host of facts at his fingertips, but he doesn’t have the analytic skills to use those facts in any meaningful way.


Kevin Brooks’s wife is supportive of his efforts at school, and so are his classmates. But this doesn’t help. A tutor doesn’t help. Although he tries, Brooks simply does not have what it takes to put his phenomenal memorization skills to effective use in Kingsfield’s class. Brooks holds in his hands a million facts that because of his lack of understanding are essentially useless. He flounders in his academic endeavor. He becomes despondent. Eventually he attempts suicide. 


Knowledge and Intelligence

This sad tale highlights the difference between knowledge and intelligence. Kevin Brooks’s brain stored every jot and tittle of every legal case assigned by Kingsfield, but he couldn’t apply the information meaningfully. Memorization of a lot of knowledge did not make Brooks intelligent in the way that Kingsfield and the successful students were intelligent. British journalist Miles Kington captured this distinction when he said, “Knowing a tomato is a fruit is knowledge. Intelligence is knowing not to include it in a fruit salad.”


Which brings us to the point: When discussing artificial intelligence, it’s crucial to define intelligence. Like Kevin Brooks, computers can store oceans of facts and correlations; but intelligence requires more than facts. True intelligence requires a host of analytic skills. It requires understanding; the ability to recognize humor, subtleties of meaning, and symbolism; and the ability to recognize and disentangle ambiguities. It requires creativity.


Artificial intelligence has done many remarkable things. AI has largely replaced travel agents, tollbooth attendants, and mapmakers. But will AI ever replace attorneys, physicians, military strategists, and design engineers, among others?


The answer is no. And the reason is that as impressive as artificial intelligence is — and make no mistake, it is fantastically impressive — it doesn’t hold a candle to human intelligence. It doesn’t hold a candle to you.


And it never will. How do we know? The answer can be stated in a single four-syllable word that needs unpacking before we can contemplate the non-computable you. That word is algorithm. If not expressible as an algorithm, a task is not computable.


Algorithms and the Computable

An algorithm is a step-by-step set of instructions to accomplish a task. A recipe for German chocolate cake is an algorithm. The list of ingredients acts as the input for the algorithm; mixing the ingredients and following the baking and icing instructions will result in a cake.


Likewise, when I give instructions to get to my house, I am offering an algorithm to follow. You are told how far to go and which direction you are to turn on what street. When Google Maps returns a route to go to your destination, it is giving you an algorithm to follow. 


Humans are used to thinking in terms of algorithms. We make grocery lists, we go through the morning procedure of showering, hair combing, teeth brushing, and we keep a schedule of what to do today. Routine is algorithmic. Engineers algorithmically apply Newton’s laws of physics when designing highway bridges and airplanes. Construction plans captured on blueprints are part of an algorithm for building. Likewise, chemical reactions follow algorithms discovered by chemists. And all mathematical proofs are algorithmic; they follow step-by-step procedures built on the foundations of logic and axiomatic presuppositions. 
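Euclid’s procedure for finding the greatest common divisor of two numbers is a textbook instance: a fixed, step-by-step recipe with inputs and a guaranteed output. A minimal sketch:

```python
def gcd(a, b):
    """Euclid's algorithm: repeat one simple step until finished."""
    while b:                  # step: replace (a, b) with (b, a mod b)
        a, b = b, a % b
    return a                  # when b reaches 0, a is the answer

print(gcd(48, 18))  # 6
```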


Algorithms need not be fixed; they can contain stochastic elements, such as descriptions of random events in population genetics and weather forecasting. The board game Monopoly, for example, follows a fixed set of rules, but the game unfolds through random dice throws and player decisions.
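A minimal sketch of such a stochastic algorithm, modeled loosely on a Monopoly move (the 40-square board is the standard one; the rest is illustrative):

```python
import random

def monopoly_move(position, board_size=40):
    """Fixed rule, random input: advance by the sum of two dice."""
    roll = random.randint(1, 6) + random.randint(1, 6)
    return (position + roll) % board_size

pos = 0                       # start at Go
for turn in range(3):
    pos = monopoly_move(pos)
    print(pos)                # random values, but always rule-governed
```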


Here’s the key: Computers only do what they’re programmed by humans to do, and those programs are all algorithms — step-by-step procedures contributing to the performance of some task. But algorithms are limited in what they can do. That means computers, limited to following algorithmic software, are limited in what they can do.


This limitation is captured by the very word “computer.” In the world of programmers, “algorithmic” and “computable” are often used interchangeably. And since “algorithmic” and “computable” are synonyms, so are “non-computable” and “non-algorithmic.”


Basically, for computers — for artificial intelligence — there’s no other game in town. All computer programs are algorithms; anything non-algorithmic is non-computable and beyond the reach of AI.


But it’s not beyond you. 


Non-Computable You

Humans can behave and respond non-algorithmically. You do so every day. For example, you perform a non-algorithmic task when you bite into a lemon. The lemon juice squirts on your tongue and you wince at the sour flavor. 


Now, consider this: Can you fully convey your experience to a man who was born with no sense of taste or smell? No. You cannot. The goal is not a description of the lemon-biting experience, but its duplication. The lemon’s chemicals and the mechanics of the bite can be described to the man, but the true experience of the lemon taste and aroma cannot be conveyed to someone without the necessary senses.


If biting into a lemon cannot be explained to a man without all his functioning senses, it certainly can’t be duplicated in an experiential way by AI using computer software. Like the man born with no sense of taste or smell, machines do not possess qualia — experiential sensory perceptions such as pain, taste, and smell.


Qualia are a simple example of the many human attributes that escape algorithmic description. If you can’t formulate an algorithm explaining your lemon-biting experience, you can’t write software to duplicate the experience in the computer.


Or consider another example. I broke my wrist a few years ago, and the physician in the emergency room had to set the broken bones. I’d heard beforehand that bone-setting really hurts. But hearing about pain and experiencing pain are quite different. 


To set my broken wrist, the emergency physician grabbed my hand and arm, pulled, and there was an audible crunching sound as the bones around my wrist realigned. It hurt. A lot. I envied my preteen grandson, who had been anesthetized when his broken leg was set. He slept through his pain.


Is it possible to write a computer program to duplicate — not describe, but duplicate — my pain? No. Qualia are not computable. They’re non-algorithmic.


By definition and in practice, computers function using algorithms. Logically speaking, then, the existence of the non-algorithmic suggests there are limits to what computers and therefore AI can do.

Darwinists attempt to correct God again.

 From Darwinists, a Shift in Tone on Nanomachines

William A. Dembski


I am reviewing Jason Rosenhouse’s new book, The Failures of Mathematical Anti-Evolutionism (Cambridge University Press), serially. For the full series so far, go here.


Unfortunately for Darwinists, irreducible complexity raises real doubts about Darwinism in people’s minds. Something must be done. Rising to the challenge, Darwinists are doing what must be done to control the damage. Take the bacterial flagellum, the poster child of irreducibly complex biochemical machines. Whatever biologists may have thought of its ultimate origins, they tended to regard it with awe. Harvard’s Howard Berg, who discovered that flagellar filaments rotate to propel bacteria through their watery environments, would in public lectures refer to the flagellum as “the most efficient machine in the universe.” (And yes, I realize there are many different bacteria sporting many different variants of the flagellum, including the souped-up hyperdrive magnetotactic bacteria, which swim ten times faster than E. coli — E. coli’s flagellum, however, seems to be the one most studied.)

Why “Machines”?

In 1998, writing for a special issue of Cell, the National Academy of Sciences president at the time, Bruce Alberts, remarked:


We have always underestimated cells… The entire cell can be viewed as a factory that contains an elaborate network of interlocking assembly lines, each of which is composed of a set of large protein machines… Why do we call the large protein assemblies that underlie cell function protein machines? Precisely because, like machines invented by humans to deal efficiently with the macroscopic world, these protein assemblies contain highly coordinated moving parts. [Emphasis in the original.]


A few years later, in 2003, Adam Watkins, introducing a special issue on nanomachines for BioEssays, wrote: 


The articles included in this issue demonstrate some striking parallels between artifactual and biological/molecular machines. In the first place, molecular machines, like man-made machines, perform highly specific functions. Second, the macromolecular machine complexes feature multiple parts that interact in distinct and precise ways, with defined inputs and outputs. Third, many of these machines have parts that can be used in other molecular machines (at least, with slight modification), comparable to the interchangeable parts of artificial machines. Finally, and not least, they have the cardinal attribute of machines: they all convert energy into some form of ‘work’.


Neither of these special issues offered detailed step-by-step Darwinian pathways for how these machine-like biological systems might have evolved, but they did talk up their design characteristics. I belabor these systems and the special treatment they received in these journals because none of the mystery surrounding their origin has in the intervening years been dispelled. Nonetheless, the admiration that they used to inspire has diminished. Consider the following quote about the flagellum from Beeby et al.’s 2020 article on propulsive nanomachines. Rosenhouse cites it approvingly, prefacing the quote by claiming that the flagellum is “not the handiwork of a master engineer, but is more like a cobbled-together mess of kludges” (pp. 151–152):


Many functions of the three propulsive nanomachines are precarious, over-engineered contraptions, such as the flagellar switch to filament assembly when the hook reaches a pre-determined length, requiring secretion of proteins that inhibit transcription of filament components. Other examples of absurd complexity include crude attachment of part of an ancestral ATPase for secretion gate maturation, and the assembly of flagellar filaments at their distal end. All cases are absurd, and yet it is challenging to (intelligently) imagine another solution given the tools (proteins) to hand. Indeed, absurd (or irrational) design appears a hallmark of the evolutionary process of co-option and exaptation that drove evolution of the three propulsive nanomachines, where successive steps into the adjacent possible function space cannot anticipate the subsequent adaptations and exaptations that would then become possible. 


The shift in tone from then to now is remarkable. What happened to the awe these systems used to inspire? Have investigators really learned so much in the intervening years to say, with any confidence, that these systems are indeed over-engineered? To say that something is over-engineered is to say that it could be simplified without loss of function (like a Rube Goldberg device). And what justifies that claim here? Have scientists invented simpler systems that in all potential environments perform as well as or better than the systems in question? Are they able to go into existing flagellar systems, for instance, and swap out the over-engineered parts with these more efficient (sub)systems? Have they in the intervening years gained any real insight into the step-by-step evolution of these systems? Or are they merely engaged in rhetoric to make flagellar motors seem less impressive and thus less plausibly the product of design? To pose these questions is to answer them.


A Quasi-Humean Spirit

Rosenhouse even offers a quasi-Humean anti-design argument. Humans are able to build things like automobiles, but not things like organisms. Accordingly, ascribing design to organisms is an “extravagant extrapolation” from “causes now in operation.” Rosenhouse’s punchline: “Based on our experience, or on comparisons of human engineering to the natural world, the obvious conclusion is that intelligence cannot at all do what they [i.e., ID proponents] claim it can do. Not even close. Their argument is no better than saying that since moles are seen to make molehills, mountains must be evidence for giant moles.” (p. 273) 


Seriously?! As Richard Dawkins has been wont to say, “This is a transparently feeble argument.” So, primitive humans living with stone-age technology, if they were suddenly transported to Dubai, would be unable to get up to speed and recognize design in the technologies on display there? Likewise, we, confronted with space aliens whose technologies can build organisms using ultra-advanced 3D printers, would be unable to recognize that they were building designed objects? I intend these statements as rhetorical questions whose answer is obvious. What underwrites our causal explanations is our exposure to and understanding of the types of causes now in operation, not the idiosyncrasies of their operation. Because we are designers, we can appreciate design even if we are unable to replicate the design ourselves. Lost arts are lost because we are unable to replicate the design, not because we are unable to recognize the design. Rosenhouse’s quasi-Humean anti-design argument is ridiculous.


Next, “Darwinist Turns Math Cop: Track 1 and Track 2.”


Editor’s note: This review is cross-posted with permission of the author from BillDembski.com.

Tuesday, 21 June 2022

The enemy of my enemy..?


At the house next door: No one's home?

 New Analysis Casts Doubt on Claims for Life on Venus

Evolution News @DiscoveryCSC


A new study throws cold water (vapor?) on an earlier paper that suggested that aerial life forms could exist in Venus’s massive cloud cover:


Researchers from the University of Cambridge used a combination of biochemistry and atmospheric chemistry to test the ‘life in the clouds’ hypothesis, which astronomers have speculated about for decades, and found that life cannot explain the composition of the Venusian atmosphere.


Any life form in sufficient abundance is expected to leave chemical fingerprints on a planet’s atmosphere as it consumes food and expels waste. However, the Cambridge researchers found no evidence of these fingerprints on Venus. 


UNIVERSITY OF CAMBRIDGE, “NO SIGNS (YET) OF LIFE ON VENUS” AT SCIENCE DAILY (JUNE 14, 2022). THE PAPER IS OPEN ACCESS.

The contention in the earlier paper was that chemicals present in Venus’s clouds are consistent with production by life forms.


Not a Biosignature

Although the authors of the study published last week, Sean Jordan, Oliver Shorttle, and Paul B. Rimmer, say that the specifics of Venus’s atmospheric chemistry are not a biosignature (evidence of life), they stress that the atmosphere on Venus is nonetheless “strange.”

They hope that their work will assist in identifying other promising sites for extraterrestrial life:


“To understand why some planets are alive, we need to understand why other planets are dead,” said Shorttle. “If life somehow managed to sneak into the Venusian clouds, it would totally change how we search for chemical signs of life on other planets.”


“Even if ‘our’ Venus is dead, it’s possible that Venus-like planets in other systems could host life,” said Rimmer, who is also affiliated with Cambridge’s Cavendish Laboratory. “We can take what we’ve learned here and apply it to exoplanetary systems — this is just the beginning.”

They hope their method of analysis will prove helpful later this year when the James Webb Space Telescope starts returning images of planets outside our solar system.


Read the rest at Mind Matters News, published by Discovery Institute’s Bradley Center for Natural and Artificial Intelligence.



Paleo Darwinism vs. evolution in general?

 Jason Rosenhouse, a Crude Darwinist

William A. Dembski


I am reviewing Jason Rosenhouse’s new book, The Failures of Mathematical Anti-Evolutionism (Cambridge University Press), serially. For the full series so far, go here.


For Rosenhouse, Darwin can do no wrong and Darwin’s critics can do no right. As a fellow mathematician, I would have liked to see from Rosenhouse a vigorous and insightful discussion of my ideas, especially where there’s room for improvement, as well as some honest admission of why neo-Darwinism falls short as a compelling theory of biological evolution and why mathematical criticisms of it could at least have some traction. Instead, Rosenhouse assumes no burden of proof, treating Darwin’s theory as a slam dunk and treating all mathematical criticisms of Darwin’s theory as laughable. Indeed, he has a fondness for the word “silly,” which he uses repeatedly, and according to him mathematicians who use math to advance intelligent design are as silly as they come.


Anti-Evolutionism or Anti-Darwinism?

In using the phrase “mathematical anti-evolutionism,” Rosenhouse mistitled his book. Given its aim and arguments, it should have been titled The Failures of Mathematical Anti-Darwinism. Although design theorists exist who reject the transformationism inherent in evolutionism (I happen to be one of them), intelligent design’s beef is not with evolution per se but with the supposed naturalistic mechanisms driving evolution. And when it comes to naturalistic mechanisms driving evolution, there’s only one game in town, namely, neo-Darwinism, which I’ll refer to simply as Darwinism. In any case, my colleague Michael Behe, who also comes in for criticism from Rosenhouse, is an evolutionist. Behe accepts common descent, the universal common ancestry of all living things on planet earth. And yet Behe is not a Darwinist — he sees Darwin’s mechanism of natural selection acting on random variations as having at best very limited power to explain biological innovation. 


Reflexive Darwinism

Rosenhouse is a Darwinist, and a crude, reflexive one at that. For instance, he will write: “Evolution only cares about brute survival. A successful animal is one that inserts many copies of its genes into the next generation, and one can do that while being not very bright at all.” (p. 14) By contrast, more nuanced Darwinists (like Robert Wright) will stress how Darwinian processes can enhance cooperation. Others (like Geoffrey Miller) will stress how sexual selection can put a premium on intelligence (and thus on “being bright”). But Rosenhouse’s Darwinism plays to the lowest common denominator. Throughout the book, he hammers on the primacy of natural selection and random variation, entirely omitting such factors as symbiosis, gene transfer, genetic drift, and the action of regulatory genes in development, to say nothing of self-organizational processes.


Rosenhouse’s Darwinism commits him to Darwinian gradualism: Every adaptation of organisms is the result of a gradual step-by-step evolutionary process with natural selection ensuring the avoidance of missteps along the way. Writing about the evolution of “complex biological adaptations,” he notes: “Either the adaptation can be broken down into small mutational steps or it cannot. Evolutionists say that all adaptations studied to date can be so broken down while anti-evolutionists deny this…” (p. 178) At the same time, Rosenhouse denies that adaptations ever require multiple coordinated mutational steps: “[E]volution will not move a population from point A to point B if multiple, simultaneous mutations are required. No one disagrees with this, but in practice there is no way of showing that multiple, simultaneous mutations are actually required.” (pp. 159–160) 


“Mount Improbable”

And why are multiple simultaneous mutations strictly verboten? Because they would render life’s evolution too improbable, making it effectively impossible for evolution to climb Mount Improbable (which is both a metaphor and the title of a book by Richard Dawkins). Simultaneous mutations throw a wrench in the Darwinian gearbox. If they played a significant role in evolution, Darwinian gradualism would become untenable. Accordingly, Rosenhouse maintains that such large-scale mutational changes never happen and are indemonstrable even if they do happen. Rosenhouse presents this point of view not with a compelling argument, but as an apologist intent on neutralizing intelligent design’s threat to Darwinism. 
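
A back-of-the-envelope illustration of why simultaneity matters: point-mutation rates on the order of 10^-9 per site per replication are commonly cited rough figures, used below purely for arithmetic, not as a claim about any particular organism.

```python
# Illustrative arithmetic: why requiring simultaneous mutations
# crushes the probability of an adaptation arising by chance.
per_site_rate = 1e-9   # rough per-site, per-replication point-mutation rate

for k in (1, 2, 3):    # number of specific mutations required at once
    joint = per_site_rate ** k
    print(f"{k} simultaneous specific mutation(s): roughly {joint:.0e} per replication")

# 1 -> ~1e-09, 2 -> ~1e-18, 3 -> ~1e-27: each added simultaneous
# requirement multiplies the improbability, which is why Darwinian
# gradualism insists adaptations be reachable one small step at a time.
```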


Next, “The Silence of the Evolutionary Biologists.”


Editor’s note: This review is cross-posted with permission of the author from BillDembski.com.

It looks like technology because it is?

 Physicist Brian Miller: The Fruitful Marriage of Biology and Engineering

David Klinghoffer


Discovery Institute physicist Brian Miller spoke at the recent Dallas Conference on Science and Faith. His theme was “The Surprising Relevance of Engineering in Biology.” 


Afterward, in a session moderated by John West, he took some very thoughtful questions from the audience. Miller notes the fruitful marriage of biology and engineering, as in, for example, the study of control systems: “What you find is parallel research: that biologists are understanding these systems, engineers independently discover these systems, and when they work together they’re looking at the overlap. So, what’s happening now is engineers are learning from biology to do engineering better.” If biology isn’t designed, which is another way of saying “engineered,” wouldn’t this state of affairs be pretty counterintuitive? Enjoy the rest of the Q&A with Dr. Miller:

[Video: “Brian Miller Answers Questions about the Relevance of Engineering to Biology,” https://www.youtube.com/embed/TH4Woh9S1ig]

A peacemaker between mathematics and Darwinism?

 The Challenge from Jason Rosenhouse

William A. Dembski


I am reviewing Jason Rosenhouse’s new book, The Failures of Mathematical Anti-Evolutionism (Cambridge University Press), serially. For the full series so far, go here.


To show readers that he means business and that he is a bold, brave thinker, Rosenhouse lays down the gauntlet: “Anti-evolutionists play well in front of friendly audiences because in that environment the speakers never pay the price of being wrong. The response would be a lot chillier if they tried the same arguments in front of audiences with the relevant expertise. Try telling a roomful of mathematicians that you can refute evolutionary theory with a few back-of-the-envelope probability calculations, and see how far you get.” (Epilogue, pp. 270–271)


I’m happy to take up Rosenhouse’s gauntlet. In fact, I already have. I’ve presented my ideas and arguments to roomfuls of not just mathematicians but also biologists and the whole range of scientists on whose disciplines my work impinges. A case in point is a 2014 talk I gave on conservation of information at the University of Chicago, a talk sponsored by my old physics advisor Leo Kadanoff. The entire talk, including Q&A, is available on YouTube.

In such talks, I present quite a bit more detail than a mere back-of-the-envelope probability calculation, though full details in a single talk (as opposed to a multi-week seminar) require referring listeners to my work in the peer-reviewed literature (none of which Rosenhouse cites in his book).


My Challenge to Jason Rosenhouse

If I receive a chilly reception in giving such talks, it’s not for any lack of merit in my ideas or work. Rather, it’s the prejudicial contempt evident in Rosenhouse’s challenge above, which is widely shared among Darwinists, who are widespread in the academy. For instance, Rosenhouse’s comrade in arms, evolutionary biologist Jerry Coyne, who is at the University of Chicago, tried to harass Leo into canceling my 2014 talk, but Leo was not a guy to be intimidated — the talk proceeded as planned (Leo sent me copies of the barrage of emails he received from Coyne to persuade him to uninvite me). For the record, I’m happy to debate Rosenhouse, or any mathematicians, engineers, biologists, or whatever, who think they can refute my work. 


Next, “Jason Rosenhouse, a Crude Darwinist.”


Editor’s note: This review is cross-posted with permission of the author from BillDembski.com.

Wednesday, 15 June 2022

Sacrifice without cost?

1 Chronicles 21:24 KJV: "And king David said to Ornan, Nay; but I will verily buy it for the full price: for I will not take that which is thine for the LORD, nor offer burnt offerings without cost."


King David realized that a cost-free sacrifice is in effect no sacrifice at all. Yet is this not the effect that Christendom's theology, regarding Christ as the God-man and immortality as unconditional, has on the supposed atonement? Christendom's reductive spiritualism has the effect of rendering the physical body (Greek: soma) worse than useless, a prison of rotting flesh that anchors our "real selves" to the ground during our probation on this earth. Surely being liberated from any prison is a blessing and not a sacrifice.


Matthew 20:28 KJV: "Even as the Son of man came not to be ministered unto, but to minister, and to give his life (Gk. psyche) a ransom for many."


Obviously, if Christ's real self (his soul) were immortal, or if he were the God-man, or both, he could not give his soul as a ransom. The mere liberation of his true self from its prison of flesh would constitute no genuine sacrifice. For Christ's atonement offering to be genuinely substitutionary, his death would have to be identical in nature to that of the first Adam.


1 Corinthians 15:21 KJV: "For since by man came death, by man came also the resurrection of the dead."


And as to the nature of the first Adam's death, let's not speculate, but let JEHOVAH'S word be the authority.


Genesis 3:19 KJV: "In the sweat of thy face shalt thou eat bread, till thou RETURN unto the ground; for out of it wast thou taken: for dust thou art, and unto dust shalt thou RETURN."


Thus Adam was to RETURN to his pre-creation state. That is what death meant for Adam. For the second Adam to serve as a genuine substitute for the first, and thus effect an atonement, his death MUST have the same significance.