
Tuesday 5 July 2022

What they think they know that just ain't so?

 Darwinists’ Delusion: Closing Thoughts on Jason Rosenhouse

William A. Dembski

I have been reviewing Jason Rosenhouse’s new book, The Failures of Mathematical Anti-Evolutionism (Cambridge University Press), serially. This is the final post in the review. For the full series, go here.


Would the world be better off if Jason Rosenhouse had never written The Failures of Mathematical Anti-Evolutionism? I, for one, am happy he did write it. It shows the current state of Darwinist thinking on the mathematical ideas that my colleagues and I in the intelligent design movement have developed over the years. In particular, it shows how little progress they’ve made in understanding and engaging with these ideas. It also alerted me to the resurgence of artificial life simulations. Not that artificial life ever went away. But Rosenhouse cites what is essentially a manifesto by 53 authors (including ID critics Christoph Adami, Robert Pennock, and Richard Lenski) that all is well with artificial life: “The Surprising Creativity of Digital Evolution.” (2020) In fact, conservation of information shows that artificial life is a hopeless enterprise. But as my colleague Jonathan Wells underscored in his book Zombie Science, some disreputable ideas are just too pleasing and comforting for Darwinists to disown, and artificial life is one of them. So it was helpful to learn from Rosenhouse about the coming zombie apocalypse.


Selective Criticism

As indicated at the start of this review, I’ve been selective in my criticisms of Rosenhouse’s book, focusing especially on where he addressed my work and on where it impinged on that of some of my close colleagues in the intelligent design movement. I could easily have found more to criticize, but this review is already long. Leaving aside his treatment of young-earth creationists and the Second Law of Thermodynamics, he reflexively repeats Darwinian chestnuts, such as that gene duplication increases information, as though a mere increase in storage capacity can explain biologically useful information (“We’ve doubled the size of your hard drive and you now have twice the information!”). And wherever possible, he tries to paint my colleagues as rubes and ignoramuses. Thus he portrays Stephen Meyer as assuming a simplistic probabilistic model of genetic change when in the original source (Darwin’s Doubt) he is clearly citing an older understanding (by the Wistar mathematicians back in the 1960s) and then makes clear that a newer, more powerful understanding is available today. Disinformation is a word in vogue these days, and it characterizes much of Rosenhouse’s book.


In closing, I want to consider an example that appears near the start of The Failures of Mathematical Anti-Evolutionism (p. 32) and reappears at the very end in the “Coda” (pp. 273–274). When driving on a major street, it’s typical for cross streets to line up, with one side of the cross street directly across from the other, so that traffic can cross the major street in a straight line. Yet it can happen, more often on country roads, that a cross street meets the major street in what seem to be two closely spaced T-intersections, so that crossing the major street to stay on the cross street requires a jog in the traffic pattern.


Rosenhouse is offering a metaphor here, with the first option representing intelligent design, the second Darwinism. According to him, the straight path across the major street represents “a sensible arrangement of roads of the sort a civil engineer would devise” whereas the joggy path represents “an absurd and potentially dangerous arrangement that only makes sense when you understand the historical events leading up to it.” (p. 32) Historical contingencies unguided by intelligence, in which roads are built without coordination, thus explain the second arrangement, and by implication explain biological adaptation.


Rosenhouse grew up near some roads that followed the second arrangement. Recently he learned that in place of two close-by T-intersections, the cross street now goes straight across. He writes:


Apparently, in the years since I left home, that intersection has been completely redesigned. The powers that be got tired of cleaning up after the numerous crashes and human misery resulting from the poor design of the roads. So they shut it all down for several months and completely redid the whole thing. Now the arrangement of roads makes perfect sense, and the number of crashes there has declined dramatically. The anti-evolutionists are right about one thing: we really can distinguish systems that were designed from those that evolved gradually. Unfortunately for them, the anatomy of organisms points overwhelmingly toward evolution and just as overwhelmingly away from design. (pp. 273–274)


A Failed Metaphor

The blindness on display in this passage is staggering, putting the delusional world of Darwinists on full display and contrasting it with the real world, which is chock-full of design. Does it really need to be pointed out that roads are designed? That where they go is designed? And that even badly laid out roads are nonetheless laid out by design? But as my colleague Winston Ewert pointed out to me, Rosenhouse’s story doesn’t add up even if we ignore the design that’s everywhere. On page 32, he explains that the highway was built first and that towns later arose on either side of it, eventually connecting the crossroads to the highway. But isn’t it obvious, upon the merest reflection, that whoever connected the second road to the highway could have built it directly opposite the first road that was already there? So why didn’t they? The historical timing of the construction of the roads doesn’t explain it. Something else must be going on.


There are in fact numerous such intersections in the US. Typically they are caused by grid corrections due to the earth’s curvature. In other words, they are a consequence of fitting a square grid onto a spherical earth. Further, such intersections can actually be safer, as a report on staggered junctions by the European Road Safety Decision Support System makes clear. So yes, this example is a metaphor, not for the power of historical contingency to undercut intelligent design, but for the delusive power of Darwinism to look to historical contingency for explanations that support Darwinism yet fall apart under even the barest scrutiny.


Enough said. Stay tuned for the second edition of The Design Inference!


Editor’s note: This review is cross-posted with permission of the author from BillDembski.com.


Saturday 2 July 2022

Trouble comes in pairs for Darwinism?

 Günter Bechly: Species Pairs Wreck Darwinism

Evolution News @DiscoveryCSC

On a new episode of ID the Future, distinguished German paleontologist Günter Bechly continues a discussion of his new argument against modern evolutionary theory. According to Bechly, contemporary species pairs diverge hardly at all over millions of years, even when isolated from each other, and yet we’re supposed to believe that the evolutionary process built dramatically distinct body plans in similar time frames at various other times in the history of life. Why believe that? He suggests this pattern of relative stasis among species pairs strikes a significant and damaging blow to Darwinian theory.


In this Part 2 episode, Bechly and host Casey Luskin discuss mice/rat pairs, cattle and bison, horses and donkeys, Asian and African elephants, the Asian black bear and the South American spectacled bear, river hippos and West African pygmy hippos, the common dolphin and the bottle-nosed dolphin, and the one outlier in this pattern, chimpanzees and humans. If chimps and humans really did evolve from a common ancestor, why do they appear to be the lone exception to this pattern of modern species pairs differing in only trivial ways? Bechly notes that whatever one’s explanation, there appears to be clear evidence here of human exceptionalism. He and Luskin go on to cast doubt on the idea that mindless evolutionary processes could have engineered the suite of changes necessary to convert an ape ancestor into upright walking, talking, technology-fashioning human beings.


What about Hawaiian silversword plants? They seem to have evolved into dramatically different body plans in the past few million years. Are these an exception to Bechly’s claimed pattern of species pair stasis? After all, the differences among silverswords can be quite dramatic, with differences far more extensive than what we find between, say, Asian and African elephants or horse and donkey. Drawing on a second article on the topic, he notes that some extant species of plants possess considerable phenotypic plasticity. They have the capacity to change quite dramatically and still breed with other very different varieties. This appears to be the case with silverswords. There is more to his argument. Tune in to hear Dr. Bechly respond to additional objections that Dr. Luskin raises.  Download the podcast or listen to it here. Part 1 of their conversation is here.


Washing their dirty linen in public?

 Donate Darwinism for a Tax Credit? Evolutionists Admit Their Field’s Failures

David Klinghoffer

An article in The Guardian by science journalist Stephen Buranyi represents something remarkable in the way the public processes the failures of evolutionary theory. In the past, those failures have been admitted by some biologists…but always in settings (technical journals, conferences) where they thought nobody outside their professional circles was listening. It’s like a married couple going through rough times in their relationship. They’d discuss it between themselves, with close friends, maybe with a counselor. But for goodness’ sake they wouldn’t put it on Facebook, where all marriages are blessed exclusively with good cheer and good fortune.


Scandalous Admissions

Well, the field of evolutionary biology has just done the equivalent of a massive Facebook dump, admitting that Jim and Sandy, who always seemed so happy, are in fact perilously perched on the rocks. In a very long article, top names in the field share with Buranyi what intelligent design proponents already knew, but few Guardian readers guessed. The headline from the left-leaning British daily asks, “Do we need a new theory of evolution?” Answer in one word: yes. The article is full of scandalous admissions:


Strange as it sounds, scientists still do not know the answers to some of the most basic questions about how life on Earth evolved. Take eyes, for instance. Where do they come from, exactly? The usual explanation of how we got these stupendously complex organs rests upon the theory of natural selection….


This is the basic story of evolution, as recounted in countless textbooks and pop-science bestsellers. The problem, according to a growing number of scientists, is that it is absurdly crude and misleading.


For one thing, it starts midway through the story, taking for granted the existence of light-sensitive cells, lenses and irises, without explaining where they came from in the first place. Nor does it adequately explain how such delicate and easily disrupted components meshed together to form a single organ. And it isn’t just eyes that the traditional theory struggles with. “The first eye, the first wing, the first placenta. How they emerge. Explaining these is the foundational motivation of evolutionary biology,” says Armin Moczek, a biologist at Indiana University. “And yet, we still do not have a good answer. This classic idea of gradual change, one happy accident at a time, has so far fallen flat.”


There are certain core evolutionary principles that no scientist seriously questions. Everyone agrees that natural selection plays a role, as does mutation and random chance. But how exactly these processes interact — and whether other forces might also be at work — has become the subject of bitter dispute. “If we cannot explain things with the tools we have right now,” the Yale University biologist Günter Wagner told me, “we must find new ways of explaining.”…


[T]his is a battle of ideas over the fate of one of the grand theories that shaped the modern age. But it is also a struggle for professional recognition and status, about who gets to decide what is core and what is peripheral to the discipline. “The issue at stake,” says Arlin Stoltzfus, an evolutionary theorist at the IBBR research institute in Maryland, “is who is going to write the grand narrative of biology.” And underneath all this lurks another, deeper question: whether the idea of a grand story of biology is a fairytale we need to finally give up. [Emphasis added.]


“Absurdly crude and misleading”? A “classic idea” that “has so far fallen flat”? “A fairytale we need to finally give up”? Scientists locked in a desperate struggle for “professional recognition and status”? What about for the truth? This is how writers for Evolution News have characterized the troubles with Darwinian theory. But I didn’t expect to see it in The Guardian.


A Familiar Narrative

Buranyi runs through a familiar narrative: the modern synthesis, the challenge from the Extended Evolutionary Synthesis, the 2016 “New Trends in Evolutionary Biology” meeting at the Royal Society (which was covered here extensively), how some evolutionists condemned the conference while others embraced its revisionist messaging, and efforts to prop up unguided evolution with exotic ideas of “plasticity, evolutionary development, epigenetics, cultural evolution,” etc.


If you’ve ever owned an automobile toward the end of its life, the situation will be familiar: the multiple problems all at once, the multiple attempted fixes, the expense, the trouble, the worry about the car dying at any inconvenient or dangerous moment (like in the middle of the freeway), all of which together signal that it’s time not to sell the car (who would want it?) but to have it towed off and donated to charity for a tax credit.


Buranyi doesn’t mention the intelligent design theorists in attendance at the Royal Society meeting — Stephen Meyer, Günter Bechly, Douglas Axe, Paul Nelson, and others. He doesn’t mention the challenge from intelligent design at all. That’s okay. I didn’t expect him to do so. Anyway, readers of Evolution News will already be familiar with most everything Buranyi reports.


Despairing Statements

He concludes with seemingly despairing statements from evolutionists along the lines of, “Oh, we never needed a grand, coherent theory like that, after all.”


Over the past decade the influential biochemist Ford Doolittle has published essays rubbishing the idea that the life sciences need codification. “We don’t need no friggin’ new synthesis. We didn’t even really need the old synthesis,” he told me….


The computational biologist Eugene Koonin thinks people should get used to theories not fitting together. Unification is a mirage. “In my view there is no — can be no — single theory of evolution,” he told me.


I see. Evolutionists have, until now, been very, very reluctant to admit such things in the popular media. Always, they heeded the obligation to present an illusory picture of wedded bliss to the unwashed, who, if given some idea of the truth, would draw their own conclusions and maybe even take up with total heresies like intelligent design. Now that illusion of blessed domesticity has been cast aside in most dramatic fashion. Read the rest of Buranyi’s article. Your eyebrows will go up numerous times.


Who is this one Father?

 Have we not all one father? hath not one God created us? why do we deal treacherously every man against his brother, profaning the covenant of our fathers? 

Note please that this ONLY Father is also our ONLY God.

Thus failure to properly identify this one Father is grounds for disqualification from divine favor.

And this is life eternal, that they might know thee the ONLY TRUE GOD, and Jesus Christ, whom THOU hast sent.

Wednesday 29 June 2022

Nothing in biology is as complex as Darwinism's relationship with the truth?

 Jason Rosenhouse and Specified Complexity

William A. Dembski

I am reviewing Jason Rosenhouse’s new book, The Failures of Mathematical Anti-Evolutionism (Cambridge University Press), serially. For the full series so far, go here.


The method for inferring design laid out in my book The Design Inference amounts to determining whether an event, object, or structure exhibits specified complexity or, equivalently, specified improbability. The term specified complexity does not actually appear in The Design Inference, where the focus is on specified improbability. Specified improbability identifies things that are improbable but also suitably patterned, or specified. Specified complexity and specified improbability are the same notion. 


To see the connection between the two terms, imagine tossing a fair coin. If you toss it thirty times, you’ll witness an event of probability 1 in 2^30, or roughly 1 in a billion. At the same time, if you record those coin tosses as bits (0 for tails, 1 for heads), that will require 30 bits. The improbability of 1 in 2^30 thus corresponds precisely to the number of bits required to identify the event. The greater the improbability, the greater the complexity. Specification then refers to the right sort of pattern that, in the presence of improbability, eliminates chance. 
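The coin-toss arithmetic can be checked in a few lines of Python (a sketch of my own, not from the book; the variable names are illustrative):

```python
import math

# 30 tosses of a fair coin: any one specific sequence has probability 1/2^30.
tosses = 30
prob = (1 / 2) ** tosses   # probability of one specific 30-toss sequence
bits = -math.log2(prob)    # bits needed to record the sequence (0 = tails, 1 = heads)

print(prob)   # ~9.3e-10, roughly 1 in a billion
print(bits)   # 30.0 -- improbability and description length coincide
```

The negative base-2 logarithm converts a probability into bits, which is why greater improbability corresponds to greater complexity.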


An Arrow Shot at a Target

Not all patterns eliminate chance in the presence of improbability. Take an arrow shot at a target. Let’s say the target has a bullseye. If the target is fixed and the arrow is shot at it, and if the bullseye is sufficiently small so that hitting it with the arrow is extremely improbable, then chance may rightly be eliminated as an explanation for the arrow hitting the bullseye. On the other hand, if the arrow is shot at a large wall, where the probability of hitting the wall is large, and the target is then painted around the arrow sticking in the wall so that the arrow is squarely in the bullseye, then no conclusion about whether the arrow was or was not shot by chance is possible. 


Specified improbability, or specified complexity, calls on a number of interrelated concepts. Besides a way of calculating or estimating probability and a criterion for determining whether a pattern is indeed a specification, the notion requires factoring in the number of relevant events that could occur, or what are called probabilistic resources. For example, multiple arrows allowing multiple shots will make it easier to hit the bullseye by chance. Moreover, the notion requires having a coherent rationale for determining what probability bounds may legitimately be counted as small enough to eliminate chance. Also, there’s the question of factoring in other specifications that may compete with the one originally identified, such as having two fixed targets on a wall and trying to determine whether chance could be ruled out if either of them were hit with an arrow. 
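The effect of probabilistic resources can be made concrete: with n independent shots, the chance of at least one bullseye rises from p to 1 − (1 − p)^n. A minimal sketch (the numbers are mine, chosen only for illustration):

```python
def prob_at_least_one_hit(p: float, shots: int) -> float:
    """Chance of at least one success in `shots` independent trials,
    each succeeding with probability p."""
    return 1 - (1 - p) ** shots

p = 1e-6  # one arrow, a very small bullseye
print(prob_at_least_one_hit(p, 1))          # ~1e-06: a single shot stays improbable
print(prob_at_least_one_hit(p, 1_000_000))  # ~0.632: a million shots change the verdict
```

This is why any design inference must factor in how many relevant trials were available before judging an improbability significant.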


The basic theory for explaining how specified improbability/complexity is appropriately used to infer design was laid out in The Design Inference, and then refined (in some ways simplified, in some ways extended) over time. The notion was well vetted. It was the basis for my doctoral dissertation in the philosophy of science and the foundations of probability theory — this dissertation was turned into The Design Inference. I did this work in philosophy after I had already done a doctoral dissertation in mathematics focusing on probability and chaos theory (Leo Kadanoff and Patrick Billingsley were the advisors on that dissertation). 


The manuscript for The Design Inference passed a stringent review by academic editors at Cambridge University Press, headed by Brian Skyrms, a philosopher of probability at UC Irvine and one of the few philosophers in the National Academy of Sciences. When I was a postdoc at Notre Dame in 1996–97, the philosopher Phil Quinn revealed to me that he had been a reviewer, giving Cambridge an enthusiastic thumbs up. He also told me that he had especially liked The Design Inference’s treatment of complexity theory (chapter four in the book).


But There’s More

My colleagues Winston Ewert and Robert Marks and I have given specified complexity a rigorous formulation in terms of Kolmogorov complexity/algorithmic information theory:


Winston Ewert, William A. Dembski, and Robert J. Marks II (2014). “Algorithmic Specified Complexity.” In J. Bartlett, D. Hemser, and J. Hall, eds., Engineering and the Ultimate: An Interdisciplinary Investigation of Order and Design in Nature and Craft (Broken Arrow, Okla.: Blyth Institute Press).

Winston Ewert, William A. Dembski, and Robert J. Marks II (2015). “Algorithmic Specified Complexity in the Game of Life.” IEEE Transactions on Systems, Man, and Cybernetics: Systems 45(4): 584–594.

True to form, critics of the concept refuse to acknowledge that specified complexity is a legitimate well-defined concept. Go to the Wikipedia entry on specified complexity, and you’ll find the notion dismissed as utterly bogus. Publications on specified complexity by colleagues and me, like those just listed, are ignored and left uncited. Rosenhouse is complicit in such efforts to discredit specified complexity. 


But consider, scientists must calculate, or at least estimate, probability all the time, and that’s true even of evolutionary biologists. For instance, John Maynard Smith, back in his 1958 The Theory of Evolution, concludes that flatworms, annelids, and molluscs, representing three different phyla, must nonetheless descend from a common ancestor because their common cleavage pattern in early development “seems unlikely to have arisen independently more than once.” (Smith, pp. 265–266) “Unlikely” is, of course, a synonym for “improbable.” 


Improbability by itself, however, is not enough. The events to which we assign probabilities need to be identified, and that means they must match identifiable patterns (in the Smith example, it’s the common cleavage pattern that he identified). Events exhibiting no identifiable pattern are events over which we can exercise no scientific insight and about which we can draw no scientific conclusion.


Hung Up on Specification

Even so, Rosenhouse seems especially hung up on my notion of specification, which he mistakenly defines as “independently describable” (p. 133) or “describable without any reference to the object itself” (p. 141). But nowhere does he give the actual definition of specification. To motivate our understanding of specification, I’ve used such language as “independently given” or “independently identifiable.” But these are intuitive ways of setting out the concept. Specification has a precise technical definition, of which Rosenhouse seems oblivious.


In The Design Inference, I characterized specification precisely in terms of a complexity measure that “estimates the difficulty of formulating patterns.” This measure then needs to work in tandem with a complexity bound that “fixes the level of complexity at which formulating such patterns is feasible.” (TDI, p. 144) That was in 1998. By 2005, this core idea stayed unchanged, but I preferred to use the language of descriptive complexity and minimum description length to characterize specification (see my 2005 article on Specification, published in Philosophia Christi, which Rosenhouse cites but without, again, giving the actual definition of the term specification). 


Two Notions of Complexity

So, what’s the upshot of specification according to this definition? Essentially, specified complexity or specified improbability involves two notions of complexity, one probabilistic, the other linguistic or descriptive. Thus we can speak of probabilistic complexity and descriptive complexity. Events become probabilistically more complex as they become more improbable (this is consistent with, as pointed out earlier, longer, more improbable sequences of coin tosses requiring longer bit strings to record them). Descriptive complexity, by contrast, applies to the patterns that identify events via a descriptive language: it measures the length of the shortest description that identifies an event. The specification in specified complexity thus refers to patterns with short descriptions, and specified complexity refers to events that have high probabilistic complexity but whose identifying patterns have low descriptive complexity.


To appreciate how probabilistic and descriptive complexity play off each other in specified complexity, consider the following example from poker. Take the hands corresponding to “royal flush” and “any hand.” These descriptions are roughly the same length and very short. Yet “royal flush” picks out just 4 hands among the 2,598,960 possible poker hands and thus describes an event of probability 4/2,598,960 = 1/649,740. “Any hand,” by contrast, allows any of the 2,598,960 poker hands and thus describes an event of probability 1. Clearly, if we witnessed a royal flush, we’d be inclined, on the basis of its short description and the low-probability event to which it corresponds, to refuse to attribute it to chance. Now granted, with all the poker that’s played worldwide, the probability of 1/649,740 is not small enough to decisively rule out its chance occurrence (in the history of poker, royal flushes have appeared by chance). But certainly we’d be less inclined to ascribe a royal flush to chance than we would any hand at all.
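The counts in the poker example can be verified directly:

```python
from math import comb

total_hands = comb(52, 5)  # number of distinct 5-card poker hands
royal_flushes = 4          # one royal flush per suit

print(total_hands)                   # 2598960
print(total_hands // royal_flushes)  # 649740, so P(royal flush) = 1/649740
```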


The general principle illustrated in this example is that large probabilistic complexity (or low probability) and small descriptive complexity combine to yield specified complexity. Specifications are then those patterns that have small descriptive complexity. Note that it can be computationally intractable to calculate minimum description length exactly, but that often we can produce an effective estimate for it by finding a short description, which, by definition, will then constitute an upper bound for the absolute minimum. As it is, actual measures of specified complexity take the form of a negative logarithm applied to the product of a descriptive complexity measure times a probability. Because a negative logarithm makes small things big and big things small, high specified complexity corresponds to small probability multiplied with small descriptive complexity. This is how I find it easiest to keep straight how to measure specified complexity. 
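As a toy rendering of that negative-logarithm form (my own simplification for illustration, not Dembski's published measure), one can fold a descriptive-complexity factor and a probability into a single bit count:

```python
import math

def specified_complexity(prob: float, descriptive_measure: float) -> float:
    """Toy measure: -log2 of (descriptive measure x probability).
    Large values mean a low-probability event with a short description."""
    return -math.log2(descriptive_measure * prob)

# Poker comparison; the descriptive-measure value of 1 is illustrative only.
print(specified_complexity(1 / 649_740, 1))  # ~19.3 bits for "royal flush"
print(specified_complexity(1.0, 1))          # 0 bits for "any hand"
```

Because the negative logarithm makes small things big, a small probability times a small descriptive measure yields a large specified-complexity score, exactly as described above.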


Rosenhouse, however, gives no evidence of grasping specification or specified complexity in his book (pp. 137–146). For instance, he rejects the claim that the flagellum is specified on the grounds that it is not “describable without any reference to the object itself,” as though that were the definition of specification. (See also p. 161.) Ultimately, it’s not a question of independent describability, but of short or low-complexity describability. I happen to think that the description “bidirectional motor-driven propeller” is an independent way of describing the flagellum because humans invented bidirectional motor-driven propellers before they found them, in the form of flagella, on the backs of E. coli and other bacteria (if something has been independently identified, then it is independently identifiable). But what specifies it is that it has a short description, not that the description could or could not be identified independently of the flagellum. By contrast, a random assortment of the protein subunits that make up the flagellum would be much harder to describe. The random assortment would therefore require a much longer description, and would thus not be specified.


The Science Literature

The mathematical, linguistic, and computer science literature is replete with complexity measures that use description length, although the specific terminology to characterize such measures varies with field of inquiry. For instance, the abbreviation MDL, or minimum description length, has wide currency; it arises in information theory and merits its own Wikipedia entry. Likewise AIT, or algorithmic information theory, has wide currency, where the focus is on compressibility of computer programs, so that highly compressible programs are the ones with shorter descriptions. In any case, specification and specified complexity are well defined mathematical notions. Moreover, the case for specified complexity strongly implicating design when probabilistic complexity is high and descriptive complexity is low is solid. I’m happy to dispute these ideas with anyone. But in such a dispute, it will have to be these actual ideas that are under dispute. Rosenhouse, by contrast, is unengaged with these actual ideas, attributing to me a design inferential apparatus that I do not recognize, and then offering a refutation of it that is misleading and irrelevant. 


As a practical matter, it’s worth noting that most Darwinian thinkers, when confronted with the claim that various biological systems exhibit specified complexity, don’t challenge that the systems in question (like the flagellum) are specified (Dawkins in The Blind Watchmaker, for instance, never challenges specification). In fact, they are typically happy to grant that these systems are specified. The reason they give for not feeling the force of specified complexity in triggering a design inference is that, as far as they’re concerned, the probabilities aren’t small enough. And that’s because natural selection is supposed to wash away any nagging improbabilities. 


A Coin-Tossing Analogy

In a companion essay to his book for Skeptical Inquirer, Rosenhouse offers the following coin-tossing analogy to illustrate the power of Darwinian processes in overcoming apparent improbabilities:


[Creationists argue that] genes and proteins evolve through a process analogous to tossing a coin multiple times. This is untrue because there is nothing analogous to natural selection when you are tossing coins. Natural selection is a non-random process, and this fundamentally affects the probability of evolving a particular gene. To see why, suppose we toss 100 coins in the hopes of obtaining 100 heads. One approach is to throw all 100 coins at once, repeatedly, until all 100 happen to land heads at the same time. Of course, this is exceedingly unlikely to occur. An alternative approach is to flip all 100 coins, leave the ones that landed heads as they are, and then toss again only those that landed tails. We continue in this manner until all 100 coins show heads, which, under this procedure, will happen before too long. 


The latter approach to coin tossing, which retosses only the coins that landed tails, corresponds, for Rosenhouse, to Darwinian natural selection making probable for evolution what at first blush might seem improbable. Of course, the real issue here is to form reliable estimates of what the actual probabilities are even when natural selection is thrown into the mix. The work of Mike Behe and Doug Axe argues that for some biological systems (such as molecular machines and individual enzymes), natural selection does nothing to mitigate what, without it, are vast improbabilities. Some improbabilities remain extreme despite natural selection. 
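Rosenhouse's retoss-the-tails procedure is easy to simulate (a sketch of my own, seeded for reproducibility; the parameters are illustrative):

```python
import random

def rounds_to_all_heads(n_coins: int, seed: int = 0) -> int:
    """Toss n coins; keep the heads, retoss only the tails.
    Return the number of rounds until every coin shows heads."""
    rng = random.Random(seed)
    tails = n_coins
    rounds = 0
    while tails > 0:
        # each remaining tail stays tails with probability 1/2
        tails = sum(1 for _ in range(tails) if rng.random() < 0.5)
        rounds += 1
    return rounds

print(rounds_to_all_heads(100))  # typically under a dozen rounds, versus
                                 # ~2^100 attempts for the all-at-once procedure
```

The simulation shows why the analogy is rhetorically effective: roughly half the tails disappear each round. The dispute, as noted above, is whether biological probabilities behave anything like this once natural selection is included.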


One final note before leaving specification and specified complexity. Rosenhouse suggests that in defining specified complexity as I did, I took a pre-theoretic notion as developed by origin-of-life researcher Leslie Orgel, Paul Davies, and others, and then “claim[ed] to have developed a mathematically rigorous form of the concept.” In other words, he suggests that I took a track 1 notion and claimed to turn it into a track 2 notion. Most of the time, Rosenhouse gives the impression that moving mathematical ideas from track 1 to track 2 is a good thing. But not in this case. Instead, Rosenhouse faults me for claiming that “this work constitutes a genuine contribution to science, and that [ID proponents] can use [this] work to prove that organisms are the result of intelligent design.” For Rosenhouse, “It is these claims that are problematic, to put it politely, for reasons we have already discussed.” (p. 161) 


The irony here is rich. Politeness aside, Rosenhouse’s critique of specified complexity is off the mark because he has mischaracterized its central concept, namely, specification. But what makes this passage particularly cringeworthy is that Leslie Orgel, Paul Davies, Francis Crick, and Richard Dawkins have all enthusiastically endorsed specified complexity, in one form or another, sometimes using the very term, at other times using the terms complexity and specification (or specificity) in the same breath. All of them have stressed the centrality of this concept for biology and, in particular, for understanding biological origins. 


Yet according to Rosenhouse, “These authors were all using ‘specified complexity’ in a track one sense. As a casual saying that living things are not just complex, but also embody independently-specifiable patterns, there is nothing wrong with the concept.” (p. 161) But in fact, there’s plenty wrong if this concept must forever remain merely at a pre-theoretic, or track 1, level. That’s because those who introduced the term “specified complexity” imply that the underlying concept can do a lot of heavy lifting in biology, getting at the heart of biological innovation and origins. So, if specified complexity stays forcibly confined to a pre-theoretic, or track 1, level, it becomes a stillborn concept — suggestive but ultimately fruitless. Yet given its apparent importance, the concept calls for a theoretic, or track 2, level of meaning and development. According to Rosenhouse, however, track 2 has no place for the concept. What a bizarre, unscientific attitude. 


Consider Davies from The Fifth Miracle (1999, p. 112): “Living organisms are mysterious not for their complexity per se, but for their tightly specified complexity.” Or consider Richard Dawkins in The Blind Watchmaker (1986, pp. 15–16): “We were looking for a precise way to express what we mean when we refer to something as complicated. We were trying to put a finger on what it is that humans and moles and earthworms and airliners and watches have in common with each other, but not with blancmange, or Mont Blanc, or the moon. The answer we have arrived at is that complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by random chance alone.” How can any scientist who takes such remarks seriously be content to leave specified complexity at a track 1 level?


You’re Welcome, Rosenhouse

Frankly, Rosenhouse should thank me for taking specified complexity from a track 1 concept and putting it on solid footing as a track 2 concept, clarifying what was vague and fuzzy in the pronouncements of Orgel and others about specified complexity, thereby empowering specified complexity to become a precise tool for scientific inquiry. But I suspect in waiting for such thanks, I would be waiting for the occurrence of a very small probability event. And who in their right mind does that? Well, Darwinists for one. But I’m not a Darwinist.


Next, “Evolution With and Without Multiple Simultaneous Changes.”


Editor’s note: This review is cross-posted with permission of the author from BillDembski.com.

Yet more on what 'unbelievers' need to believe.

More on Self-Replicating Machines

Granville Sewell


In a post earlier this month, I outlined “Three Realities Chance Can’t Explain That Intelligent Design Can.” The post showed some of the problems with materialist explanations for how the four fundamental, unintelligent forces of physics alone could have rearranged the fundamental particles of physics on Earth into computers and science texts and smart phones. I drew a comparison to self-replicating machines:


[I]magine that we did somehow manage to design, say, a fleet of cars with fully automated car-building factories inside, able to produce new cars — and not just normal new cars, but new cars with fully automated car-building factories inside them. Who could seriously believe that if we left these cars alone for a long time, the accumulation of duplication errors made as they reproduced themselves would result in anything other than devolution, and eventually could even be organized by selective forces into more advanced automobile models?


A More Careful Look

But I don’t think this makes sufficiently clear what a difficult task it would be to create truly self-replicating cars. So let’s look at this more carefully. We know how to build a simple Ford Model T car. Now let’s build a factory inside this car, so that it can produce Model T cars automatically. We’ll call the new car, with the Model T factory inside, a “Model U.” A car with an entire automobile factory inside, which never requires any human intervention, is far beyond our current technology, but it doesn’t seem impossible that future generations might be able to build a Model U. 


Of course, the Model U cars are not self-replicators, because they can only construct simple Model T’s. So let’s add more technology to this car so that it can build Model U’s, that is, Model T’s with car-building factories inside. This new “Model V” car, with a fully automated factory inside capable of producing Model U’s (which are themselves far beyond our current technology), would be unthinkably complex. But is this new Model V now a self-replicator? No, because it only builds the much simpler Model U. The Model V species will become extinct after two generations, because their children will be Model U’s, and their grandchildren will be infertile Model T’s! 
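Sewell's regress can be pictured schematically: each model's factory builds only the strictly simpler model below it, so any line of descent terminates. A minimal sketch of the article's illustration (the "builds" table below is just the Model V/U/T hierarchy as described; it is not a simulation of anything physical):

```python
# Each model "reproduces" by building the next-simpler model:
# Model V builds U, U builds T, and the Model T has no factory at all.
BUILDS = {"V": "U", "U": "T", "T": None}

def lineage(model):
    """Follow a single line of descent until reproduction stops."""
    line = [model]
    while BUILDS[line[-1]] is not None:
        line.append(BUILDS[line[-1]])
    return line

print(lineage("V"))  # ['V', 'U', 'T']: the line ends after two generations
```

A true self-replicator would need `BUILDS[x] == x`, which is exactly the closure condition that each added layer of factory technology fails to achieve.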


So Back to Work 

Each time we add technology to this car, to move it closer to the goal of reproduction, we only move the goalposts, because now we have a more complicated car to reproduce. It seems that the new models would grow exponentially in complexity, and one begins to wonder if it is even theoretically possible to create self-replicating machines. Yet we see such machines all around us in the living world. You and I are two examples. And here we have ignored the very difficult question of where these cars get the metals and rubber and other raw materials they need to supply their factories.


Of course, materialists will say that evolution didn’t create advanced self-replicating machines directly. Instead, it only took a first simple self-replicator and gradually evolved it into more and more advanced self-replicators. But besides the fact that human engineers still have no idea how to create any “simple” self-replicating machine, the point is, evolutionists are attributing to natural causes the ability to create things much more advanced than self-replicating cars (for example, self-replicating humans), which seem impossible, or virtually impossible, to design. I conceded in my earlier post (and in my video “A Summary of the Evidence for Intelligent Design”) that human engineers might someday construct a self-replicating machine. But even if they do, that will not show that life could have arisen through natural processes. It will only show that it could have arisen through design.


Design by Duplication Errors

Anyway, as I wrote there, even if we could create self-replicating cars, who could seriously believe that the duplication errors made as they reproduced themselves could ever lead to major advances, and eventually even to intelligent, conscious machines? Surely an unimaginably complex machine like a self-replicating car could only be damaged by such errors, even when filtered through natural selection. We are so used to seeing animals and plants reproduce themselves with minimal degradation from generation to generation that we don’t realize how astonishing this really is. We really have no idea how living things are able to pass their current complex structures on to their descendants, much less how they could evolve even more complex structures.


When mathematicians have a simple, clear proof of a theorem, and a long, complicated counterargument, full of unproven assumptions and questionable arguments, we accept the simple proof, even before we find the errors in the complicated counterargument. The argument for intelligent design could not be simpler or clearer: unintelligent forces alone cannot rearrange atoms into computers and airplanes and nuclear power plants and smart phones, and any attempt to explain how they can must fail somewhere because they obviously can’t. Since many scientists are not impressed by such simple arguments, my post was an attempt to point out some of the errors in the materialist’s three-step explanation for how they could. And to say that all three steps are full of unproven assumptions and questionable arguments is quite an understatement. 


At the least, it should now be clear that while science may be able to explain everything that has happened on other planets by appealing only to the unintelligent forces of nature, trying to explain the origin and evolution of life on Earth is a much more difficult problem, and intelligent design should at least be counted among the views that are allowed to be heard. Indeed, this is already starting to happen. 

Yet another strawman bully?

Jason Rosenhouse and “Mathematical Proof”

William A. Dembski


I am reviewing Jason Rosenhouse’s new book, The Failures of Mathematical Anti-Evolutionism (Cambridge University Press), serially. For the full series so far, go here.


A common rhetorical ploy is to overstate an opponent’s position so much that it becomes untenable and even ridiculous. Jason Rosenhouse deploys this tactic repeatedly throughout his book. Design theorists, for instance, argue that there’s good evidence to think that the bacterial flagellum is designed, and they see mathematics as relevant to making such an evidential case. Yet with reference to the flagellum, Rosenhouse writes, “Anti-evolutionists make bold, sweeping claims that some complex system [here, the flagellum] could not have arisen through evolution. They tell the world they have conclusive mathematical proof of this.” (p. 152) I am among those who have made a mathematical argument for the design of the flagellum. And so, Rosenhouse levels that charge specifically against me: “Dembski claims his methods allow him to prove mathematically that evolution has been refuted …” (p. 136)


Rosenhouse, as a mathematician, must at some level realize that he’s prevaricating. It’s one thing to use mathematics in an argument. It’s quite another to say that one is offering a mathematical proof. The latter is much, much stronger than the former, and Rosenhouse knows the difference. I’ve never said that I’m offering a mathematical proof that systems like the flagellum are designed. Mathematical proofs leave no room for fallibility or error. Intelligent design arguments use mathematics, but like all empirical arguments they fall short of the deductive certainty of mathematical proof. I can prove mathematically that 6 is a composite number by pointing to 2 and 3 as factors. I can prove mathematically that 7 is a prime number by running through all the numbers greater than 1 and less than 7, showing that none of them divide it. But no mathematical proof that the flagellum is designed exists, and no design theorist that I know has ever suggested otherwise.
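The two toy proofs just mentioned can be carried out mechanically by trial division. This sketch (the helper names are mine, for illustration only) exhibits the factor pair for 6 and checks every candidate divisor of 7, mirroring the exhaustive argument in the text:

```python
def is_prime(n):
    """Trial division: check every integer strictly between 1 and n,
    exactly as in the text's proof that 7 is prime."""
    return n > 1 and all(n % d != 0 for d in range(2, n))

def factor_pair(n):
    """Exhibit a nontrivial factorization, as in the text's proof that
    6 = 2 * 3 is composite. Returns None if n has no such factorization."""
    for d in range(2, n):
        if n % d == 0:
            return (d, n // d)
    return None

print(factor_pair(6))  # (2, 3): a witness that 6 is composite
print(is_prime(7))     # True: no divisor strictly between 1 and 7
```

The contrast with the flagellum argument is the point: these checks terminate with certainty, whereas an empirical design argument can only weigh probabilities.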


Rosenhouse’s Agenda

So, how did Rosenhouse arrive at the conclusion that I’m offering a mathematical proof of the flagellum’s design? I suspect the problem is Rosenhouse’s agenda, which is to discredit my work on intelligent design irrespective of its merit. Rosenhouse has no incentive to read my work carefully or to portray it accurately. For instance, he seizes on a probabilistic argument that I make for the flagellum’s design in my 2002 book No Free Lunch, characterizing it as a mathematical proof, and a failed one at that. But he has no possible justification for calling what I do there a mathematical proof. Note how I wrap up that argument — the very language used is as far from a mathematical proof as one can find (and I’ve proved my share of mathematical theorems, so I know):


Although it may seem as though I have cooked these numbers, in fact I have tried to be conservative with all my estimates. To be sure, there is plenty of biological work here to be done. The big challenge is to firm up these numbers and make sure they do not cheat in anybody’s favor. Getting solid, well-confirmed estimates for perturbation tolerance and perturbation identity factors [used to estimate probabilities gauging evolvability] will require careful scientific investigation. Such estimates, however, are not intractable. Perturbation tolerance factors can be assessed empirically by random substitution experiments where one, two, or a few substitutions are made. 


NO FREE LUNCH, PP. 301–302

Obviously, I’ve used mathematics here to make an argument. But equally obviously, I’m not claiming to have provided a mathematical proof. In the section where this quote appears, I’m laying out various mathematical and probabilistic techniques that can be used to make an evidential case for the flagellum’s design. It’s not a mathematical proof but an evidential argument, and not even a full-fledged evidential argument so much as a template for such an argument. In other words, I’m laying out what such an argument would look like if one filled in the biological and probabilistic details. 


All or Nothing

As such, the argument falls short of deductive certainty. Mathematical proof is all or nothing. Evidential support comes in degrees. The point of evidential arguments is to increase the degree of support for a claim, in this case for the claim that the flagellum is intelligently designed. A dispassionate reader would regard my conclusion here as measured and modest. Rosenhouse’s refutation, by contrast, is to set up a strawman, so overstating the argument that it can’t have any merit.


The perturbation tolerance and perturbation identity factors mentioned here refer to the types of neighborhoods that are relevant to evolutionary pathways. Such neighborhoods and pathways were the subject of the two previous posts in this review series. These perturbation factors are probabilistic tools for investigating the evolvability of systems like the flagellum. They presuppose some technical sophistication, but their point is to try honestly to come to terms with the probabilities that are actually involved with real biological systems.


At this point, Rosenhouse might feign shock, suggesting that I give the impression of presenting a bulletproof argument for the design of the flagellum, but that I’m now backpedaling, only to admit that the probabilistic evidence for the design of the flagellum is tentative. But here’s what’s actually happening. Mike Behe, in defining irreducible complexity, has identified a class of biological systems (those that are irreducibly complex) that resist Darwinian explanations and that implicate design. At the same time, there’s also this method for inferring design developed by Dembski. What happens if that method is applied to irreducibly complex systems? Can it infer design for such systems? That’s the question I’m trying to answer, and specifically for the flagellum.


Begging the Question?

Since the design inference, as a method, infers design by identifying what’s called specified complexity (more on this is coming up), Rosenhouse claims that my argument begs the question. Thus, I’m supposed to be presupposing that irreducible complexity makes it impossible for a system to evolve by Darwinian means. And from there I’m supposed to conclude that it must be highly improbable that it could evolve by Darwinian means (if it’s impossible, then it’s improbable). But that’s not what I’m doing. Instead, I’m using irreducible complexity as a signpost of where to look for biological improbability. Specifically, I’m using particular features of an irreducibly complex system like the bacterial flagellum to estimate probabilities related to its evolvability. I conclude, in the case of the flagellum, that those probabilities seem low and warrant a design inference. 


Now I might be wrong (that’s why I say the numbers need to be firmed up and we need to make sure no one is cheating). To this day, I’m not totally happy with the actual numbers in the probability calculation for the bacterial flagellum as presented in my book No Free Lunch. But that’s no reason for Rosenhouse and his fellow Darwinists to celebrate. The fact is that they have no probability estimates at all for the evolution of these systems. Worse yet, because they are so convinced that these systems evolved by Darwinian means, they know in advance, simply from their armchairs, that the probabilities must be high. The point of that section in No Free Lunch was less to do a definitive calculation for the flagellum than to lay out the techniques for calculating probabilities in such cases (such as the perturbation probabilities).


In his book, Rosenhouse claims that I have “only once tried to apply [my] method to an actual biological system” (p. 137), that being to the flagellum in No Free Lunch. And, obviously, he thinks I failed in that regard. But as it is, I have applied the method elsewhere, and with more convincing numbers. See, for instance, my analysis of Doug Axe’s investigation into the evolvability of enzyme folds in my 2008 book The Design of Life (co-authored with Jonathan Wells; see chapter seven). My design inferential method yields much firmer conclusions there than for the flagellum for two reasons: (1) the numbers come from the biology as calculated by biologists (in this case, the biologist is Axe), and (2) the systems in question (small enzymatic proteins with 150 or so amino acids) are much easier to analyze than big molecular machines like the flagellum, which have tens of thousands of protein subunits. 


Hiding Behind Complexities

Darwinists have always hidden behind the complexities of biological systems. Instead of coming to terms with the complexities, they turn the tables and say: “Prove us wrong and show that these systems didn’t evolve by Darwinian means.” As always, they assume no burden of proof. Given the slipperiness of the Darwinian mechanism, in which all interesting evolution happens by co-option and coevolution, where structures and functions must both change in concert and crucial evolutionary intermediates never quite get explicitly identified, Darwinists have essentially insulated their theory from challenge. So the trick for design theorists looking to apply the design inferential method to actual biological systems is to find a Goldilocks zone in which a system is complex enough to yield design if the probabilities can be calculated and yet simple enough for the probabilities actually to be calculated. Doug Axe’s work is, in my view, the best in this respect. We’ll return to it since Axe also comes in for criticism from Rosenhouse.


Next, “Jason Rosenhouse and Specified Complexity.”


Editor’s note: This review is cross-posted with permission of the author from BillDembski.com.

Peace: JEHOVAH'S gift to his people.

Malachi 3:18 NIV: "And you will again see the distinction between the righteous and the wicked, between those who serve God and those who do not."

1 John 3:10 NIV: "This is how we know who the children of God are and who the children of the devil are: Anyone who does not do what is right is not God’s child, nor is anyone who does not love their brother and sister."

Micah 4:1-3 ASV: "But in the latter days it shall come to pass, that the mountain of Jehovah's house shall be established on the top of the mountains, and it shall be exalted above the hills; and peoples shall flow unto it.


2 And many nations shall go and say, Come ye, and let us go up to the mountain of Jehovah, and to the house of the God of Jacob; and he will teach us of his ways, and we will walk in his paths. For out of Zion shall go forth the law, and the word of Jehovah from Jerusalem;


3 and he will judge between many peoples, and will decide concerning strong nations afar off: and they shall beat their swords into plowshares, and their spears into pruning-hooks; nation shall not lift up sword against nation, neither shall they learn war any more."

Peace is the metric by which a distinction is to be made, not merely between the individual who has truly dedicated himself to JEHOVAH'S service and the one whose profession of such a dedication is questionable, but also between the people who are truly in a covenant relationship with the one true God and the churches whose profession of such a relationship cannot withstand unbiased scrutiny.

Wednesday 22 June 2022

Darwinism's deafening silence on a plausible path to new organs.

The Silence of the Evolutionary Biologists

William A. Dembski

I am reviewing Jason Rosenhouse’s new book, The Failures of Mathematical Anti-Evolutionism (Cambridge University Press), serially. For the full series so far, go here.


The Darwinian community has been strikingly unsuccessful in showing how complex biological adaptations evolved, or even how they might have evolved, in terms of detailed step-by-step pathways between different structures performing different functions (pathways that must exist if Darwinian evolution holds). Jason Rosenhouse admits the problem when he says that Darwinians lack “direct evidence” of evolution and must instead depend on “circumstantial evidence.” (pp. 47–48) He elaborates: “As compelling as the circumstantial evidence for evolution is, it would be better to have direct experimental confirmation. Sadly, that is impossible. We have only the one run of evolution on this planet to study, and most of the really cool stuff happened long ago.” (p. 208) How very convenient. 


Design theorists see the lack of direct evidence for Darwinian processes creating all that “cool stuff” — in the ancient past no less — as a problem for Darwinism. Moreover, they are unimpressed with the circumstantial evidence that convinces Darwinists that Darwin got it right. Rosenhouse, for instance, smugly informs his readers that “eye evolution is no longer considered to be especially mysterious.” (p. 54) It’s not that the human eye and the visual cortex with which it is integrated are even remotely well enough understood to underwrite a realistic model of how the human eye might have evolved. The details of eye evolution, if such details even exist, remain utterly mysterious.


A Crude Similarity Metric

Instead, Rosenhouse does the only thing that Darwinists can do when confronted with the eye: point out that eyes of many different complexities exist in nature, relate them according to some crude similarity metric (whether structurally or genetically), and then simply posit that gradual step-by-step evolutionary paths connecting them exist (perhaps by drawing arrows to connect similar eyes). Sure, Darwinists can produce endearing computer models of eye evolution (what two virtual objects can’t be made to evolve into each other on a computer?). And they can look for homologous genes and proteins among differing eyes (big surprise that similar structures may use similar proteins). But eyes have to be built in embryological development, and eyes evolving by Darwinian means need a step-by-step path to get from one to the other. No such details are ever forthcoming. Credulity is the sin of Darwinists.


Intelligent design’s scientific program can thus, at least in part, be viewed as an attempt to unmask Darwinist credulity. The task, accordingly, is to find complex biological systems that convincingly resist a gradual step-by-step evolution. Alternatively, it is to find systems that strongly implicate evolutionary discontinuity with respect to the Darwinian mechanism because their evolution can be seen to require multiple coordinated mutations that cannot be reduced to small mutational steps. Michael Behe’s irreducibly complex molecular machines, such as the bacterial flagellum, described in his 1996 book Darwin’s Black Box, provided a rich set of examples for such evolutionary discontinuity. By definition, a system is irreducibly complex if it has a core set of components such that removing any one of them causes the system to lose its original function.


No Plausible Pathways

Interestingly, in the two and a half decades since Behe published that book, no convincing, or even plausible, detailed Darwinian pathways have been put forward to explain the evolution of these irreducibly complex systems. On laying out such pathways, the silence of evolutionary biologists is complete. That is not to say they are silent on the topic altogether: Darwinian biologists continue to proclaim that irreducibly complex biochemical systems like the bacterial flagellum have evolved and that intelligent design is wrong to regard them as designed. But such talk lacks scientific substance.


Next, “From Darwinists, a Shift in Tone on Nanomachines.”


Editor’s note: This review is cross-posted with permission of the author from BillDembski.com.

For Darwinism humor is no laughing matter.

There’s Nothing Funny About Evolution

Geoffrey Simmons


Just as each of us is given genetic blueprints at conception, blueprints for pumping blood, exchanging carbon dioxide for oxygen, digesting food, eliminating waste, and retaining memories, so we come with a built-in sense of humor. Could our sense of humor have evolved, meaning come about by millions of tiny, modifying, successive steps over millions of years? Or did it arrive in one lump sum, by design? There are good reasons to suspect the latter. But first, some background musings.


For one thing, genetic studies suggest that people with a better sense of humor tend to carry the shorter allele of 5-HTTLPR, a variable region of the serotonin transporter gene. In addition, we know there are many physiological benefits to laughter: oxygenation is increased, cardiac function is improved, stress hormones such as cortisol and adrenaline are reduced, the immune system is charged up, and the dopaminergic system, which fights depression, is strengthened.


Norman Cousins, a past adjunct professor at UCLA, in his book Anatomy of an Illness as Perceived by the Patient and in an article in The New England Journal of Medicine, wrote about how he lowered his pain levels from ankylosing spondylitis from a 10 to a 2. Ten minutes of laughter gave him two hours of pain-free sleep. Much of this laughter came from watching TV. Nowadays, if one is over 13 years old, one might need to find a different medium.


We’re told that laughing 100 times is equal to 10 minutes on a rowing machine or 15 minutes on an exercise bike. Perhaps one could frequent a comedy club nightly and skip those painful, daily exercises. Humor helps us when times are stressful, when we’re courting, and when we’re depressed. Students enjoy their teachers, pay more attention, and remember more information when humor is added to classroom instruction. Humor promotes better bonding between student and teacher, and between most couples. It also helps with hostage negotiations.


A Darwinian Scenario

If our sense of humor came about by tiny steps, like other functions, as proposed by Charles Darwin, scientists have yet to find proof of it. Think of it: can hearing the beginning words of a joke even be funny? Is there any benefit to survival in one-word jokes that eventually become two- and three-word jokes? I doubt it, but that’s just my personal opinion.


Fish talk by means of gestures, electrical impulses, bioluminescence, and sounds like hard-to-hear purrs, croaks, and pops. But, did they (or could they) bring their jokes ashore millions of years ago? Of course, there’s no evidence of that. Yet? Just maybe one might envision the fish remaining in the water teasing the more adventuresome fish about their ooohs and aahs, issued while walking across burning-hot sands. 


Tickling a Rat

Laughing while being tickled is not the same as having a sense of humor. The response to someone reaching into one’s armpit is a neurological and physiological reaction to being touched. For some, tickling is torture. I had one rather serious female patient, who, when undressed and covered with a sheet, was ticklish from her neck to her toes. She was nearly impossible to examine. Sometimes she would start laughing as I approached her.


One can tickle a rat, and given the right equipment, record odd utterances that might be laughter. But it might easily be profanity. Some say one can tickle a sting ray, but others say the animal is suffocating. Attempts to tickle a crocodile and other wild animals have not been conducted, as far as I’m aware, in any depth. Also, such attempts are not recommended.


Laughing is clearly part of the human package, part of our design. As I see it, there can only be two possible origins. Humor evolved very, very slowly, or it came about more quickly by intelligent design. Negative feedback loops might argue against the slow development. Some fringe thinkers might speculate that extraterrestrials passed on their sense of humor to us, millions of years ago, but, if so, jokes about the folks in the Andromeda galaxy are on a different wavelength. Jokes about Uranus, of course, are local.


Sorry About that Last One, Folks

A sense of humor varies from person to person, much like height, weight, and abdominal girth. Plus, there are gender differences. Women like men who make them laugh; men like women who laugh at their jokes. Comedians say a sense of humor is a mating signal indicating high intelligence. People on Internet dating sites often ask each other about their sense of humor. Of course, we all have great senses of humor. Just ask anyone.


A sense of humor is often highly valued. Couples get along better when they have similar senses of humor. Mutation is more likely to ruin a good joke than help it. A serious mutation might take out the entire punchline. Jokes about a partner’s looks or clothes are to be avoided. They might lead to domestic abuse. Happy tears are chemically different from sad tears. Both are different from the tears that cleanse the eye with each blink or react to infections. Can anyone explain that? Could specific tears have come about by accident?


We know laughing is a normal human activity. Some days are better than others. Human babies often smile and giggle before they are two months old, years before they will understand a good riddle. Deaf and blind babies smile and giggle at virtually that same age. Is that present to make them more lovable? Children laugh up to 400 times a day, adults only 15 times per day. This could mean we need to hear many more jokes on a daily basis.


What Humor Means

We all think we know what humor means, but because it can vary among people, we really don’t. An amusing joke told man-to-man might be a nasty joke if told man-to-woman, or the other way around. Humor tends to be intangible. It’s somewhat like certain foods tasting good to you, but maybe not to me. Too salty versus needs more salt? Or sweetener? I once told my medical partner that my wife and I had just seen the funniest movie we had ever seen. He and his wife went out that very night to see it and didn’t find anything in it funny. Nothing at all! Not even the funniest scene I have ever seen in a movie. Go figure.


What does having a good sense of humor mean? Might it be reciting a lot of relevant jokes from a repository, making up funny quips during conversations, or laughing a lot at most anything except someone else’s pain? Or a mix?


There’s a laughter-like sound that is made by chimps, bonobos, and gorillas while playing. But does it mean there’s a sense of humor at work, or monkey profanity? They might be calling each other bad names. Octopuses play but don’t smile or laugh, we think. Dolphins “giggle” using different combinations of whistles and clicks. It does seem like they are laughing at times, but nobody knows for sure. Maybe it’s just a case of anthropomorphizing. The dolphin family has been around approximately 11 million years and the area of their brain that processes language is much larger than ours. They’ve had plenty of time to come up with several good ones.


Koko the Humorous Gorilla

Perhaps the most interesting case was Koko, the gorilla who was taught to sign. She died recently, at age 46. Her vocabulary was at least 1,000 words by signing, and she understood another 2,000 spoken words. Some say she was a jokester. She loved Robin Williams. Maybe adored him. The two would play together for hours. Koko seemed to make up jokes. She once tore the sink out of the wall in her cage; when asked about it, she signed that her pet cat did it. However, the cat wasn’t tall enough.


So I ask again, could a sense of humor have come about by numerous, successive, slight modifications, a Darwinian requirement? If humor fails that test, might humor be the elusive coup de grâce for naturalism? Since irreducible complexity, specified complexity, and topoisomerases haven’t landed the KO against Darwin’s weakening theories, might the answer be as simple as laughing at them?


If a sense of humor were just a variation on tickling, my guess is that comedians would come off the stage or hire teenagers to walk among their audiences to tickle everyone. Imagine being dressed up for the night, maybe eating a fancy meal or drinking expensive champagne, and some grubby kid, who’s paid minimum wage, is reaching into your armpits.


Why Laugh at All? 

Is a sense of humor a byproduct, an accident, or was it installed on purpose? For better health? There definitely seems to be a purpose. Could it be a coping mechanism? Is it the way to meet the right mate? Surely, that must be part of it.


The only evolution-related quip I could think of sums up this discussion rather well:


A little girl asked her mother, “How did the human race come about?”


The mother answered, “God made Adam and Eve. They had children, and so all mankind was made.”


A few days later, the little girl asked her father the same question. The father answered, “Many years ago there were apelike creatures, and we developed from them.”


The confused girl returned to her mother and said, “Mom, how is it possible that you told me that the human race was created by God, and Papa says we developed from ‘apelike creatures’?”


The mother answered, “Well, dear, it is very simple. I told you about the origin of my side of the family, and your father told you about his.”

Man does not compute?

 The Non-Computable Human

Robert J. Marks II


Editor’s note: We are delighted to present an excerpt from Chapter 1 of the new book Non-Computable You: What You Do that Artificial Intelligence Never Will, by computer engineer Robert J. Marks, director of Discovery Institute’s Bradley Center for Natural and Artificial Intelligence.


If you memorized all of Wikipedia, would you be more intelligent? It depends on how you define intelligence. 


Consider John Jay Osborn Jr.’s 1971 novel The Paper Chase. In this semi-autobiographical story about Harvard Law School, students are deathly afraid of Professor Kingsfield’s course on contract law. Kingsfield’s classroom presence elicits both awe and fear. He is the all-knowing professor with the power to make or break every student. He is demanding, uncompromising, and scary smart. In the iconic film adaptation, Kingsfield walks into the room on the first day of class, puts his notes down, turns toward his students, and looms threateningly.


“You come in here with a skull full of mush,” he says. “You leave thinking like a lawyer.” Kingsfield is promising to teach his students to be intelligent like he is. 


One of the law students in Kingsfield’s class, Kevin Brooks, is gifted with a photographic memory. He can read complicated case law and, after one reading, recite it word for word. Quite an asset, right?


Not necessarily. Brooks has a host of facts at his fingertips, but he doesn’t have the analytic skills to use those facts in any meaningful way.


Kevin Brooks’s wife is supportive of his efforts at school, and so are his classmates. But this doesn’t help. A tutor doesn’t help. Although he tries, Brooks simply does not have what it takes to put his phenomenal memorization skills to effective use in Kingsfield’s class. Brooks holds in his hands a million facts that because of his lack of understanding are essentially useless. He flounders in his academic endeavor. He becomes despondent. Eventually he attempts suicide. 


Knowledge and Intelligence

This sad tale highlights the difference between knowledge and intelligence. Kevin Brooks’s brain stored every jot and tittle of every legal case assigned by Kingsfield, but he couldn’t apply the information meaningfully. Memorization of a lot of knowledge did not make Brooks intelligent in the way that Kingsfield and the successful students were intelligent. British journalist Miles Kington captured this distinction when he said, “Knowing a tomato is a fruit is knowledge. Intelligence is knowing not to include it in a fruit salad.”


Which brings us to the point: When discussing artificial intelligence, it’s crucial to define intelligence. Like Kevin Brooks, computers can store oceans of facts and correlations; but intelligence requires more than facts. True intelligence requires a host of analytic skills. It requires understanding; the ability to recognize humor, subtleties of meaning, and symbolism; and the ability to recognize and disentangle ambiguities. It requires creativity.


Artificial intelligence has done many remarkable things. AI has largely replaced travel agents, tollbooth attendants, and mapmakers. But will AI ever replace attorneys, physicians, military strategists, and design engineers, among others?


The answer is no. And the reason is that as impressive as artificial intelligence is — and make no mistake, it is fantastically impressive — it doesn’t hold a candle to human intelligence. It doesn’t hold a candle to you.


And it never will. How do we know? The answer can be stated in a single four-syllable word that needs unpacking before we can contemplate the non-computable you. That word is algorithm. If not expressible as an algorithm, a task is not computable.


Algorithms and the Computable

An algorithm is a step-by-step set of instructions to accomplish a task. A recipe for German chocolate cake is an algorithm. The list of ingredients acts as the input for the algorithm; mixing the ingredients and following the baking and icing instructions will result in a cake.


Likewise, when I give instructions to get to my house, I am offering an algorithm to follow. You are told how far to go and which direction you are to turn on what street. When Google Maps returns a route to go to your destination, it is giving you an algorithm to follow. 


Humans are used to thinking in terms of algorithms. We make grocery lists, we go through the morning procedure of showering, hair combing, teeth brushing, and we keep a schedule of what to do today. Routine is algorithmic. Engineers algorithmically apply Newton’s laws of physics when designing highway bridges and airplanes. Construction plans captured on blueprints are part of an algorithm for building. Likewise, chemical reactions follow algorithms discovered by chemists. And all mathematical proofs are algorithmic; they follow step-by-step procedures built on the foundations of logic and axiomatic presuppositions. 
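The step-by-step character of an algorithm is easy to make concrete in code. Here is a minimal sketch of driving directions like those above, represented as an ordered list of steps executed one after another. The street names and distances are invented for illustration:

```python
# Hypothetical driving directions expressed as an algorithm:
# an ordered list of steps, carried out one after another.
DIRECTIONS = [
    "Head north on Main Street for 2 miles",
    "Turn right onto Oak Street",
    "Continue for half a mile",
    "Turn left into the third driveway",
]

def follow(directions):
    """Carry out each step in order, returning a log of what was done."""
    log = []
    for number, step in enumerate(directions, start=1):
        log.append(f"Step {number}: {step}")
    return log

route = follow(DIRECTIONS)
print("\n".join(route))
```

The essential point is that the procedure is fully specified in advance: given the same input (the list of steps), the same actions happen in the same order every time.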


Algorithms need not be deterministic; they can contain stochastic elements, such as descriptions of random events in population genetics and weather forecasting. The board game Monopoly, for example, follows a fixed set of rules, but the game unfolds through random dice throws and player decisions.
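The Monopoly example can be sketched the same way. The rule below is fixed and fully algorithmic, yet each run differs because of the dice; the 40-square board and two six-sided dice follow the standard game, while everything else is simplified for illustration:

```python
import random

BOARD_SIZE = 40  # a standard Monopoly board has 40 squares

def take_turn(position, rng):
    """A fixed rule with a stochastic input: roll two six-sided dice
    and advance that many squares, wrapping around the board."""
    roll = rng.randint(1, 6) + rng.randint(1, 6)
    return (position + roll) % BOARD_SIZE

rng = random.Random()  # the dice make every game different
position = 0
for _ in range(10):
    position = take_turn(position, rng)
print(f"After 10 turns, the token is on square {position}")
```

Note that seeding the generator (e.g., `random.Random(42)`) makes a “random” run exactly reproducible, which is how stochastic simulations in fields like population genetics are tested: the randomness is itself part of the algorithm.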


Here’s the key: Computers only do what they’re programmed by humans to do, and those programs are all algorithms — step-by-step procedures contributing to the performance of some task. But algorithms are limited in what they can do. That means computers, limited to following algorithmic software, are limited in what they can do.
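That algorithms have hard limits is itself a classical mathematical result, not merely a claim about current technology. Turing’s halting problem shows there can be no algorithm that decides, for every program, whether that program eventually halts. The core of the argument can be sketched in Python; here an infinite loop is stood in for by a returned string so the code can actually run:

```python
def make_troublemaker(claimed_oracle):
    """Given any claimed halting oracle, build a program the oracle
    must misjudge: it does the opposite of whatever is predicted."""
    def troublemaker():
        if claimed_oracle(troublemaker):
            return "loops forever"  # stands in for an actual infinite loop
        return "halts"
    return troublemaker

# Whatever a candidate oracle predicts about its own troublemaker is wrong:
always_says_halts = lambda program: True
t = make_troublemaker(always_says_halts)
print(t())  # the oracle predicted halting; the program does the opposite
```

Since the same trick defeats any candidate oracle, no halting decider can exist; a perfectly well-defined task turns out to be non-computable.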


This limitation is captured by the very word “computer.” In the world of programmers, “algorithmic” and “computable” are often used interchangeably. And since “algorithmic” and “computable” are synonyms, so are “non-computable” and “non-algorithmic.”


Basically, for computers — for artificial intelligence — there’s no other game in town. All computer programs are algorithms; anything non-algorithmic is non-computable and beyond the reach of AI.


But it’s not beyond you. 


Non-Computable You

Humans can behave and respond non-algorithmically. You do so every day. For example, you perform a non-algorithmic task when you bite into a lemon. The lemon juice squirts on your tongue and you wince at the sour flavor. 


Now, consider this: Can you fully convey your experience to a man who was born with no sense of taste or smell? No. You cannot. The goal is not a description of the lemon-biting experience, but its duplication. The lemon’s chemicals and the mechanics of the bite can be described to the man, but the true experience of the lemon taste and aroma cannot be conveyed to someone without the necessary senses.


If biting into a lemon cannot be explained to a man without all his functioning senses, it certainly can’t be duplicated in an experiential way by AI using computer software. Like the man born with no sense of taste or smell, machines do not possess qualia — experiential sensory perceptions such as pain, taste, and smell. 


Qualia are a simple example of the many human attributes that escape algorithmic description. If you can’t formulate an algorithm explaining your lemon-biting experience, you can’t write software to duplicate the experience in the computer.


Or consider another example. I broke my wrist a few years ago, and the physician in the emergency room had to set the broken bones. I’d heard beforehand that bone-setting really hurts. But hearing about pain and experiencing pain are quite different. 


To set my broken wrist, the emergency physician grabbed my hand and arm, pulled, and there was an audible crunching sound as the bones around my wrist realigned. It hurt. A lot. I envied my preteen grandson, who had been anesthetized when his broken leg was set. He slept through his pain.


Is it possible to write a computer program to duplicate — not describe, but duplicate — my pain? No. Qualia are not computable. They’re non-algorithmic.


By definition and in practice, computers function using algorithms. Logically speaking, then, the existence of the non-algorithmic suggests there are limits to what computers and therefore AI can do.

Darwinists attempt to correct God again.

 From Darwinists, a Shift in Tone on Nanomachines

William A. Dembski


I am reviewing Jason Rosenhouse’s new book, The Failures of Mathematical Anti-Evolutionism (Cambridge University Press), serially. For the full series so far, go here.


Unfortunately for Darwinists, irreducible complexity raises real doubts about Darwinism in people’s minds. Something must be done. Rising to the challenge, Darwinists are doing what must be done to control the damage. Take the bacterial flagellum, the poster child of irreducibly complex biochemical machines. Whatever biologists may have thought of its ultimate origins, they tended to regard it with awe. Harvard’s Howard Berg, who discovered that flagellar filaments rotate to propel bacteria through their watery environments, would in public lectures refer to the flagellum as “the most efficient machine in the universe.” (And yes, I realize there are many different bacteria sporting many different variants of the flagellum, including the souped-up hyperdrive magnetotactic bacteria, which swim ten times faster than E. coli — E. coli’s flagellum, however, seems to be the one most studied.)

Why “Machines”?

In 1998, writing for a special issue of Cell, the National Academy of Sciences president at the time, Bruce Alberts, remarked:


We have always underestimated cells… The entire cell can be viewed as a factory that contains an elaborate network of interlocking assembly lines, each of which is composed of a set of large protein machines… Why do we call the large protein assemblies that underlie cell function protein machines? Precisely because, like machines invented by humans to deal efficiently with the macroscopic world, these protein assemblies contain highly coordinated moving parts. [Emphasis in the original.]


A few years later, in 2003, Adam Watkins, introducing a special issue on nanomachines for BioEssays, wrote: 


The articles included in this issue demonstrate some striking parallels between artifactual and biological/molecular machines. In the first place, molecular machines, like man-made machines, perform highly specific functions. Second, the macromolecular machine complexes feature multiple parts that interact in distinct and precise ways, with defined inputs and outputs. Third, many of these machines have parts that can be used in other molecular machines (at least, with slight modification), comparable to the interchangeable parts of artificial machines. Finally, and not least, they have the cardinal attribute of machines: they all convert energy into some form of ‘work’.


Neither of these special issues offered detailed step-by-step Darwinian pathways for how these machine-like biological systems might have evolved, but they did talk up their design characteristics. I belabor these systems and the special treatment they received in these journals because none of the mystery surrounding their origin has in the intervening years been dispelled. Nonetheless, the admiration that they used to inspire has diminished. Consider the following quote about the flagellum from Beeby et al.’s 2020 article on propulsive nanomachines. Rosenhouse cites it approvingly, prefacing the quote by claiming that the flagellum is “not the handiwork of a master engineer, but is more like a cobbled-together mess of kludges” (pp. 151–152):


Many functions of the three propulsive nanomachines are precarious, over-engineered contraptions, such as the flagellar switch to filament assembly when the hook reaches a pre-determined length, requiring secretion of proteins that inhibit transcription of filament components. Other examples of absurd complexity include crude attachment of part of an ancestral ATPase for secretion gate maturation, and the assembly of flagellar filaments at their distal end. All cases are absurd, and yet it is challenging to (intelligently) imagine another solution given the tools (proteins) to hand. Indeed, absurd (or irrational) design appears a hallmark of the evolutionary process of co-option and exaptation that drove evolution of the three propulsive nanomachines, where successive steps into the adjacent possible function space cannot anticipate the subsequent adaptations and exaptations that would then become possible. 


The shift in tone from then to now is remarkable. What happened to the awe these systems used to inspire? Have investigators really learned so much in the intervening years to say, with any confidence, that these systems are indeed over-engineered? To say that something is over-engineered is to say that it could be simplified without loss of function (like a Rube Goldberg device). And what justifies that claim here? Have scientists invented simpler systems that in all potential environments perform as well as or better than the systems in question? Are they able to go into existing flagellar systems, for instance, and swap out the over-engineered parts with these more efficient (sub)systems? Have they in the intervening years gained any real insight into the step-by-step evolution of these systems? Or are they merely engaged in rhetoric to make flagellar motors seem less impressive and thus less plausibly the product of design? To pose these questions is to answer them.


A Quasi-Humean Spirit

Rosenhouse even offers a quasi-Humean anti-design argument. Humans are able to build things like automobiles, but not things like organisms. Accordingly, ascribing design to organisms is an “extravagant extrapolation” from “causes now in operation.” Rosenhouse’s punchline: “Based on our experience, or on comparisons of human engineering to the natural world, the obvious conclusion is that intelligence cannot at all do what they [i.e., ID proponents] claim it can do. Not even close. Their argument is no better than saying that since moles are seen to make molehills, mountains must be evidence for giant moles.” (p. 273) 


Seriously?! As Richard Dawkins has been wont to say, “This is a transparently feeble argument.” So, primitive humans living with stone-age technology, if they were suddenly transported to Dubai, would be unable to get up to speed and recognize design in the technologies on display there? Likewise, we, confronted with space aliens whose technologies can build organisms using ultra-advanced 3D printers, would be unable to recognize that they were building designed objects? I intend these statements as rhetorical questions whose answer is obvious. What underwrites our causal explanations is our exposure to and understanding of the types of causes now in operation, not the idiosyncrasies of their operation. Because we are designers, we can appreciate design even if we are unable to replicate the design ourselves. Lost arts are lost because we are unable to replicate the design, not because we are unable to recognize the design. Rosenhouse’s quasi-Humean anti-design argument is ridiculous.


Next, “Darwinist Turns Math Cop: Track 1 and Track 2.”


Editor’s note: This review is cross-posted with permission of the author from BillDembski.com.