
Wednesday, 29 June 2022

Yet another strawman bully?

 Jason Rosenhouse and “Mathematical Proof”

William A. Dembski


I am reviewing Jason Rosenhouse’s new book, The Failures of Mathematical Anti-Evolutionism (Cambridge University Press), serially. For the full series so far, go here.


A common rhetorical ploy is to overstate an opponent’s position so much that it becomes untenable and even ridiculous. Jason Rosenhouse deploys this tactic repeatedly throughout his book. Design theorists, for instance, argue that there’s good evidence to think that the bacterial flagellum is designed, and they see mathematics as relevant to making such an evidential case. Yet with reference to the flagellum, Rosenhouse writes, “Anti-evolutionists make bold, sweeping claims that some complex system [here, the flagellum] could not have arisen through evolution. They tell the world they have conclusive mathematical proof of this.” (p. 152) I am among those who have made a mathematical argument for the design of the flagellum. And so, Rosenhouse levels that charge specifically against me: “Dembski claims his methods allow him to prove mathematically that evolution has been refuted …” (p. 136)


Rosenhouse, as a mathematician, must at some level realize that he’s prevaricating. It’s one thing to use mathematics in an argument. It’s quite another to say that one is offering a mathematical proof. The latter is much, much stronger than the former, and Rosenhouse knows the difference. I’ve never said that I’m offering a mathematical proof that systems like the flagellum are designed. Mathematical proofs leave no room for fallibility or error. Intelligent design arguments use mathematics, but like all empirical arguments they fall short of the deductive certainty of mathematical proof. I can prove mathematically that 6 is a composite number by pointing to 2 and 3 as factors. I can prove mathematically that 7 is a prime number by running through all the numbers greater than 1 and less than 7, showing that none of them divide it. But no mathematical proof that the flagellum is designed exists, and no design theorist that I know has ever suggested otherwise.
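To see how cut-and-dried such proofs are, here is a minimal sketch in Python of the trial-division reasoning just described (an illustration added for concreteness; it appears in neither Rosenhouse's book nor the original argument):

def is_prime(n):
    # Trial division: test every candidate divisor greater than 1 and less than n,
    # exactly as described above for the number 7.
    if n < 2:
        return False
    for d in range(2, n):
        if n % d == 0:
            return False  # a divisor was found, so n is composite
    return True

print(is_prime(7))  # True: none of 2 through 6 divides 7
print(is_prime(6))  # False: 2 and 3 are factors, so 6 is composite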


Rosenhouse’s Agenda

So, how did Rosenhouse arrive at the conclusion that I’m offering a mathematical proof of the flagellum’s design? I suspect the problem is Rosenhouse’s agenda, which is to discredit my work on intelligent design irrespective of its merit. Rosenhouse has no incentive to read my work carefully or to portray it accurately. For instance, he seizes on a probabilistic argument that I make for the flagellum’s design in my 2002 book No Free Lunch, characterizing it as a mathematical proof, and a failed one at that. But he has no possible justification for calling what I do there a mathematical proof. Note how I wrap up that argument — the very language used is as far from a mathematical proof as one can find (and I’ve proved my share of mathematical theorems, so I know):


Although it may seem as though I have cooked these numbers, in fact I have tried to be conservative with all my estimates. To be sure, there is plenty of biological work here to be done. The big challenge is to firm up these numbers and make sure they do not cheat in anybody’s favor. Getting solid, well-confirmed estimates for perturbation tolerance and perturbation identity factors [used to estimate probabilities gauging evolvability] will require careful scientific investigation. Such estimates, however, are not intractable. Perturbation tolerance factors can be assessed empirically by random substitution experiments where one, two, or a few substitutions are made. 


NO FREE LUNCH, PP. 301–302

Obviously, I’ve used mathematics here to make an argument. But equally obviously, I’m not claiming to have provided a mathematical proof. In the section where this quote appears, I’m laying out various mathematical and probabilistic techniques that can be used to make an evidential case for the flagellum’s design. It’s not a mathematical proof but an evidential argument, and not even a full-fledged evidential argument so much as a template for such an argument. In other words, I’m laying out what such an argument would look like if one filled in the biological and probabilistic details. 


All or Nothing

As such, the argument falls short of deductive certainty. Mathematical proof is all or nothing. Evidential support comes in degrees. The point of evidential arguments is to increase the degree of support for a claim, in this case for the claim that the flagellum is intelligently designed. A dispassionate reader would regard my conclusion here as measured and modest. Rosenhouse’s refutation, by contrast, is to set up a strawman, so overstating the argument that it can’t have any merit.


The reference to perturbation tolerance and perturbation identity factors here refers to the types of neighborhoods that are relevant to evolutionary pathways. Such neighborhoods and pathways were the subject of the two previous posts in this review series. These perturbation factors are probabilistic tools for investigating the evolvability of systems like the flagellum. They presuppose some technical sophistication, but their point is to try honestly to come to terms with the probabilities that are actually involved with real biological systems. 


At this point, Rosenhouse might feign shock, suggesting that I gave the impression of presenting a bulletproof argument for the design of the flagellum but am now backpedaling, admitting that the probabilistic evidence for its design is only tentative. But here’s what’s actually happening. Mike Behe, in defining irreducible complexity, has identified a class of biological systems (those that are irreducibly complex) that resist Darwinian explanations and that implicate design. At the same time, there’s also this method for inferring design developed by Dembski. What happens if that method is applied to irreducibly complex systems? Can it infer design for such systems? That’s the question I’m trying to answer, and specifically for the flagellum.


Begging the Question?

Since the design inference, as a method, infers design by identifying what’s called specified complexity (more on this is coming up), Rosenhouse claims that my argument begs the question. Thus, I’m supposed to be presupposing that irreducible complexity makes it impossible for a system to evolve by Darwinian means. And from there I’m supposed to conclude that it must be highly improbable that it could evolve by Darwinian means (if it’s impossible, then it’s improbable). But that’s not what I’m doing. Instead, I’m using irreducible complexity as a signpost of where to look for biological improbability. Specifically, I’m using particular features of an irreducibly complex system like the bacterial flagellum to estimate probabilities related to its evolvability. I conclude, in the case of the flagellum, that those probabilities seem low and warrant a design inference. 


Now I might be wrong (that’s why I say the numbers need to be firmed up and we need to make sure no one is cheating). To this day, I’m not totally happy with the actual numbers in the probability calculation for the bacterial flagellum as presented in my book No Free Lunch. But that’s no reason for Rosenhouse and his fellow Darwinists to celebrate. The fact is that they have no probability estimates at all for the evolution of these systems. Worse yet, because they are so convinced that these systems evolved by Darwinian means, they know in advance, simply from their armchairs, that the probabilities must be high. The point of that section in No Free Lunch was less to do a definitive calculation for the flagellum than to lay out the techniques for calculating probabilities in such cases (such as the perturbation probabilities). 


In his book, Rosenhouse claims that I have “only once tried to apply [my] method to an actual biological system” (p. 137), that being to the flagellum in No Free Lunch. And, obviously, he thinks I failed in that regard. But as it is, I have applied the method elsewhere, and with more convincing numbers. See, for instance, my analysis of Doug Axe’s investigation into the evolvability of enzyme folds in my 2008 book The Design of Life (co-authored with Jonathan Wells; see chapter seven). My design inferential method yields much firmer conclusions there than for the flagellum for two reasons: (1) the numbers come from the biology as calculated by biologists (in this case, the biologist is Axe), and (2) the systems in question (small enzymatic proteins with 150 or so amino acids) are much easier to analyze than big molecular machines like the flagellum, which have tens of thousands of protein subunits. 


Hiding Behind Complexities

Darwinists have always hidden behind the complexities of biological systems. Instead of coming to terms with the complexities, they turn the tables and say: “Prove us wrong and show that these systems didn’t evolve by Darwinian means.” As always, they assume no burden of proof. Given the slipperiness of the Darwinian mechanism, in which all interesting evolution happens by co-option and coevolution, where structures and functions must both change in concert and crucial evolutionary intermediates never quite get explicitly identified, Darwinists have essentially insulated their theory from challenge. So the trick for design theorists looking to apply the design inferential method to actual biological systems is to find a Goldilocks zone in which a system is complex enough to yield design if the probabilities can be calculated and yet simple enough for the probabilities actually to be calculated. Doug Axe’s work is, in my view, the best in this respect. We’ll return to it since Axe also comes in for criticism from Rosenhouse.


Next, “Jason Rosenhouse and Specified Complexity.”


Editor’s note: This review is cross-posted with permission of the author from BillDembski.com.

Peace: JEHOVAH'S gift to his people.

Malachi3:18NIV"And you will again see the distinction between the righteous and the wicked, between those who serve God and those who do not."

1John3:10NIV"This is how we know who the children of God are and who the children of the devil are: Anyone who does not do what is right is not God’s child, nor is anyone who does not love their brother and sister."

Micah4:1-3ASV"But in the latter days it shall come to pass, that the mountain of Jehovah's house shall be established on the top of the mountains, and it shall be exalted above the hills; and peoples shall flow unto it.


2And many nations shall go and say, Come ye, and let us go up to the mountain of Jehovah, and to the house of the God of Jacob; and he will teach us of his ways, and we will walk in his paths. For out of Zion shall go forth the law, and the word of Jehovah from Jerusalem;


3and he will judge between many peoples, and will decide concerning strong nations afar off: and they shall beat their swords into plowshares, and their spears into pruning-hooks; nation shall not lift up sword against nation, neither shall they learn war any more."

Peace is the metric by which a distinction is to be made, not merely between the individual who has truly dedicated himself to JEHOVAH'S service and the one whose profession of such a dedication is questionable, but also between the people who are truly in a covenant relationship with the one true God and the churches whose profession of such a relationship cannot withstand unbiased scrutiny. 

Wednesday, 22 June 2022

Darwinism's deafening silence on a plausible path to new organs.

 The Silence of the Evolutionary Biologists

William A. Dembski

I am reviewing Jason Rosenhouse’s new book, The Failures of Mathematical Anti-Evolutionism (Cambridge University Press), serially. For the full series so far, go here.


The Darwinian community has been strikingly unsuccessful in showing how complex biological adaptations evolved, or even how they might have evolved, in terms of detailed step-by-step pathways between different structures performing different functions (pathways that must exist if Darwinian evolution holds). Jason Rosenhouse admits the problem when he says that Darwinians lack “direct evidence” of evolution and must instead depend on “circumstantial evidence.” (pp. 47–48) He elaborates: “As compelling as the circumstantial evidence for evolution is, it would be better to have direct experimental confirmation. Sadly, that is impossible. We have only the one run of evolution on this planet to study, and most of the really cool stuff happened long ago.” (p. 208) How very convenient. 


Design theorists see the lack of direct evidence for Darwinian processes creating all that “cool stuff” — in the ancient past no less — as a problem for Darwinism. Moreover, they are unimpressed with the circumstantial evidence that convinces Darwinists that Darwin got it right. Rosenhouse, for instance, smugly informs his readers that “eye evolution is no longer considered to be especially mysterious.” (p. 54) It’s not that the human eye and the visual cortex with which it is integrated are even remotely well enough understood to underwrite a realistic model of how the human eye might have evolved. The details of eye evolution, if such details even exist, remain utterly mysterious.


A Crude Similarity Metric

Instead, Rosenhouse does the only thing that Darwinists can do when confronted with the eye: point out that eyes of many different complexities exist in nature, relate them according to some crude similarity metric (whether structurally or genetically), and then simply posit that gradual step-by-step evolutionary paths connecting them exist (perhaps by drawing arrows to connect similar eyes). Sure, Darwinists can produce endearing computer models of eye evolution (what two virtual objects can’t be made to evolve into each other on a computer?). And they can look for homologous genes and proteins among differing eyes (big surprise that similar structures may use similar proteins). But eyes have to be built in embryological development, and eyes evolving by Darwinian means need a step-by-step path to get from one to the other. No such details are ever forthcoming. Credulity is the sin of Darwinists.


Intelligent design’s scientific program can thus, at least in part, be viewed as an attempt to unmask Darwinist credulity. The task, accordingly, is to find complex biological systems that convincingly resist a gradual step-by-step evolution. Alternatively, it is to find systems that strongly implicate evolutionary discontinuity with respect to the Darwinian mechanism because their evolution can be seen to require multiple coordinated mutations that cannot be reduced to small mutational steps. Michael Behe’s irreducibly complex molecular machines, such as the bacterial flagellum, described in his 1996 book Darwin’s Black Box, provided a rich set of examples for such evolutionary discontinuity. By definition, a system is irreducibly complex if removing any one of its core components causes it to lose its original function.


No Plausible Pathways

Interestingly, in the two and a half decades since Behe published that book, no convincing, or even plausible, detailed Darwinian pathways have been put forward to explain the evolution of these irreducibly complex systems. The silence of evolutionary biologists in laying out such pathways is complete. Which is not to say that they are silent on this topic. Darwinian biologists continue to proclaim that irreducibly complex biochemical systems like the bacterial flagellum have evolved and that intelligent design is wrong to regard them as designed. But such talk lacks scientific substance.


Next, “From Darwinists, a Shift in Tone on Nanomachines.”


Editor’s note: This review is cross-posted with permission of the author from BillDembski.com.

For Darwinism humor is no laughing matter.

 There’s Nothing Funny About Evolution

Geoffrey Simmons


Much as each of us is given genetic blueprints at conception (blueprints for pumping blood, exchanging carbon dioxide for oxygen, digesting food, eliminating it, and retaining memories), we come with a built-in sense of humor. Could our sense of humor have evolved, meaning come about by millions of tiny, modifying, successive steps over millions of years? Or did it arrive in one lump sum, by design? There are good reasons to suspect the latter. But first, some background musings.


For one thing, genetic studies suggest that folks with a better sense of humor carry a shorter allele of 5-HTTLPR, a variant region of the serotonin transporter gene. In addition, we know there are many physiological benefits to laughter. Oxygenation is increased, cardiac function is improved, stress hormones such as cortisol and adrenaline are reduced, the immune system is charged up, and the dopaminergic system, which fights depression, is strengthened.


Norman Cousins, a past Adjunct Professor at UCLA, wrote in his book Anatomy of an Illness as Perceived by the Patient, and in an article in The New England Journal of Medicine, about how he lowered his pain levels from ankylosing spondylitis from a 10 to a 2. Ten minutes of laughter gave him two hours of pain-free sleep. Much of this laughter came from watching TV. Nowadays, if one is over 13 years old, one might need to find a different medium.


We’re told that laughing 100 times is equal to 10 minutes on a rowing machine or 15 minutes on an exercise bike. Perhaps one could frequent a comedy club nightly and skip those painful, daily exercises. Humor helps us when times are stressful, when we’re courting, and when we’re depressed. Students enjoy their teachers, pay more attention, and remember more information when humor is added to classroom instruction. Humor promotes better bonding between student and teacher, and between most couples. It also helps with hostage negotiations.


A Darwinian Scenario

If our sense of humor came about by tiny steps, like other functions, as proposed by Charles Darwin, scientists have yet to find proof of it. Think of it: can hearing the beginning words of a joke even be funny? Is there any benefit to survival with one-word jokes that eventually become two- and three-word jokes? I doubt it, but that’s just my personal opinion. 


Fish talk by means of gestures, electrical impulses, bioluminescence, and sounds like hard-to-hear purrs, croaks, and pops. But, did they (or could they) bring their jokes ashore millions of years ago? Of course, there’s no evidence of that. Yet? Just maybe one might envision the fish remaining in the water teasing the more adventuresome fish about their ooohs and aahs, issued while walking across burning-hot sands. 


Tickling a Rat

Laughing while being tickled is not the same as having a sense of humor. The response to someone reaching into one’s armpit is a neurological and physiological reaction to being touched. For some, tickling is torture. I had one rather serious female patient, who, when undressed and covered with a sheet, was ticklish from her neck to her toes. She was nearly impossible to examine. Sometimes she would start laughing as I approached her.


One can tickle a rat, and given the right equipment, record odd utterances that might be laughter. But it might easily be profanity. Some say one can tickle a sting ray, but others say the animal is suffocating. Attempts to tickle a crocodile and other wild animals have not been conducted, as far as I’m aware, in any depth. Also, such attempts are not recommended.


Laughing is clearly part of the human package, part of our design. As I see it, there can only be two possible origins. Humor evolved very, very slowly, or it came about more quickly by intelligent design. Negative feedback loops might argue against the slow development. Some fringe thinkers might speculate that extraterrestrials passed on their sense of humor to us, millions of years ago, but, if so, jokes about the folks in the Andromeda galaxy are on a different wavelength. Jokes about Uranus, of course, are local.


Sorry About that Last One, Folks

A sense of humor varies from person to person, much like height, weight, and abdominal girth. Plus, there are gender differences. Women like men who make them laugh; men like women who laugh at their jokes. Comedians say a sense of humor is a mating signal indicating high intelligence. People on Internet dating sites often ask each other about their sense of humor. Of course, we all have great senses of humor. Just ask anyone.


A sense of humor is often highly valued. Couples get along better when they have similar senses of humor. Mutation is more likely to ruin a good joke than help it. A serious mutation might take out the entire punchline. Jokes about a partner’s looks or clothes are to be avoided. They might lead to domestic abuse. Happy tears are chemically different from sad tears. Both are different from the tears that cleanse the eye with each blink or react to infections. Can anyone explain that? Could specific tears have come about by accident?


We know laughing is a normal human activity. Some days are better than others. Human babies often smile and giggle before they are two months old, years before they will understand a good riddle. Deaf and blind babies smile and giggle at virtually that same age. Is that present to make them more lovable? Children laugh up to 400 times a day, adults only 15 times per day. This could mean we need to hear many more jokes on a daily basis.


What Humor Means

 We all think we know what humor means, but because it can vary among people, we really don’t. An amusing joke told man-to-man might be a nasty joke if told man-to-woman. Or, the other way around. Humor tends to be intangible. It’s somewhat like certain foods tasting good to you, but maybe not to me. Too salty versus needs more salt? Or sweetener? I once told my medical partner that my wife and I had just seen the funniest movie we had ever seen. He and his wife went out that very night to see it and didn’t find anything in it funny. Nothing at all! Not even the funniest scene I have ever seen in a movie. Go figure. 


What does having a good sense of humor mean? Might it be reciting a lot of relevant jokes from a repository, making up funny quips during conversations, or laughing a lot at most anything except someone else’s pain? Or a mix?


There’s a laughter-like sound that is made by chimps, bonobos, and gorillas while playing. But does it mean there’s a sense of humor at work, or monkey profanity? They might be calling each other bad names. Octopuses play but don’t smile or laugh, we think. Dolphins “giggle” using different combinations of whistles and clicks. It does seem like they are laughing at times, but nobody knows for sure. Maybe it’s just a case of anthropomorphizing. The dolphin family has been around approximately 11 million years and the area of their brain that processes language is much larger than ours. They’ve had plenty of time to come up with several good ones.


Koko the Humorous Gorilla

Perhaps the most interesting case was Koko the gorilla, who was taught to sign. She died in 2018 at the age of 46. Her vocabulary was at least 1,000 words by signing and another 2,000 words by hearing. Some say she was a jokester. She loved Robin Williams. Maybe adored him. The two would play together for hours. Koko seemed to make up jokes. She once tore the sink out of the wall in her cage; when asked about it, she signed that her pet cat did it. However, the cat wasn’t tall enough.


 So I ask again, could a sense of humor have come about by numerous, successive, slight modifications, a Darwinian requirement? If humor fails that test, might humor be the elusive coup de grace for naturalism? Since irreducible complexity, specified complexity, and topoisomerases haven’t landed the KO to Darwin’s weakening theories, might the answer just be as simple as laughing at them?


If a sense of humor were just a variation on tickling, my guess is that comedians would come off the stage or hire teenagers to walk among their audiences to tickle everyone. Imagine being dressed up for the night, maybe eating a fancy meal or drinking expensive champagne, and some grubby kid, who’s paid minimum wage, is reaching into your armpits.


Why Laugh at All? 

Is a sense of humor a byproduct, an accident, or was it installed on purpose? For better health? There definitely seems to be a purpose. Could it be a coping mechanism? Is it the way to meet the right mate? Surely, that must be part of it.


The only evolution-related quip I could think of sums up this discussion rather well:


A little girl asked her mother, “How did the human race come about?”


The mother answered, “God made Adam and Eve. They had children, and so all mankind was made.”


A few days later, the little girl asked her father the same question. The father answered, “Many years ago there were apelike creatures, and we developed from them.”


The confused girl returned to her mother and said, “Mom, how is it possible that you told me that the human race was created by God, and Papa says we developed from ‘apelike creatures’?”


The mother answered, “Well, dear, it is very simple. I told you about the origin of my side of the family, and your father told you about his.”

Man does not compute?

 The Non-Computable Human

Robert J. Marks II


Editor’s note: We are delighted to present an excerpt from Chapter 1 of the new book Non-Computable You: What You Do that Artificial Intelligence Never Will, by computer engineer Robert J. Marks, director of Discovery Institute’s Bradley Center for Natural and Artificial Intelligence.


If you memorized all of Wikipedia, would you be more intelligent? It depends on how you define intelligence. 


Consider John Jay Osborn Jr.’s 1971 novel The Paper Chase. In this semi-autobiographical story about Harvard Law School, students are deathly afraid of Professor Kingsfield’s course on contract law. Kingsfield’s classroom presence elicits both awe and fear. He is the all-knowing professor with the power to make or break every student. He is demanding, uncompromising, and scary smart. In the iconic film adaptation, Kingsfield walks into the room on the first day of class, puts his notes down, turns toward his students, and looms threateningly.


“You come in here with a skull full of mush,” he says. “You leave thinking like a lawyer.” Kingsfield is promising to teach his students to be intelligent like he is. 


One of the law students in Kingsfield’s class, Kevin Brooks, is gifted with a photographic memory. He can read complicated case law and, after one reading, recite it word for word. Quite an asset, right?


Not necessarily. Brooks has a host of facts at his fingertips, but he doesn’t have the analytic skills to use those facts in any meaningful way.


Kevin Brooks’s wife is supportive of his efforts at school, and so are his classmates. But this doesn’t help. A tutor doesn’t help. Although he tries, Brooks simply does not have what it takes to put his phenomenal memorization skills to effective use in Kingsfield’s class. Brooks holds in his hands a million facts that because of his lack of understanding are essentially useless. He flounders in his academic endeavor. He becomes despondent. Eventually he attempts suicide. 


Knowledge and Intelligence

This sad tale highlights the difference between knowledge and intelligence. Kevin Brooks’s brain stored every jot and tittle of every legal case assigned by Kingsfield, but he couldn’t apply the information meaningfully. Memorization of a lot of knowledge did not make Brooks intelligent in the way that Kingsfield and the successful students were intelligent. British journalist Miles Kington captured this distinction when he said, “Knowing a tomato is a fruit is knowledge. Intelligence is knowing not to include it in a fruit salad.”


Which brings us to the point: When discussing artificial intelligence, it’s crucial to define intelligence. Like Kevin Brooks, computers can store oceans of facts and correlations; but intelligence requires more than facts. True intelligence requires a host of analytic skills. It requires understanding; the ability to recognize humor, subtleties of meaning, and symbolism; and the ability to recognize and disentangle ambiguities. It requires creativity.


Artificial intelligence has done many remarkable things. AI has largely replaced travel agents, tollbooth attendants, and mapmakers. But will AI ever replace attorneys, physicians, military strategists, and design engineers, among others?


The answer is no. And the reason is that as impressive as artificial intelligence is — and make no mistake, it is fantastically impressive — it doesn’t hold a candle to human intelligence. It doesn’t hold a candle to you.


And it never will. How do we know? The answer can be stated in a single four-syllable word that needs unpacking before we can contemplate the non-computable you. That word is algorithm. If not expressible as an algorithm, a task is not computable.


Algorithms and the Computable

An algorithm is a step-by-step set of instructions to accomplish a task. A recipe for German chocolate cake is an algorithm. The list of ingredients acts as the input for the algorithm; mixing the ingredients and following the baking and icing instructions will result in a cake.


Likewise, when I give instructions to get to my house, I am offering an algorithm to follow. You are told how far to go and which direction you are to turn on what street. When Google Maps returns a route to go to your destination, it is giving you an algorithm to follow. 
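For illustration, here is a minimal sketch in Python of directions written out as an algorithm, an ordered list of steps executed one after another (the street names and distances are invented for the example, not taken from the book):

directions = [
    "Drive two miles north on Main Street",  # hypothetical streets, for illustration only
    "Turn right onto Oak Avenue",
    "Continue half a mile",
    "Turn left onto Elm Drive; the house is the third on the right",
]

def follow(steps):
    # Execute the algorithm: carry out each instruction in order.
    for number, step in enumerate(steps, start=1):
        print(f"Step {number}: {step}")

follow(directions)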


Humans are used to thinking in terms of algorithms. We make grocery lists, we go through the morning procedure of showering, hair combing, teeth brushing, and we keep a schedule of what to do today. Routine is algorithmic. Engineers algorithmically apply Newton’s laws of physics when designing highway bridges and airplanes. Construction plans captured on blueprints are part of an algorithm for building. Likewise, chemical reactions follow algorithms discovered by chemists. And all mathematical proofs are algorithmic; they follow step-by-step procedures built on the foundations of logic and axiomatic presuppositions. 


Algorithms need not be fixed; they can contain stochastic elements, such as descriptions of random events in population genetics and weather forecasting. The board game Monopoly, for example, follows a fixed set of rules, but the game unfolds through random dice throws and player decisions.
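To illustrate, here is a toy sketch in Python of a Monopoly-style move: the procedure is a fixed algorithm even though one of its steps, the dice throw, is random (the 40-square board and the single rule here are simplified stand-ins, not the full game):

import random

def monopoly_move(position, board_size=40):
    # The rule is fixed and algorithmic, but one step is stochastic:
    # roll two dice, then advance that many squares around the board.
    dice = random.randint(1, 6) + random.randint(1, 6)
    return (position + dice) % board_size

position = 0
for turn in range(3):
    position = monopoly_move(position)
    print(f"After turn {turn + 1}, the token is on square {position}")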


Here’s the key: Computers only do what they’re programmed by humans to do, and those programs are all algorithms — step-by-step procedures contributing to the performance of some task. But algorithms are limited in what they can do. That means computers, limited to following algorithmic software, are limited in what they can do.


This limitation is captured by the very word “computer.” In the world of programmers, “algorithmic” and “computable” are often used interchangeably. And since “algorithmic” and “computable” are synonyms, so are “non-computable” and “non-algorithmic.”


Basically, for computers — for artificial intelligence — there’s no other game in town. All computer programs are algorithms; anything non-algorithmic is non-computable and beyond the reach of AI.


But it’s not beyond you. 


Non-Computable You

Humans can behave and respond non-algorithmically. You do so every day. For example, you perform a non-algorithmic task when you bite into a lemon. The lemon juice squirts on your tongue and you wince at the sour flavor. 


Now, consider this: Can you fully convey your experience to a man who was born with no sense of taste or smell? No. You cannot. The goal is not a description of the lemon-biting experience, but its duplication. The lemon’s chemicals and the mechanics of the bite can be described to the man, but the true experience of the lemon taste and aroma cannot be conveyed to someone without the necessary senses.


If biting into a lemon cannot be explained to a man without all his functioning senses, it certainly can’t be duplicated in an experiential way by AI using computer software. Like the man born with no sense of taste or smell, machines do not possess qualia — experientially sensory perceptions such as pain, taste, and smell. 


Qualia are a simple example of the many human attributes that escape algorithmic description. If you can’t formulate an algorithm explaining your lemon-biting experience, you can’t write software to duplicate the experience in the computer.


Or consider another example. I broke my wrist a few years ago, and the physician in the emergency room had to set the broken bones. I’d heard beforehand that bone-setting really hurts. But hearing about pain and experiencing pain are quite different. 


To set my broken wrist, the emergency physician grabbed my hand and arm, pulled, and there was an audible crunching sound as the bones around my wrist realigned. It hurt. A lot. I envied my preteen grandson, who had been anesthetized when his broken leg was set. He slept through his pain.


Is it possible to write a computer program to duplicate — not describe, but duplicate — my pain? No. Qualia are not computable. They’re non-algorithmic.


By definition and in practice, computers function using algorithms. Logically speaking, then, the existence of the non-algorithmic suggests there are limits to what computers and therefore AI can do.

Darwinists attempt to correct God again.

 From Darwinists, a Shift in Tone on Nanomachines

William A. Dembski


I am reviewing Jason Rosenhouse’s new book, The Failures of Mathematical Anti-Evolutionism (Cambridge University Press), serially. For the full series so far, go here.


Unfortunately for Darwinists, irreducible complexity raises real doubts about Darwinism in people’s minds. Something must be done. Rising to the challenge, Darwinists are doing what must be done to control the damage. Take the bacterial flagellum, the poster child of irreducibly complex biochemical machines. Whatever biologists may have thought of its ultimate origins, they tended to regard it with awe. Harvard’s Howard Berg, who discovered that flagellar filaments rotate to propel bacteria through their watery environments, would in public lectures refer to the flagellum as “the most efficient machine in the universe.” (And yes, I realize there are many different bacteria sporting many different variants of the flagellum, including the souped-up hyperdrive magnetotactic bacteria, which swim ten times faster than E. coli — E. coli’s flagellum, however, seems to be the one most studied.)

Why “Machines”?

In 1998, writing for a special issue of Cell, the National Academy of Sciences president at the time, Bruce Alberts, remarked:


We have always underestimated cells… The entire cell can be viewed as a factory that contains an elaborate network of interlocking assembly lines, each of which is composed of a set of large protein machines… Why do we call the large protein assemblies that underlie cell function protein machines? Precisely because, like machines invented by humans to deal efficiently with the macroscopic world, these protein assemblies contain highly coordinated moving parts. [Emphasis in the original.]


A few years later, in 2003, Adam Watkins, introducing a special issue on nanomachines for BioEssays, wrote: 


The articles included in this issue demonstrate some striking parallels between artifactual and biological/molecular machines. In the first place, molecular machines, like man-made machines, perform highly specific functions. Second, the macromolecular machine complexes feature multiple parts that interact in distinct and precise ways, with defined inputs and outputs. Third, many of these machines have parts that can be used in other molecular machines (at least, with slight modification), comparable to the interchangeable parts of artificial machines. Finally, and not least, they have the cardinal attribute of machines: they all convert energy into some form of ‘work’.


Neither of these special issues offered detailed step-by-step Darwinian pathways for how these machine-like biological systems might have evolved, but they did talk up their design characteristics. I belabor these systems and the special treatment they received in these journals because none of the mystery surrounding their origin has in the intervening years been dispelled. Nonetheless, the admiration that they used to inspire has diminished. Consider the following quote about the flagellum from Beeby et al.’s 2020 article on propulsive nanomachines. Rosenhouse cites it approvingly, prefacing the quote by claiming that the flagellum is “not the handiwork of a master engineer, but is more like a cobbled-together mess of kludges” (pp. 151–152):


Many functions of the three propulsive nanomachines are precarious, over-engineered contraptions, such as the flagellar switch to filament assembly when the hook reaches a pre-determined length, requiring secretion of proteins that inhibit transcription of filament components. Other examples of absurd complexity include crude attachment of part of an ancestral ATPase for secretion gate maturation, and the assembly of flagellar filaments at their distal end. All cases are absurd, and yet it is challenging to (intelligently) imagine another solution given the tools (proteins) to hand. Indeed, absurd (or irrational) design appears a hallmark of the evolutionary process of co-option and exaptation that drove evolution of the three propulsive nanomachines, where successive steps into the adjacent possible function space cannot anticipate the subsequent adaptations and exaptations that would then become possible. 


The shift in tone from then to now is remarkable. What happened to the awe these systems used to inspire? Have investigators really learned so much in the intervening years to say, with any confidence, that these systems are indeed over-engineered? To say that something is over-engineered is to say that it could be simplified without loss of function (like a Rube Goldberg device). And what justifies that claim here? Have scientists invented simpler systems that in all potential environments perform as well as or better than the systems in question? Are they able to go into existing flagellar systems, for instance, and swap out the over-engineered parts with these more efficient (sub)systems? Have they in the intervening years gained any real insight into the step-by-step evolution of these systems? Or are they merely engaged in rhetoric to make flagellar motors seem less impressive and thus less plausibly the product of design? To pose these questions is to answer them.


A Quasi-Humean Spirit

Rosenhouse even offers a quasi-Humean anti-design argument. Humans are able to build things like automobiles, but not things like organisms. Accordingly, ascribing design to organisms is an “extravagant extrapolation” from “causes now in operation.” Rosenhouse’s punchline: “Based on our experience, or on comparisons of human engineering to the natural world, the obvious conclusion is that intelligence cannot at all do what they [i.e., ID proponents] claim it can do. Not even close. Their argument is no better than saying that since moles are seen to make molehills, mountains must be evidence for giant moles.” (p. 273) 


Seriously?! As Richard Dawkins has been wont to say, “This is a transparently feeble argument.” So, primitive humans living with stone-age technology, if they were suddenly transported to Dubai, would be unable to get up to speed and recognize design in the technologies on display there? Likewise, we, confronted with space aliens whose technologies can build organisms using ultra-advanced 3D printers, would be unable to recognize that they were building designed objects? I intend these statements as rhetorical questions whose answer is obvious. What underwrites our causal explanations is our exposure to and understanding of the types of causes now in operation, not the idiosyncrasies of their operation. Because we are designers, we can appreciate design even if we are unable to replicate the design ourselves. Lost arts are lost because we are unable to replicate the design, not because we are unable to recognize the design. Rosenhouse’s quasi-Humean anti-design argument is ridiculous.


Next, “Darwinist Turns Math Cop: Track 1 and Track 2.”


Editor’s note: This review is cross-posted with permission of the author from BillDembski.com.

Tuesday, 21 June 2022

The enemy of my enemy..?


At the house next door: No one's home?

 New Analysis Casts Doubt on Claims for Life on Venus

Evolution News @DiscoveryCSC


A new study throws cold water (vapor?) on an earlier paper that suggested that aerial life forms could exist in Venus’s massive cloud cover:


Researchers from the University of Cambridge used a combination of biochemistry and atmospheric chemistry to test the ‘life in the clouds’ hypothesis, which astronomers have speculated about for decades, and found that life cannot explain the composition of the Venusian atmosphere.


Any life form in sufficient abundance is expected to leave chemical fingerprints on a planet’s atmosphere as it consumes food and expels waste. However, the Cambridge researchers found no evidence of these fingerprints on Venus. 


UNIVERSITY OF CAMBRIDGE, “NO SIGNS (YET) OF LIFE ON VENUS” AT SCIENCE DAILY (JUNE 14, 2022). THE PAPER IS OPEN ACCESS.

The contention in the earlier paper was that chemicals present in Venus’s clouds are consistent with production by life forms.


Not a Biosignature

Although the authors of the study published last week, Sean Jordan, Oliver Shorttle, and P. B. Rimmer, say that the specifics of Venus’s atmospheric chemistry are not a biosignature (evidence of life), they stress that the atmosphere on Venus is nonetheless “strange.”

They hope that their work will assist in identifying other promising sites for extraterrestrial life:


“To understand why some planets are alive, we need to understand why other planets are dead,” said Shorttle. “If life somehow managed to sneak into the Venusian clouds, it would totally change how we search for chemical signs of life on other planets.”


“Even if ‘our’ Venus is dead, it’s possible that Venus-like planets in other systems could host life,” said Rimmer, who is also affiliated with Cambridge’s Cavendish Laboratory. “We can take what we’ve learned here and apply it to exoplanetary systems — this is just the beginning.”

They hope their method of analysis will prove a help later this year when the James Webb Space Telescope starts returning images of planets outside our solar system.


Read the rest at Mind Matters News, published by Discovery Institute’s Bradley Center for Natural and Artificial Intelligence.



Paleo Darwinism V. evolution in general?

 Jason Rosenhouse, a Crude Darwinist

William A. Dembski


I am reviewing Jason Rosenhouse’s new book, The Failures of Mathematical Anti-Evolutionism (Cambridge University Press), serially. For the full series so far, go here.


For Rosenhouse, Darwin can do no wrong and Darwin’s critics can do no right. As a fellow mathematician, I would have liked to see from Rosenhouse a vigorous and insightful discussion of my ideas, especially where there’s room for improvement, as well as some honest admission of why neo-Darwinism falls short as a compelling theory of biological evolution and why mathematical criticisms of it could at least have some traction. Instead, Rosenhouse assumes no burden of proof, treating Darwin’s theory as a slam dunk and treating all mathematical criticisms of Darwin’s theory as laughable. Indeed, he has a fondness for the word “silly,” which he uses repeatedly, and according to him mathematicians who use math to advance intelligent design are as silly as they come.


Anti-Evolutionism or Anti-Darwinism?

In using the phrase “mathematical anti-evolutionism,” Rosenhouse mistitled his book. Given its aim and arguments, it should have been titled The Failures of Mathematical Anti-Darwinism. Although design theorists exist who reject the transformationism inherent in evolutionism (I happen to be one of them), intelligent design’s beef is not with evolution per se but with the supposed naturalistic mechanisms driving evolution. And when it comes to naturalistic mechanisms driving evolution, there’s only one game in town, namely, neo-Darwinism, which I’ll refer to simply as Darwinism. In any case, my colleague Michael Behe, who also comes in for criticism from Rosenhouse, is an evolutionist. Behe accepts common descent, the universal common ancestry of all living things on planet earth. And yet Behe is not a Darwinist — he sees Darwin’s mechanism of natural selection acting on random variations as having at best very limited power to explain biological innovation. 


Reflexive Darwinism

Rosenhouse is a Darwinist, and a crude reflexive one at that. For instance, he will write: “Evolution only cares about brute survival. A successful animal is one that inserts many copies of its genes into the next generation, and one can do that while being not very bright at all.” (p. 14) By contrast, more nuanced Darwinists (like Robert Wright) will stress how Darwinian processes can enhance cooperation. Others (like Geoffrey Miller) will stress how sexual selection can put a premium on intelligence (and thus on “being bright”). But Rosenhouse’s Darwinism plays to the lowest common denominator. Throughout the book, he hammers on the primacy of natural selection and random variation, entirely omitting such factors as symbiosis, gene transfer, genetic drift, and the action of regulatory genes in development, to say nothing of self-organizational processes.


Rosenhouse’s Darwinism commits him to Darwinian gradualism: Every adaptation of organisms is the result of a gradual step-by-step evolutionary process with natural selection ensuring the avoidance of missteps along the way. Writing about the evolution of “complex biological adaptations,” he notes: “Either the adaptation can be broken down into small mutational steps or it cannot. Evolutionists say that all adaptations studied to date can be so broken down while anti-evolutionists deny this…” (p. 178) At the same time, Rosenhouse denies that adaptations ever require multiple coordinated mutational steps: “[E]volution will not move a population from point A to point B if multiple, simultaneous mutations are required. No one disagrees with this, but in practice there is no way of showing that multiple, simultaneous mutations are actually required.” (pp. 159–160) 


“Mount Improbable”

And why are multiple simultaneous mutations strictly verboten? Because they would render life’s evolution too improbable, making it effectively impossible for evolution to climb Mount Improbable (which is both a metaphor and the title of a book by Richard Dawkins). Simultaneous mutations throw a wrench in the Darwinian gearbox. If they played a significant role in evolution, Darwinian gradualism would become untenable. Accordingly, Rosenhouse maintains that such large-scale mutational changes never happen and are indemonstrable even if they do happen. Rosenhouse presents this point of view not with a compelling argument, but as an apologist intent on neutralizing intelligent design’s threat to Darwinism. 


Next, “The Silence of the Evolutionary Biologists.”


Editor’s note: This review is cross-posted with permission of the author from BillDembski.com.

It looks like technology because it is?

 Physicist Brian Miller: The Fruitful Marriage of Biology and Engineering

David Klinghoffer


Discovery Institute physicist Brian Miller spoke at the recent Dallas Conference on Science and Faith. His theme was “The Surprising Relevance of Engineering in Biology.” 


Afterward, moderated by John West, he took some very thoughtful questions from the audience. Miller notes the fruitful marriage of biology and engineering, as in, for example, the study of control systems: “What you find is parallel research: that biologists are understanding these systems, engineers independently discover these systems, and when they work together they’re looking at the overlap. So, what’s happening now is engineers are learning from biology to do engineering better.” If biology isn’t designed, which is another way of saying “engineered,” wouldn’t this state of affairs be pretty counterintuitive? Enjoy the rest of the Q&A with Dr. Miller:

[Video: “Brian Miller Answers Questions about the Relevance of Engineering to Biology,” https://www.youtube.com/embed/TH4Woh9S1ig]

A peacemaker between mathematics and Darwinism?

 The Challenge from Jason Rosenhouse

William A. Dembski


I am reviewing Jason Rosenhouse’s new book, The Failures of Mathematical Anti-Evolutionism (Cambridge University Press), serially. For the full series so far, go here.


To show readers that he means business and that he is a bold, brave thinker, Rosenhouse lays down the gauntlet: “Anti-evolutionists play well in front of friendly audiences because in that environment the speakers never pay the price of being wrong. The response would be a lot chillier if they tried the same arguments in front of audiences with the relevant expertise. Try telling a roomful of mathematicians that you can refute evolutionary theory with a few back-of-the-envelope probability calculations, and see how far you get.” (Epilogue, pp. 270–271)


I’m happy to take up Rosenhouse’s gauntlet. In fact, I already have. I’ve presented my ideas and arguments to roomfuls of not just mathematicians but also biologists and the whole range of scientists on whose disciplines my work impinges. A case in point is a 2014 talk I gave on conservation of information at the University of Chicago, a talk sponsored by my old physics advisor Leo Kadanoff. The entire talk, including Q&A, is available on YouTube:

In such talks, I present quite a bit more detail than a mere back-of-the-envelope probability calculation, though full details, in a single talk (as opposed to a multi-week seminar), require referring listeners to my work in the peer-reviewed literature (none of which Rosenhouse cites in his book). 


My Challenge to Jason Rosenhouse

If I receive a chilly reception in giving such talks, it’s not for any lack of merit in my ideas or work. Rather, it’s the prejudicial contempt evident in Rosenhouse’s challenge above, which is widely shared among Darwinists, who are widespread in the academy. For instance, Rosenhouse’s comrade in arms, evolutionary biologist Jerry Coyne, who is at the University of Chicago, tried to harass Leo into canceling my 2014 talk, but Leo was not a guy to be intimidated — the talk proceeded as planned (Leo sent me copies of the barrage of emails he received from Coyne to persuade him to uninvite me). For the record, I’m happy to debate Rosenhouse, or any mathematicians, engineers, biologists, or whatever, who think they can refute my work. 


Next, “Jason Rosenhouse, a Crude Darwinist.”


Editor’s note: This review is cross-posted with permission of the author from BillDembski.com.

Wednesday, 15 June 2022

Sacrifice without cost?

1Chronicles21:24KJV" And king David said to Ornan, Nay; but I will verily buy it for the full price: for I will not take that which is thine for the LORD, nor offer burnt offerings without cost."


King David realized that a cost-free sacrifice is in effect no sacrifice at all. Yet is this not the effect that Christendom's theology, re: Christ being the God-man and unconditional immortality, has on the supposed atonement? Christendom's reductive spiritualism has the effect of rendering the physical body (gk.soma) worse than useless, a prison of rotting flesh that anchors our "real selves" to the ground during our probation on this earth. Surely being liberated from any prison is a blessing and not a sacrifice.


Matthew20:28KJV"Even as the Son of man came not to be ministered unto, but to minister, and to give his life(gk.psyche) a ransom for many." 


Obviously if Christ's real self (soul) was immortal, or if he was the God-man, or both, he could not give his soul as a ransom. The mere liberation of his true self from its prison of flesh would constitute no genuine sacrifice. For Christ's atonement offering to be genuinely substitutionary, his death would have to be identical in nature to that of the first Adam.


1Corinthians15:21KJV"For since by man came death, by man came also the resurrection of the dead."


And as to the nature of the first Adam's death, let's not speculate, but let JEHOVAH'S word be the authority.


Genesis3:19KJV"In the sweat of thy face shalt thou eat bread, till thou RETURN unto the ground; for out of it wast thou taken: for dust thou art, and unto dust shalt thou RETURN."


Thus Adam was to RETURN to his pre-creation state. That is what death meant to Adam. For the second Adam to serve as a genuine substitute for the first and thus effect an atonement, his death MUST have the same significance. 

Why we must give the members of Christendom's trinity a 'fail' in Godhood.

What is meant by the expression "fully God"? The Bible tells us that there is just one who is autotheos and thus entitled to absolute worship.

1Corinthians8:6NIV"yet for us there is but one God, the Father, from whom all things came..."

John17:3KJV"And this is life eternal, that they might know thee the only true God,..."

Biblical theology tells us that there are four qualities that set the Lord JEHOVAH apart as uniquely qualified to receive absolute worship.

1. He is both necessary and sufficient as the source and sustainer of life and everything required for its flourishing.

2. He is superlative in authority, being without equal or even near-equal.

3. He is totally immutable.

4. He is omnipotent/omniscient.

Can any member of Christendom's trinity thus be considered fully God in any meaningful sense?

Obviously no member of Christendom's triad can be both necessary and sufficient as a first cause: if any of the three is sufficient as a first cause, the other two are made unnecessary; and if all three are necessary, none is sufficient.

As per the dictionary definition of superlative, one can be either superlative or coequal but not both; thus none of Christendom's triad would qualify as superlative.

Malachi3:6ASV"For I, Jehovah, change not; therefore ye, O sons of Jacob, are not consumed."

According to Christendom, JEHOVAH'S plain declaration that he is not subject to even the least change actually means that he is subject to infinite change, such that he could become a creature subject to death. We reject the fantastic leaps of logic and mental contortions needed to concur with such nonsense. Thus here too, the members of Christendom's triad fail the test of Godhood as determined by Scripture.

Genesis17:1ASV"And when Abram was ninety years old and nine, Jehovah appeared to Abram, and said unto him, I am God Almighty; walk before me, and be thou perfect."

The declaration that JEHOVAH is the almighty God does not merely suggest that the Lord JEHOVAH is mightier than any other but that he is mightier than all others combined. Indeed, he is a bottomless reservoir of potential energy.

Isaiah40:28ASV"Hast thou not known? hast thou not heard? The everlasting God, JEHOVAH, the Creator of the ends of the earth, fainteth not, neither is weary; there is no searching of his understanding."

If there are in fact two (or is it three?) others as mighty as one, then one is clearly not the mightiest; thus we are forced to give the members of Christendom's triad another fail in the test of true Godhood.

The 'R' word?


Yet more evidence that I.D is already mainstream.

 Carl Sagan: “An Intelligence That Antedates the Universe”

Paul Nelson

The late astronomer and science popularizer Carl Sagan (1934-1996) is often seen as an exemplar of a certain attitude on the relationship of science and theology: skeptical, anti-religion, pro-naturalism. Abundant evidence supports this view of Sagan, but there are fascinating hints in both his technical and popular writings that Sagan’s understanding of design detection was far subtler and more open-ended than many realize. Like his British contemporary, the astronomer Fred Hoyle (1915-2001), Sagan left evidence that he might well have enjoyed conversations with intelligent design theorists. Such historical counterfactuals are tricky at best, of course, so let’s look at some of the available evidence, and the reader can speculate on her own.


Design Detection in the Galileo Mission

As a scientist on the Galileo interplanetary mission, Sagan designed experiments to be carried on the spacecraft to detect — as a proof-of-principle — the presence of life, but especially intelligent life, on Earth. During Galileo’s December 1990 fly-by of Earth, as the craft was getting a gravitational boost on its way out to the gas giants of the outer Solar System, its instruments indeed detected striking chemical disequilibria in Earth’s atmosphere, best explained by the presence of organisms.


But it was Galileo’s detection of “narrow-band, pulsed, amplitude-modulated radio transmissions” that seized the brass ring of design detection — where “design” means a pattern or event caused by an intelligence (with a mind), not a physical or chemical process. Sagan and colleagues (1993: 720) wrote:


The fact that the central frequencies of these signals remain constant over periods of hours strongly suggests an artificial origin. Naturally generated radio emissions almost always display significant long-term frequency drifts. Even more definitive is the existence of pulse-like amplitude modulations…such modulation patterns are never observed for naturally occurring radio emissions and implies the transmission of information. [Emphasis added.]


Only someone who conceived of “intelligence” as a kind of cause with unique and detectable indicia would bother setting up this proof-of-principle experiment. But it’s the evidence from Sagan’s popular writings that is especially provocative.
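The logic of that proof-of-principle experiment is simple enough to caricature in a few lines of code. The sketch below is purely illustrative, not the Galileo team's actual analysis pipeline; the function name, thresholds, and sample readings are my own assumptions. It simply encodes the two criteria quoted above: a central frequency that holds steady over the observation, and pulse-like amplitude modulation.

```python
# Toy encoding of the two criteria quoted above (not the Galileo pipeline;
# the thresholds and readings below are illustrative assumptions).

def looks_artificial(freq_samples_hz, amplitude_samples,
                     max_drift_hz=1.0, min_modulation_depth=0.5):
    """Flag a signal as candidate-artificial if (1) its central frequency
    stays essentially constant over the observation and (2) its amplitude
    shows strong pulse-like (on/off) modulation."""
    drift = max(freq_samples_hz) - min(freq_samples_hz)
    stable_frequency = drift <= max_drift_hz

    lo, hi = min(amplitude_samples), max(amplitude_samples)
    pulsed = hi > 0 and (hi - lo) / hi >= min_modulation_depth

    return stable_frequency and pulsed


# Hypothetical readings: a narrow-band carrier switching on and off over hours.
freqs = [1420000000.0, 1420000000.5, 1420000000.0]   # Hz: only 0.5 Hz of drift
amps = [1.0, 0.0, 1.0, 0.0, 1.0]                      # pulse-like on/off pattern

print(looks_artificial(freqs, amps))   # True for this contrived input
```

A naturally generated emission, by contrast, would typically drift in frequency and lack the on/off structure, and a check like the one above would return False.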


Design Detection in Sagan’s Novel Contact

The last chapter (24) of Sagan’s novel Contact (1985; later made into a film [1997] starring Jodie Foster) is an unmistakable example of number mysticism and design detection, using pi — the mathematical constant and irrational number expressing the ratio between the circumference of any circle and its diameter. Entitled “The Artist’s Signature,” the chapter opens with two epigraphs, as follows:


Behold, I tell you a mystery; we shall not all sleep, but we shall all be changed. 


1 COR. 15:51

The universe seems…to have been determined and ordered in accordance with the creator of all things; for the pattern was fixed, like a preliminary sketch, by the determination of number pre-existent in the mind of the world-creating God.


NICOMACHUS OF GERASA, ARITHMETIC I, 6 (CA. AD 100)

This passage, from the very end of the chapter — and the book — bears quoting. Sagan places the whole section in italics for emphasis:


The universe was made on purpose, the circle said…As long as you live in this universe, and have a modest talent for mathematics, sooner or later you’ll find it. It’s already here. It’s inside everything. You don’t have to leave your planet to find it. In the fabric of space and the nature of matter, as in a great work of art, there is, written small, the artist’s signature. Standing over humans, gods, and demons, subsuming Caretakers and Tunnel builders, there is an intelligence that antedates the universe. [Emphasis added.]


Design’s Narrative Power

Of course, Contact is a novel, not a scientific or philosophical treatise. Sagan was writing for drama (Contact actually started out as a movie treatment in 1980-81). But rather like his contemporaries Arthur C. Clarke and Stanley Kubrick, Sagan loved to play around with concepts of design detection and non-human intelligence. Their narrative power was undeniable.


And that sentence — “there is an intelligence that antedates the universe” — come on, that’s being deliberately provocative. In any case, mathematical objects such as pi, or prime numbers, have long held a special status as design indicia. The atheist radio astronomer and SETI researcher Jill Tarter, the real-life model for the Ellie Arroway / Jodie Foster character in Contact, has said that she would regard the decimal expansion of pi, if detected by a radio telescope, as a gold-standard indicator of extraterrestrial intelligence.
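Tarter's thought experiment is easy to mock up. The few lines below are my own toy illustration, not a SETI algorithm, and the "received" digit strings are invented; the point is only that pi's decimal expansion is a sharply checkable pattern.

```python
# Toy check of Tarter's "gold standard": does an incoming digit stream
# reproduce the decimal expansion of pi? (My own sketch; the "received"
# digits below are invented for illustration.)

PI_DIGITS = "314159265358979323846264338327950288419716939937510"  # 3 plus first 50 decimals

def matches_pi(received, min_digits=20):
    """Return True if the received digit string reproduces at least
    min_digits leading decimal digits of pi (including the leading 3)."""
    return len(received) >= min_digits and PI_DIGITS.startswith(received[:min_digits])

print(matches_pi("31415926535897932384"))   # True  - first 20 digits of pi
print(matches_pi("31415926535897932999"))   # False - wanders off after 17 digits
```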


Sagan and Intelligent Design

In 1985, when Contact was first published, intelligent design as an intellectual position was largely confined to the edges of academic philosophy, in the work of people such as the Canadian philosopher John Leslie, and a few hardy souls in the neighborhood of books like Thaxton, Bradley, and Olsen, The Mystery of Life’s Origin (1984).


So Sagan (and Fred Hoyle, whose sci-fi novel The Black Cloud was credited by Richard Dawkins as the book having the greatest influence on him; the story opens with a design inference) could afford to play with notions of design detection, non-human intelligences, and the like. These ideas, which are exciting and full of fascinating implications, posed little risk to the dominance of naturalism in science. Detecting non-human intelligence made for good sci-fi.


When ID appeared to become a real cultural threat, however — as it did starting in the mid-1990s in the United States — the dynamic shifted. Still, while Sagan was anti-religious, he was decidedly not anti-design, in the generic sense of the detectability of intelligent causation as a mode distinct from ordinary physical causation. In any case, he died in 1996, and therefore missed the coming high points of the ID debate. Others took up the skeptical mantle, to make sure that design never found a footing in science proper.


As boundary-pushers, both Sagan and Hoyle caught plenty of flak during their lifetimes. Sagan, for instance, was never elected to the National Academy of Sciences. Both paid a price for their popularity and willingness to write novels toying with non-human intelligences. It is interesting, then, to wonder how Sagan would have responded to ID, as articulated by Michael Behe, William Dembski, Stephen Meyer, etc., and how he might have separated his own views from it.


Historical counterfactuals are a playground. Play fairly, and share the equipment.


Molecular clocks to the rescue?

 Molecular Clocks Can’t Save Darwinists from the Cambrian Dilemma

David Coppedge

Explaining away the Cambrian explosion has been, and remains, a high priority for Darwinists. Current Biology published one such attempt. On reading certain parts, you might think the authors, including Maximilian Telford, Philip Donoghue, and Ziheng Yang, have solved the problem. Indeed, their first Highlight in the paper summary claims, “Molecular clock analysis indicates an ancient origin of animals in the Cryogenian.” (Cryogenian refers to the Precambrian “cold birth” era about 720 to 635 million years ago.) By itself that statement would be misleading, because the title of the open-access paper is pessimistic: “Uncertainty in the Timing of Origin of Animals and the Limits of Precision in Molecular Timescales.”


Yang appeared briefly in Stephen Meyer’s book Darwin’s Doubt with bad news. Meyer cited a paper Yang co-authored with Aris-Brosou in 2011 showing that molecular clock analyses are unreliable. They “found that depending on which genes and which estimation methods were employed, the last common ancestor of protostomes or deuterostomes (two broadly different types of Cambrian animals) might have lived anywhere between 452 million years and 2 billion years ago” (Meyer, p. 106). 


Nothing has changed since then. The bottom line, after a lot of wrangling with numbers, strategies, and analyses, is that all current methods of dating the ancestors of the Cambrian animals from molecular clocks are imprecise and uncertain. They cannot be trusted to defuse the explosion by rooting the animal ancestors earlier in the Precambrian.


Although a Cryogenian origin of crown Metazoa agrees with current geological interpretations, the divergence dates of the bilaterians remain controversial. Thus, attempts to build evolutionary narratives of early animal evolution based on molecular clock timescales appear to be premature. [Emphasis added.]


Check Out the Euphemisms

Translated into plain English, that means, “We can’t tell our favorite evolutionary story because the clock is broken, but we’re working on it.”


In the paper, they provide an analysis of molecular clock data. It’s clear they believe that all the data place the root of the divergence in the Ediacaran or earlier, 100 million years or more before the Cambrian, but can they really defend their belief? They have to admit severe empirical limits:


Here we use an unprecedented amount of molecular data, combined with four fossil calibration strategies (reflecting disparate and controversial interpretations of the metazoan fossil record) to obtain Bayesian estimates of metazoan divergence times. Our results indicate that the uncertain nature of ancient fossils and violations of the molecular clock impose a limit on the precision that can be achieved in estimates of ancient molecular timescales.


Perhaps, a defender might interrupt, the precision, admittedly limited, is good enough. But then, there are those pesky fossils! The molecular clocks are fuzzily in agreement about ancestors in the Precambrian, but none of them has support from the very best observational evidence: the record of the rocks. Even the phyla claimed to exist before the explosion are contested:


Unequivocal fossil evidence of animals is limited to the Phanerozoic [i.e., the modern eon from Cambrian to recent, where animals are plentiful]. Older records of animals are controversial: organic biomarkers indicative of demosponges are apparently derived ultimately from now symbiotic bacteria; putative animal embryo fossils are alternately interpreted as protists; and contested reports of sponges, molluscs, and innumerable cnidarians, as well as putative traces of eumetazoan or bilaterian grade animals, all from the Ediacaran. Certainly, there are no unequivocal records of crown-group bilaterians prior to the Cambrian, and robust evidence for bilaterian phyla does not occur until some 20 million years into the Cambrian.


This severely limits their ability to “calibrate” the molecular clock. Meyer granted the possible existence of three Precambrian phyla (sponges, molluscs, and cnidarians). But there are twenty other phyla that make their first appearance in the Cambrian, many of them far more complex than sponges. What good are the molecular methods if you can’t see any of the ancestors in the rocks?


Missing Ancestors

The authors admit that the Precambrian strata were capable of preserving the ancestors if they existed. 


No matter how imprecise, our timescale for metazoan diversification still indicates a mismatch between the fossil evidence used to calibrate the molecular clock analyses and the resulting divergence time estimates. This is not altogether surprising since, by definition, minimum constraints of clade ages anticipate their antiquity. Nevertheless, it is the extent of this prehistory that is surprising, particularly since the conditions required for exceptional fossil preservation, so key to evidencing the existence of animal phyla in the early Cambrian, obtained also in the Ediacaran.


The only way they can maintain their belief that the ancestors are way back earlier is to discount the fossil evidence as “negative evidence” and to put their trust in the molecular evidence. But how can they trust it, when the answers vary all over the place, depending on the methods used? One clever method is called “rate variation.” Would you trust a clock that has a variable rate? How about one fast-ticking clock for one animal, and a slow-ticking clock for another? 


When rate variation across a phylogeny is extreme (that is, when the molecular clock is seriously violated), the rates calculated on one part of the phylogeny will serve as a poor proxy for estimating divergence times in other parts of the tree. In such instances, divergence time estimation is challenging and the analysis becomes sensitive to the rate model used.
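To see why rate variation is so corrosive, consider the arithmetic of the simplest possible (strict) molecular clock: a genetic distance d separating two lineages implies a divergence time of roughly t = d / (2r), where r is the substitution rate along each lineage. The sketch below, with made-up numbers, is only meant to illustrate that sensitivity; it is not the Bayesian relaxed-clock machinery the authors actually use.

```python
# Strict-clock arithmetic with illustrative numbers (not the paper's
# Bayesian relaxed-clock analysis). Distance d accumulates along both
# lineages, so divergence time t = d / (2 * r).

def divergence_time_mya(distance, rate_per_myr):
    """Strict-clock estimate of divergence time, in millions of years,
    from a pairwise genetic distance and a per-lineage substitution rate."""
    return distance / (2.0 * rate_per_myr)

d = 0.6  # hypothetical substitutions per site separating two animal lineages

for r in (2.5e-4, 5.0e-4, 1.0e-3):   # assumed rates, substitutions/site/Myr
    print(f"rate {r:.1e}  ->  divergence ~{divergence_time_mya(d, r):,.0f} Ma")

# rate 2.5e-04  ->  divergence ~1,200 Ma
# rate 5.0e-04  ->  divergence ~600 Ma
# rate 1.0e-03  ->  divergence ~300 Ma
```

Halve the assumed rate and the inferred date doubles; let the rate vary across the tree and the answer depends on which branches you choose to trust. That is exactly the kind of spread, from hundreds of millions to billions of years, noted above.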


They try their trees with steady rates and with varying rates (“relaxed clock models” — amusing term). They try data partitioning. They try Bayesian analysis. None of them agree. Meyer discussed molecular clock problems in detail in Chapter 5 of Darwin’s Doubt. There’s nothing new here. “Here we show that the precision of molecular clock estimates of times has been grossly over-estimated,” they conclude. “….An evolutionary timescale for metazoan diversification that accommodates these uncertainties has precision that is insufficient to discriminate among causal hypotheses.” In the end, these evolutionists have to admit that fossils would be much, much better:


Above all, establishing unequivocal evidence for the presence of metazoan clades in the late Neoproterozoic, as well as for the absence in more ancient strata, will probably have more impact than any methodological advance in improving the accuracy and precision of divergence time estimates for deep metazoan phylogeny. Realizing the aim of a timescale of early animal evolution that is not merely accurate, but sufficiently precise to effect tests of hypotheses on the causes and consequences of early animal evolution, will require improved models of trait evolution and improved algorithms to allow analysis of genome-scale sequence data in tandem with morphological characters.


Wait a Minute

Isn’t that what Darwin provided — a model of trait evolution? Wasn’t it natural selection of gradual variations? Let’s parse this interesting quote that mentions Darwin:


The timing of the emergence of animals has troubled evolutionary biologists at least since Darwin, who was sufficiently incredulous that he considered the abrupt appearance of animal fossils in the Cambrian as a challenge to his theory of evolution by natural selection. There has been, as a result, a long history of attempts to rationalize a rapid radiation of animals through theories of non-uniform evolutionary processes, such as homeotic mutations, removal of environmental restrictions on larger body sizes, through to the assembly of gene regulation kernels — proposed both as an explanation for rapid rates of innovation followed by subsequent constraint against fundamental innovation of new body plans after the Cambrian. Indeed, there have been explicit attempts to accommodate rapid rates of phenotypic evolution in the early Cambrian, compatible with these hypotheses and a semi-literal (albeit phylogenetically constrained) reading of the fossil record.


And yet our results, as have others before them, suggest that there is no justification for invoking non-uniform mechanisms to explain the emergence of animals and their phylum-level body plans.


That phrase “semi-literal (albeit phylogenetically constrained) reading of the fossil record” is curious. How else are you supposed to read it? They are saying that you have to read the fossil record with Darwin-colored glasses to see it correctly. 


But they’re trying to have it both ways. They want a slow-and-gradual fuse leading up to the Cambrian explosion (disliking “non-uniform evolutionary processes”), which requires a non-literal reading of the fossil record with Darwin glasses on, but they can’t take the molecular data literally either, because it is so method-dependent. You can almost hear them crying out for fossils. As Meyer’s book shows, the fossil record is more explosive now than it was in Darwin’s time.


The Information Enigma Again

Notice how they mention “the emergence of animals and their phylum-level body plans.” How do you get the information to build a phylum-level body plan? Once again, these authors ignore the information issue completely. They say, “Much of the molecular genetic toolkit required for animal development originated deep in eukaryote evolutionary history,” skirting past that with a lateral reference to a paper about a microbe that had no animal body plan. Talk of “emergence” just doesn’t cut it. What is the source of the information to build an animal body plan composed of multiple new cell types and tissues, with 3-D organization and integrated systems like sensory organs, locomotion, and digestive tracts? Is there an evolutionist who will please answer Meyer’s primary challenge? 

As we’ve seen over and over again, many Darwinian evolutionists think they have done their job if they can just push the ancestry back in time. The fossil record doesn’t allow it, but even if it did, it wouldn’t solve the information problem. Calling it “emergence” is unsatisfactory. Calling it “innovation” is unsatisfactory. Calling it latent potential waiting for environmental factors like heat or oxygen is unsatisfactory. Answer the question: what is the source of the information to build twenty new animal body plans that appeared suddenly in the Cambrian without ancestors? We have an answer: intelligence. What’s yours?

Reductive materialism fails to account for mind.

 Can Self-Organization Theory Account for Consciousness?

Evolution News @DiscoveryCSC

Cognitive neuroscientist Bobby Azarian, author of The Romance of Reality: How the Universe Organizes Itself to Create Life, Consciousness, and Cosmic Complexity (2022), offers a self-organization theory approach to the reality of the mind:


Most neuroscientists believe that consciousness arises when harmonized global activity emerges from the coordinated interactions of billions of neurons. This is because the synchronized firing of brain cells integrates information from multiple processing streams into a unified field of experience. This global activity is made possible by loops in the form of feedback. When feedback is present in a system, it means there is some form of self-reference at work, and in nervous systems, it can be a sign of self-modeling. Feedback loops running from one brain region to another integrate information and bind features into a cohesive perceptual landscape.


When does the light of subjective experience go out? When the feedback loops cease, because it is these loops that harmonize neural activity and bring about the global integration of information. When feedback is disrupted, the brain still keeps on ticking, functioning physiologically and controlling involuntary functions, but consciousness dissolves. The mental model is still embedded in the brain’s architecture, but the observer fades as the self-referential process of real-time self-modeling ceases to produce a “self.” 


BOBBY AZARIAN, “THE MIND IS MORE THAN A MACHINE” AT NOEMA (JUNE 9, 2022)

One difficulty that arises is that many human beings produce a “self” with split brains, a brain missing key components, or only half a brain (or maybe less). That’s real but not consistent with the materialist model that Azarian outlines.


“The Missing Puzzle Piece”?

He goes on to say,


Could self-reference be the missing puzzle piece that allows for truly intelligent AIs, and maybe even someday sentient machines? Only time will tell, but Simon DeDeo, a complexity scientist at Carnegie Mellon University and the Santa Fe Institute, seems to think so: “Great progress in physics came from taking relativity seriously. We ought to expect something similar here: Success in the project of general artificial intelligence may require we take seriously the relativity implied by self-reference.”


BOBBY AZARIAN, “THE MIND IS MORE THAN A MACHINE” AT NOEMA (JUNE 9, 2022)

But wait. What’s this about “self”-reference? Machines, as we know them, don’t have a self.


Read the rest at Mind Matters News, published by Discovery Institute’s Bradley Center for Natural and Artificial Intelligence.