
Wednesday 22 June 2022

Darwinism's deafening silence on a plausible path to new organs.

 The Silence of the Evolutionary Biologists

William A. Dembski

I am reviewing Jason Rosenhouse’s new book, The Failures of Mathematical Anti-Evolutionism (Cambridge University Press), serially. For the full series so far, go here.


The Darwinian community has been strikingly unsuccessful in showing how complex biological adaptations evolved, or even how they might have evolved, in terms of detailed step-by-step pathways between different structures performing different functions (pathways that must exist if Darwinian evolution holds). Jason Rosenhouse admits the problem when he says that Darwinians lack “direct evidence” of evolution and must instead depend on “circumstantial evidence.” (pp. 47–48) He elaborates: “As compelling as the circumstantial evidence for evolution is, it would be better to have direct experimental confirmation. Sadly, that is impossible. We have only the one run of evolution on this planet to study, and most of the really cool stuff happened long ago.” (p. 208) How very convenient. 


Design theorists see the lack of direct evidence for Darwinian processes creating all that “cool stuff” — in the ancient past no less — as a problem for Darwinism. Moreover, they are unimpressed with the circumstantial evidence that convinces Darwinists that Darwin got it right. Rosenhouse, for instance, smugly informs his readers that “eye evolution is no longer considered to be especially mysterious.” (p. 54) Yet the human eye, and the visual cortex with which it is integrated, are not even remotely well enough understood to underwrite a realistic model of how the human eye might have evolved. The details of eye evolution, if such details even exist, remain utterly mysterious.


A Crude Similarity Metric

Instead, Rosenhouse does the only thing that Darwinists can do when confronted with the eye: point out that eyes of many different complexities exist in nature, relate them according to some crude similarity metric (whether structurally or genetically), and then simply posit that gradual step-by-step evolutionary paths connecting them exist (perhaps by drawing arrows to connect similar eyes). Sure, Darwinists can produce endearing computer models of eye evolution (what two virtual objects can’t be made to evolve into each other on a computer?). And they can look for homologous genes and proteins among differing eyes (big surprise that similar structures may use similar proteins). But eyes have to be built in embryological development, and eyes evolving by Darwinian means need a step-by-step path to get from one to the other. No such details are ever forthcoming. Credulity is the sin of Darwinists.


Intelligent design’s scientific program can thus, at least in part, be viewed as an attempt to unmask Darwinist credulity. The task, accordingly, is to find complex biological systems that convincingly resist a gradual step-by-step evolution. Alternatively, it is to find systems that strongly implicate evolutionary discontinuity with respect to the Darwinian mechanism because their evolution can be seen to require multiple coordinated mutations that cannot be reduced to small mutational steps. Michael Behe’s irreducibly complex molecular machines, such as the bacterial flagellum, described in his 1996 book Darwin’s Black Box, provided a rich set of examples for such evolutionary discontinuity. By definition, a system is irreducibly complex if removing any of its core components causes it to lose its original function.


No Plausible Pathways

Interestingly, in the two and a half decades since Behe published that book, no convincing, or even plausible, detailed Darwinian pathways have been put forward to explain the evolution of these irreducibly complex systems. The silence of evolutionary biologists in laying out such pathways is complete. Which is not to say that they are silent on this topic. Darwinian biologists continue to proclaim that irreducibly complex biochemical systems like the bacterial flagellum have evolved and that intelligent design is wrong to regard them as designed. But such talk lacks scientific substance.


Next, “From Darwinists, a Shift in Tone on Nanomachines.”


Editor’s note: This review is cross-posted with permission of the author from BillDembski.com.

For Darwinism, humor is no laughing matter.

 There’s Nothing Funny About Evolution

Geoffrey Simmons


Much like the genetic blueprints given to each of us at conception (blueprints for pumping blood, exchanging carbon dioxide for oxygen, digesting food, eliminating waste, and retaining memories), we come with a built-in sense of humor. Could our sense of humor have evolved, meaning could it have come about by millions of tiny, modifying, successive steps over millions of years? Or did it arrive in one lump sum, by design? There are good reasons to suspect the latter. But first, some background musings.


For one thing, genetic studies suggest that folks with a better sense of humor have a shorter allele of 5-HTTLPR, a variable region of the serotonin transporter gene. In addition, we know there are many physiological benefits to laughter. Oxygenation is increased, cardiac function is improved, stress hormones such as cortisol and adrenaline are reduced, the immune system is charged up, and the dopaminergic system, which fights depression, is strengthened.


Norman Cousins, a past adjunct professor at UCLA, wrote in his book Anatomy of an Illness as Perceived by the Patient, and in an article in The New England Journal of Medicine, about how he lowered his pain from ankylosing spondylitis from a 10 to a 2. Ten minutes of laughter gave him two hours of pain-free sleep. Much of this laughter came from watching TV. Nowadays, if one is over 13 years old, one might need to find a different medium.


We’re told that laughing 100 times is equal to 10 minutes on a rowing machine or 15 minutes on an exercise bike. Perhaps one could frequent a comedy club nightly and skip those painful, daily exercises. Humor helps us when times are stressful, when we’re courting, and when we’re depressed. Students enjoy their teachers, pay more attention, and remember more information when humor is added to classroom instruction. Humor promotes better bonding between student and teacher, and between most couples. It also helps with hostage negotiations.


A Darwinian Scenario

If our sense of humor came about by tiny steps, like other functions, as proposed by Charles Darwin, scientists have yet to find proof of it. Think of it: can hearing the beginning words of a joke even be funny? Is there any benefit to survival in one-word jokes that eventually become two- and three-word jokes? I doubt it, but that’s just my personal opinion.


Fish talk by means of gestures, electrical impulses, bioluminescence, and sounds like hard-to-hear purrs, croaks, and pops. But did they (or could they) bring their jokes ashore millions of years ago? Of course, there’s no evidence of that. Yet? Just maybe one might envision the fish remaining in the water teasing the more adventuresome fish about their oohs and aahs, issued while walking across burning-hot sands.


Tickling a Rat

Laughing while being tickled is not the same as having a sense of humor. The response to someone reaching into one’s armpit is a neurological and physiological reaction to being touched. For some, tickling is torture. I had one rather serious female patient, who, when undressed and covered with a sheet, was ticklish from her neck to her toes. She was nearly impossible to examine. Sometimes she would start laughing as I approached her.


One can tickle a rat and, given the right equipment, record odd utterances that might be laughter. But they might easily be profanity. Some say one can tickle a stingray, but others say the animal is suffocating. Attempts to tickle a crocodile and other wild animals have not been conducted, as far as I’m aware, in any depth. Also, such attempts are not recommended.


Laughing is clearly part of the human package, part of our design. As I see it, there can be only two possible origins: humor evolved very, very slowly, or it came about more quickly by intelligent design. Negative feedback loops might argue against the slow development. Some fringe thinkers might speculate that extraterrestrials passed on their sense of humor to us, millions of years ago, but, if so, jokes about the folks in the Andromeda galaxy are on a different wavelength. Jokes about Uranus, of course, are local.


Sorry About that Last One, Folks

A sense of humor varies from person to person, much like height, weight, and abdominal girth. Plus, there are gender differences. Women like men who make them laugh; men like women who laugh at their jokes. Comedians say a sense of humor is a mating signal indicating high intelligence. People on Internet dating sites often ask each other about their sense of humor. Of course, we all have great senses of humor. Just ask anyone.


A sense of humor is often highly valued. Couples get along better when they have similar senses of humor. Mutation is more likely to ruin a good joke than help it. A serious mutation might take out the entire punchline. Jokes about a partner’s looks or clothes are to be avoided. They might lead to domestic abuse. Happy tears are chemically different from sad tears. Both are different from the tears that cleanse the eye with each blink or react to infections. Can anyone explain that? Could specific tears have come about by accident?


We know laughing is a normal human activity. Some days are better than others. Human babies often smile and giggle before they are two months old, years before they will understand a good riddle. Deaf and blind babies smile and giggle at virtually the same age. Is that present to make them more lovable? Children laugh up to 400 times a day, adults only 15. This could mean we need to hear many more jokes on a daily basis.


What Humor Means

 We all think we know what humor means, but because it can vary among people, we really don’t. An amusing joke told man-to-man might be a nasty joke if told man-to-woman. Or, the other way around. Humor tends to be intangible. It’s somewhat like certain foods tasting good to you, but maybe not to me. Too salty versus needs more salt? Or sweetener? I once told my medical partner that my wife and I had just seen the funniest movie we had ever seen. He and his wife went out that very night to see it and didn’t find anything in it funny. Nothing at all! Not even the funniest scene I have ever seen in a movie. Go figure. 


What does having a good sense of humor mean? Might it be reciting a lot of relevant jokes from a repository, making up funny quips during conversations, or laughing a lot at most anything except someone else’s pain? Or a mix?


There’s a laughter-like sound that is made by chimps, bonobos, and gorillas while playing. But does it mean there’s a sense of humor at work, or ape profanity? They might be calling each other bad names. Octopuses play but don’t smile or laugh, we think. Dolphins “giggle” using different combinations of whistles and clicks. It does seem like they are laughing at times, but nobody knows for sure. Maybe it’s just a case of anthropomorphizing. The dolphin family has been around approximately 11 million years, and the area of their brain that processes language is much larger than ours. They’ve had plenty of time to come up with several good ones.


Koko the Humorous Gorilla

Perhaps the most interesting case was Koko the gorilla, who was taught to sign. She died in 2018, at the age of 46. Her vocabulary included at least 1,000 signs, and she understood another 2,000 spoken words. Some say she was a jokester. She loved Robin Williams. Maybe adored him. The two would play together for hours. Koko seemed to make up jokes. She once tore the sink out of the wall in her cage; when asked about it, she signed that her pet cat did it. However, the cat wasn’t tall enough.


So I ask again: could a sense of humor have come about by numerous, successive, slight modifications, a Darwinian requirement? If humor fails that test, might humor be the elusive coup de grâce for naturalism? Since irreducible complexity, specified complexity, and topoisomerases haven’t landed the KO to Darwin’s weakening theories, might the answer be as simple as laughing at them?


If a sense of humor were just a variation on tickling, my guess is that comedians would come off the stage or hire teenagers to walk among their audiences to tickle everyone. Imagine being dressed up for the night, maybe eating a fancy meal or drinking expensive champagne, and some grubby kid, who’s paid minimum wage, is reaching into your armpits.


Why Laugh at All? 

Is a sense of humor a byproduct, an accident, or was it installed on purpose? For better health? There definitely seems to be a purpose. Could it be a coping mechanism? Is it the way to meet the right mate? Surely, that must be part of it.


The only evolution-related quip I could think of sums up this discussion rather well:


A little girl asked her mother, “How did the human race come about?”


The mother answered, “God made Adam and Eve. They had children, and so all mankind was made.”


A few days later, the little girl asked her father the same question. The father answered, “Many years ago there were apelike creatures, and we developed from them.”


The confused girl returned to her mother and said, “Mom, how is it possible that you told me that the human race was created by God, and Papa says we developed from ‘apelike creatures’?”


The mother answered, “Well, dear, it is very simple. I told you about the origin of my side of the family, and your father told you about his.”

Man does not compute?

 The Non-Computable Human

Robert J. Marks II


Editor’s note: We are delighted to present an excerpt from Chapter 1 of the new book Non-Computable You: What You Do that Artificial Intelligence Never Will, by computer engineer Robert J. Marks, director of Discovery Institute’s Bradley Center for Natural and Artificial Intelligence.


If you memorized all of Wikipedia, would you be more intelligent? It depends on how you define intelligence. 


Consider John Jay Osborn Jr.’s 1971 novel The Paper Chase. In this semi-autobiographical story about Harvard Law School, students are deathly afraid of Professor Kingsfield’s course on contract law. Kingsfield’s classroom presence elicits both awe and fear. He is the all-knowing professor with the power to make or break every student. He is demanding, uncompromising, and scary smart. In the iconic film adaptation, Kingsfield walks into the room on the first day of class, puts his notes down, turns toward his students, and looms threateningly.


“You come in here with a skull full of mush,” he says. “You leave thinking like a lawyer.” Kingsfield is promising to teach his students to be intelligent like he is. 


One of the law students in Kingsfield’s class, Kevin Brooks, is gifted with a photographic memory. He can read complicated case law and, after one reading, recite it word for word. Quite an asset, right?


Not necessarily. Brooks has a host of facts at his fingertips, but he doesn’t have the analytic skills to use those facts in any meaningful way.


Kevin Brooks’s wife is supportive of his efforts at school, and so are his classmates. But this doesn’t help. A tutor doesn’t help. Although he tries, Brooks simply does not have what it takes to put his phenomenal memorization skills to effective use in Kingsfield’s class. Brooks holds in his hands a million facts that, because of his lack of understanding, are essentially useless. He flounders in his academic endeavor. He becomes despondent. Eventually he attempts suicide.


Knowledge and Intelligence

This sad tale highlights the difference between knowledge and intelligence. Kevin Brooks’s brain stored every jot and tittle of every legal case assigned by Kingsfield, but he couldn’t apply the information meaningfully. Memorization of a lot of knowledge did not make Brooks intelligent in the way that Kingsfield and the successful students were intelligent. British journalist Miles Kington captured this distinction when he said, “Knowing a tomato is a fruit is knowledge. Intelligence is knowing not to include it in a fruit salad.”


Which brings us to the point: When discussing artificial intelligence, it’s crucial to define intelligence. Like Kevin Brooks, computers can store oceans of facts and correlations; but intelligence requires more than facts. True intelligence requires a host of analytic skills. It requires understanding; the ability to recognize humor, subtleties of meaning, and symbolism; and the ability to recognize and disentangle ambiguities. It requires creativity.


Artificial intelligence has done many remarkable things. AI has largely replaced travel agents, tollbooth attendants, and mapmakers. But will AI ever replace attorneys, physicians, military strategists, and design engineers, among others?


The answer is no. And the reason is that as impressive as artificial intelligence is — and make no mistake, it is fantastically impressive — it doesn’t hold a candle to human intelligence. It doesn’t hold a candle to you.


And it never will. How do we know? The answer can be stated in a single four-syllable word that needs unpacking before we can contemplate the non-computable you. That word is algorithm. If not expressible as an algorithm, a task is not computable.


Algorithms and the Computable

An algorithm is a step-by-step set of instructions to accomplish a task. A recipe for German chocolate cake is an algorithm. The list of ingredients acts as the input for the algorithm; mixing the ingredients and following the baking and icing instructions will result in a cake.
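To make the idea concrete, here is a minimal sketch in Python (the function name, steps, and quantities are invented for illustration, not an actual recipe): the ingredient list is the input, and a fixed, ordered sequence of instructions produces the output.

# A minimal sketch of a recipe as an algorithm. The ingredient list is
# the input; a fixed, ordered sequence of steps yields the output.
# Function name, steps, and quantities are illustrative only.

def bake_cake(ingredients):
    steps = [
        "preheat the oven to 350 F",
        f"mix {ingredients['flour']} cups flour, {ingredients['eggs']} eggs, and the chocolate",
        "pour the batter into a pan and bake for 30 minutes",
        "spread coconut-pecan icing on top",
    ]
    for step in steps:  # execute each instruction in order
        print("->", step)
    return "cake"

bake_cake({"flour": 2, "eggs": 3})

The point is only the structure (input, ordered steps, output), which is all “algorithm” means here.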


Likewise, when I give instructions to get to my house, I am offering an algorithm to follow. You are told how far to go and which direction you are to turn on what street. When Google Maps returns a route to go to your destination, it is giving you an algorithm to follow. 


Humans are used to thinking in terms of algorithms. We make grocery lists, we go through the morning procedure of showering, hair combing, teeth brushing, and we keep a schedule of what to do today. Routine is algorithmic. Engineers algorithmically apply Newton’s laws of physics when designing highway bridges and airplanes. Construction plans captured on blueprints are part of an algorithm for building. Likewise, chemical reactions follow algorithms discovered by chemists. And all mathematical proofs are algorithmic; they follow step-by-step procedures built on the foundations of logic and axiomatic presuppositions. 


Algorithms need not be deterministic; they can contain stochastic elements, such as descriptions of random events in population genetics and weather forecasting. The board game Monopoly, for example, follows a fixed set of rules, but each game unfolds differently through random dice throws and player decisions.
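A minimal sketch of such a stochastic algorithm, continuing the Monopoly example (the dice-rolling loop below is a hypothetical illustration, not anything from the book):

import random

# Fixed rules (the algorithm) driven by random input (the dice):
# the rules never change, yet each run of the game unfolds differently.

def monopoly_turn(position, board_size=40):
    roll = random.randint(1, 6) + random.randint(1, 6)  # two six-sided dice
    return (position + roll) % board_size               # wrap around past "Go"

position = 0
for turn in range(1, 6):
    position = monopoly_turn(position)
    print(f"after turn {turn}: square {position}")

Even here everything remains algorithmic: the randomness itself is produced and consumed by step-by-step instructions.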


Here’s the key: Computers only do what they’re programmed by humans to do, and those programs are all algorithms — step-by-step procedures contributing to the performance of some task. But algorithms are limited in what they can do. That means computers, limited to following algorithmic software, are limited in what they can do.


This limitation is captured by the very word “computer.” In the world of programmers, “algorithmic” and “computable” are often used interchangeably. And since “algorithmic” and “computable” are synonyms, so are “non-computable” and “non-algorithmic.”


Basically, for computers — for artificial intelligence — there’s no other game in town. All computer programs are algorithms; anything non-algorithmic is non-computable and beyond the reach of AI.


But it’s not beyond you. 


Non-Computable You

Humans can behave and respond non-algorithmically. You do so every day. For example, you perform a non-algorithmic task when you bite into a lemon. The lemon juice squirts on your tongue and you wince at the sour flavor. 


Now, consider this: Can you fully convey your experience to a man who was born with no sense of taste or smell? No. You cannot. The goal is not a description of the lemon-biting experience, but its duplication. The lemon’s chemicals and the mechanics of the bite can be described to the man, but the true experience of the lemon taste and aroma cannot be conveyed to someone without the necessary senses.


If biting into a lemon cannot be explained to a man without all his functioning senses, it certainly can’t be duplicated in an experiential way by AI using computer software. Like the man born with no sense of taste or smell, machines do not possess qualia — experiential sensory perceptions such as pain, taste, and smell.


Qualia are a simple example of the many human attributes that escape algorithmic description. If you can’t formulate an algorithm explaining your lemon-biting experience, you can’t write software to duplicate the experience in the computer.


Or consider another example. I broke my wrist a few years ago, and the physician in the emergency room had to set the broken bones. I’d heard beforehand that bone-setting really hurts. But hearing about pain and experiencing pain are quite different. 


To set my broken wrist, the emergency physician grabbed my hand and arm, pulled, and there was an audible crunching sound as the bones around my wrist realigned. It hurt. A lot. I envied my preteen grandson, who had been anesthetized when his broken leg was set. He slept through his pain.


Is it possible to write a computer program to duplicate — not describe, but duplicate — my pain? No. Qualia are not computable. They’re non-algorithmic.


By definition and in practice, computers function using algorithms. Logically speaking, then, the existence of the non-algorithmic suggests there are limits to what computers and therefore AI can do.

Darwinists attempt to correct God again.

 From Darwinists, a Shift in Tone on Nanomachines

William A. Dembski


I am reviewing Jason Rosenhouse’s new book, The Failures of Mathematical Anti-Evolutionism (Cambridge University Press), serially. For the full series so far, go here.


Unfortunately for Darwinists, irreducible complexity raises real doubts about Darwinism in people’s minds. Something must be done. Rising to the challenge, Darwinists are doing what must be done to control the damage. Take the bacterial flagellum, the poster child of irreducibly complex biochemical machines. Whatever biologists may have thought of its ultimate origins, they tended to regard it with awe. Harvard’s Howard Berg, who discovered that flagellar filaments rotate to propel bacteria through their watery environments, would in public lectures refer to the flagellum as “the most efficient machine in the universe.” (And yes, I realize there are many different bacteria sporting many different variants of the flagellum, including the souped-up hyperdrive magnetotactic bacteria, which swim ten times faster than E. coli — E. coli’s flagellum, however, seems to be the one most studied.)

Why “Machines”?

In 1998, writing for a special issue of Cell, the National Academy of Sciences president at the time, Bruce Alberts, remarked:


We have always underestimated cells… The entire cell can be viewed as a factory that contains an elaborate network of interlocking assembly lines, each of which is composed of a set of large protein machines… Why do we call the large protein assemblies that underlie cell function protein machines? Precisely because, like machines invented by humans to deal efficiently with the macroscopic world, these protein assemblies contain highly coordinated moving parts. [Emphasis in the original.]


A few years later, in 2003, Adam Watkins, introducing a special issue on nanomachines for BioEssays, wrote: 


The articles included in this issue demonstrate some striking parallels between artifactual and biological/molecular machines. In the first place, molecular machines, like man-made machines, perform highly specific functions. Second, the macromolecular machine complexes feature multiple parts that interact in distinct and precise ways, with defined inputs and outputs. Third, many of these machines have parts that can be used in other molecular machines (at least, with slight modification), comparable to the interchangeable parts of artificial machines. Finally, and not least, they have the cardinal attribute of machines: they all convert energy into some form of ‘work’.


Neither of these special issues offered detailed step-by-step Darwinian pathways for how these machine-like biological systems might have evolved, but they did talk up their design characteristics. I belabor these systems and the special treatment they received in these journals because none of the mystery surrounding their origin has in the intervening years been dispelled. Nonetheless, the admiration that they used to inspire has diminished. Consider the following quote about the flagellum from Beeby et al.’s 2020 article on propulsive nanomachines. Rosenhouse cites it approvingly, prefacing the quote by claiming that the flagellum is “not the handiwork of a master engineer, but is more like a cobbled-together mess of kludges” (pp. 151–152):


Many functions of the three propulsive nanomachines are precarious, over-engineered contraptions, such as the flagellar switch to filament assembly when the hook reaches a pre-determined length, requiring secretion of proteins that inhibit transcription of filament components. Other examples of absurd complexity include crude attachment of part of an ancestral ATPase for secretion gate maturation, and the assembly of flagellar filaments at their distal end. All cases are absurd, and yet it is challenging to (intelligently) imagine another solution given the tools (proteins) to hand. Indeed, absurd (or irrational) design appears a hallmark of the evolutionary process of co-option and exaptation that drove evolution of the three propulsive nanomachines, where successive steps into the adjacent possible function space cannot anticipate the subsequent adaptations and exaptations that would then become possible. 


The shift in tone from then to now is remarkable. What happened to the awe these systems used to inspire? Have investigators really learned so much in the intervening years to say, with any confidence, that these systems are indeed over-engineered? To say that something is over-engineered is to say that it could be simplified without loss of function (like a Rube Goldberg device). And what justifies that claim here? Have scientists invented simpler systems that in all potential environments perform as well as or better than the systems in question? Are they able to go into existing flagellar systems, for instance, and swap out the over-engineered parts with these more efficient (sub)systems? Have they in the intervening years gained any real insight into the step-by-step evolution of these systems? Or are they merely engaged in rhetoric to make flagellar motors seem less impressive and thus less plausibly the product of design? To pose these questions is to answer them.


A Quasi-Humean Spirit

Rosenhouse even offers a quasi-Humean anti-design argument. Humans are able to build things like automobiles, but not things like organisms. Accordingly, ascribing design to organisms is an “extravagant extrapolation” from “causes now in operation.” Rosenhouse’s punchline: “Based on our experience, or on comparisons of human engineering to the natural world, the obvious conclusion is that intelligence cannot at all do what they [i.e., ID proponents] claim it can do. Not even close. Their argument is no better than saying that since moles are seen to make molehills, mountains must be evidence for giant moles.” (p. 273) 


Seriously?! As Richard Dawkins has been wont to say, “This is a transparently feeble argument.” So, primitive humans living with stone-age technology, if they were suddenly transported to Dubai, would be unable to get up to speed and recognize design in the technologies on display there? Likewise, we, confronted with space aliens whose technologies can build organisms using ultra-advanced 3D printers, would be unable to recognize that they were building designed objects? I intend these statements as rhetorical questions whose answer is obvious. What underwrites our causal explanations is our exposure to and understanding of the types of causes now in operation, not the idiosyncrasies of their operation. Because we are designers, we can appreciate design even if we are unable to replicate the design ourselves. Lost arts are lost because we are unable to replicate the design, not because we are unable to recognize the design. Rosenhouse’s quasi-Humean anti-design argument is ridiculous.


Next, “Darwinist Turns Math Cop: Track 1 and Track 2.”


Editor’s note: This review is cross-posted with permission of the author from BillDembski.com.

Tuesday 21 June 2022

The enemy of my enemy…?


At the house next door: No one's home?

 New Analysis Casts Doubt on Claims for Life on Venus

Evolution News @DiscoveryCSC


A new study throws cold water (vapor?) on an earlier paper that suggested that aerial life forms could exist in Venus’s massive cloud cover:


Researchers from the University of Cambridge used a combination of biochemistry and atmospheric chemistry to test the ‘life in the clouds’ hypothesis, which astronomers have speculated about for decades, and found that life cannot explain the composition of the Venusian atmosphere.


Any life form in sufficient abundance is expected to leave chemical fingerprints on a planet’s atmosphere as it consumes food and expels waste. However, the Cambridge researchers found no evidence of these fingerprints on Venus. 


UNIVERSITY OF CAMBRIDGE, “NO SIGNS (YET) OF LIFE ON VENUS,” AT SCIENCE DAILY (JUNE 14, 2022). THE PAPER IS OPEN ACCESS.

The contention in the earlier paper was that chemicals present in Venus’s clouds are consistent with production by life forms.


Not a Biosignature

Although the authors of the study published last week, Sean Jordan, Oliver Shorttle, and Paul B. Rimmer, say that the specifics of Venus’s atmospheric chemistry are not a biosignature (evidence of life), they stress that the atmosphere on Venus is nonetheless “strange.”

They hope that their work will assist in identifying other promising sites for extraterrestrial life:


“To understand why some planets are alive, we need to understand why other planets are dead,” said Shorttle. “If life somehow managed to sneak into the Venusian clouds, it would totally change how we search for chemical signs of life on other planets.”


“Even if ‘our’ Venus is dead, it’s possible that Venus-like planets in other systems could host life,” said Rimmer, who is also affiliated with Cambridge’s Cavendish Laboratory. “We can take what we’ve learned here and apply it to exoplanetary systems — this is just the beginning.”

They hope their method of analysis will prove a help later this year when the James Webb Space Telescope starts returning images of planets outside our solar system.


Read the rest at Mind Matters News, published by Discovery Institute’s Bradley Center for Natural and Artificial Intelligence.



Paleo-Darwinism vs. evolution in general?

 Jason Rosenhouse, a Crude Darwinist

William A. Dembski


I am reviewing Jason Rosenhouse’s new book, The Failures of Mathematical Anti-Evolutionism (Cambridge University Press), serially. For the full series so far, go here.


For Rosenhouse, Darwin can do no wrong and Darwin’s critics can do no right. As a fellow mathematician, I would have liked to see from Rosenhouse a vigorous and insightful discussion of my ideas, especially where there’s room for improvement, as well as some honest admission of why neo-Darwinism falls short as a compelling theory of biological evolution and why mathematical criticisms of it could at least have some traction. Instead, Rosenhouse assumes no burden of proof, treating Darwin’s theory as a slam dunk and treating all mathematical criticisms of Darwin’s theory as laughable. Indeed, he has a fondness for the word “silly,” which he uses repeatedly, and according to him mathematicians who use math to advance intelligent design are as silly as they come.


Anti-Evolutionism or Anti-Darwinism?

In using the phrase “mathematical anti-evolutionism,” Rosenhouse mistitled his book. Given its aim and arguments, it should have been titled The Failures of Mathematical Anti-Darwinism. Although design theorists exist who reject the transformationism inherent in evolutionism (I happen to be one of them), intelligent design’s beef is not with evolution per se but with the supposed naturalistic mechanisms driving evolution. And when it comes to naturalistic mechanisms driving evolution, there’s only one game in town, namely, neo-Darwinism, which I’ll refer to simply as Darwinism. In any case, my colleague Michael Behe, who also comes in for criticism from Rosenhouse, is an evolutionist. Behe accepts common descent, the universal common ancestry of all living things on planet earth. And yet Behe is not a Darwinist — he sees Darwin’s mechanism of natural selection acting on random variations as having at best very limited power to explain biological innovation. 


Reflexive Darwinism

Rosenhouse is a Darwinist, and a crude, reflexive one at that. For instance, he will write: “Evolution only cares about brute survival. A successful animal is one that inserts many copies of its genes into the next generation, and one can do that while being not very bright at all.” (p. 14) By contrast, more nuanced Darwinists (like Robert Wright) will stress how Darwinian processes can enhance cooperation. Others (like Geoffrey Miller) will stress how sexual selection can put a premium on intelligence (and thus on “being bright”). But Rosenhouse’s Darwinism plays to the lowest common denominator. Throughout the book, he hammers on the primacy of natural selection and random variation, entirely omitting such factors as symbiosis, gene transfer, genetic drift, and the action of regulatory genes in development, to say nothing of self-organizational processes.


Rosenhouse’s Darwinism commits him to Darwinian gradualism: Every adaptation of organisms is the result of a gradual step-by-step evolutionary process with natural selection ensuring the avoidance of missteps along the way. Writing about the evolution of “complex biological adaptations,” he notes: “Either the adaptation can be broken down into small mutational steps or it cannot. Evolutionists say that all adaptations studied to date can be so broken down while anti-evolutionists deny this…” (p. 178) At the same time, Rosenhouse denies that adaptations ever require multiple coordinated mutational steps: “[E]volution will not move a population from point A to point B if multiple, simultaneous mutations are required. No one disagrees with this, but in practice there is no way of showing that multiple, simultaneous mutations are actually required.” (pp. 159–160) 


“Mount Improbable”

And why are multiple simultaneous mutations strictly verboten? Because they would render life’s evolution too improbable, making it effectively impossible for evolution to climb Mount Improbable (which is both a metaphor and the title of a book by Richard Dawkins). Simultaneous mutations throw a wrench in the Darwinian gearbox. If they played a significant role in evolution, Darwinian gradualism would become untenable. Accordingly, Rosenhouse maintains that such large-scale mutational changes never happen and are indemonstrable even if they do happen. Rosenhouse presents this point of view not with a compelling argument, but as an apologist intent on neutralizing intelligent design’s threat to Darwinism. 


Next, “The Silence of the Evolutionary Biologists.”


Editor’s note: This review is cross-posted with permission of the author from BillDembski.com.

It looks like technology because it is?

 Physicist Brian Miller: The Fruitful Marriage of Biology and Engineering

David Klinghoffer


Discovery Institute physicist Brian Miller spoke at the recent Dallas Conference on Science and Faith. His theme was “The Surprising Relevance of Engineering in Biology.” 


Afterward, moderated by John West, he took some very thoughtful questions from the audience. Miller notes the fruitful marriage of biology and engineering, as in, for example, the study of control systems: “What you find is parallel research: that biologists are understanding these systems, engineers independently discover these systems, and when they work together they’re looking at the overlap. So, what’s happening now is engineers are learning from biology to do engineering better.” If biology isn’t designed, which is another way of saying “engineered,” wouldn’t this state of affairs be pretty counterintuitive? Enjoy the rest of the Q&A with Dr. Miller:

<iframe width="770" height="433" src="https://www.youtube.com/embed/TH4Woh9S1ig" title="Brian Miller Answers Questions about the Relevance of Engineering to Biology" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

A peacemaker between mathematics and Darwinism?

 The Challenge from Jason Rosenhouse

William A. Dembski


I am reviewing Jason Rosenhouse’s new book, The Failures of Mathematical Anti-Evolutionism (Cambridge University Press), serially. For the full series so far, go here.


To show readers that he means business and that he is a bold, brave thinker, Rosenhouse lays down the gauntlet: “Anti-evolutionists play well in front of friendly audiences because in that environment the speakers never pay the price of being wrong. The response would be a lot chillier if they tried the same arguments in front of audiences with the relevant expertise. Try telling a roomful of mathematicians that you can refute evolutionary theory with a few back-of-the-envelope probability calculations, and see how far you get.” (Epilogue, pp. 270-271)


I’m happy to take up Rosenhouse’s gauntlet. In fact, I already have. I’ve presented my ideas and arguments to roomfuls of not just mathematicians but also biologists and the whole range of scientists on whose disciplines my work impinges. A case in point is a 2014 talk I gave on conservation of information at the University of Chicago, a talk sponsored by my old physics advisor Leo Kadanoff. The entire talk, including Q&A, is available on YouTube.

In such talks, I present quite a bit more detail than a mere back-of-the-envelope probability calculation, though full details, in a single talk (as opposed to a multi-week seminar), require referring listeners to my work in the peer-reviewed literature (none of which Rosenhouse cites in his book). 


My Challenge to Jason Rosenhouse

If I receive a chilly reception in giving such talks, it’s not for any lack of merit in my ideas or work. Rather, it’s the prejudicial contempt evident in Rosenhouse’s challenge above, which is widely shared among Darwinists, who are widespread in the academy. For instance, Rosenhouse’s comrade in arms, evolutionary biologist Jerry Coyne, who is at the University of Chicago, tried to harass Leo into canceling my 2014 talk, but Leo was not a guy to be intimidated — the talk proceeded as planned (Leo sent me copies of the barrage of emails he received from Coyne to persuade him to uninvite me). For the record, I’m happy to debate Rosenhouse, or any mathematicians, engineers, biologists, or whatever, who think they can refute my work. 


Next, “Jason Rosenhouse, a Crude Darwinist.”


Editor’s note: This review is cross-posted with permission of the author from BillDembski.com.

Wednesday 15 June 2022

Sacrifice without cost?

1 Chronicles 21:24 KJV: "And king David said to Ornan, Nay; but I will verily buy it for the full price: for I will not take that which is thine for the LORD, nor offer burnt offerings without cost."


King David realized that a cost-free sacrifice is in effect no sacrifice at all. Yet is this not the effect that Christendom's theology, re: Christ being the God-man and unconditional immortality, has on the supposed atonement? Christendom's reductive spiritualism has the effect of rendering the physical body (Gk. soma) worse than useless, a prison of rotting flesh that anchors our "real selves" to the ground during our probation on this earth. Surely being liberated from any prison is a blessing and not a sacrifice.


Matthew 20:28 KJV: "Even as the Son of man came not to be ministered unto, but to minister, and to give his life (Gk. psyche) a ransom for many."


Obviously, if Christ's real self (soul) was immortal, or if he was the God-man, or both, he could not give his soul as a ransom. The mere liberation of his true self from its prison of flesh would constitute no genuine sacrifice. For Christ's atonement offering to be genuinely substitutionary, his death would have to be identical in nature to that of the first Adam.


1 Corinthians 15:21 KJV: "For since by man came death, by man came also the resurrection of the dead."


And as to the nature of the first Adam's death, let's not speculate, but let JEHOVAH'S word be the authority.


Genesis 3:19 KJV: "In the sweat of thy face shalt thou eat bread, till thou RETURN unto the ground; for out of it wast thou taken: for dust thou art, and unto dust shalt thou RETURN."


Thus Adam was to RETURN to his pre-creation state. That is what death meant to Adam. For the second Adam to serve as a genuine substitute for the first and thus effect an atonement, his death MUST have the same significance.

Why we must give the members of Christendom's trinity a 'fail' in Godhood.

What is meant by the expression "fully God"? The Bible tells us that there is just one who is autotheos and thus entitled to absolute worship.

1 Corinthians 8:6 NIV: "yet for us there is but one God, the Father, from whom all things came..."

John 17:3 KJV: "And this is life eternal, that they might know thee the only true God,..."

Biblical theology tells us that there are four qualities that set the Lord JEHOVAH apart as uniquely qualified to receive absolute worship.

1. He is both necessary and sufficient as the source and sustainer of life and everything required for its flourishing.

2. He is superlative in authority being without equal or even approximate.

3. He is totally immutable.

4. He is omnipotent/omniscient.

Can any member of Christendom's trinity thus be considered fully God in any meaningful sense?

Obviously no member of Christendom's triad can be both necessary and sufficient as a first cause: if any one of the three is sufficient as a first cause, the other two are made unnecessary; and if all three are necessary, none is sufficient.

As per the dictionary definition of superlative, one can be either superlative or coequal but not both; thus none of Christendom's triad would qualify as superlative.

Malachi 3:6 ASV: "For I, Jehovah, change not; therefore ye, O sons of Jacob, are not consumed."

According to Christendom, JEHOVAH'S plain declaration that he is not subject to even the least change actually means he is subject to infinite change thus he could become a creature subject to death. We reject the fantastic leaps of logic and mental contortions needed to concur with such nonsense. Thus here too, the members of Christendom's triad fail the test of Godhood as determined by Scripture.

Genesis 17:1 ASV: "And when Abram was ninety years old and nine, Jehovah appeared to Abram, and said unto him, I am God Almighty; walk before me, and be thou perfect."

The declaration that JEHOVAH is the almighty God does not merely suggest that the Lord JEHOVAH is mightier than any other but that he is mightier than all others combined. Indeed, he is a bottomless reservoir of potential energy.

Isaiah 40:28 ASV: "Hast thou not known? hast thou not heard? The everlasting God, JEHOVAH, the Creator of the ends of the earth, fainteth not, neither is weary; there is no searching of his understanding."

If there are in fact two (or is it three?) others as mighty as one, then one is clearly not the mightiest; thus we are forced to give the members of Christendom's triad another fail in the test of true Godhood.

The 'R' word?

 <iframe width="953" height="536" src="https://www.youtube.com/embed/q917Mp3yerE" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

Yet more evidence that ID is already mainstream.

 Carl Sagan: “An Intelligence That Antedates the Universe”

Paul Nelson

The late astronomer and science popularizer Carl Sagan (1934-1996) is often seen as an exemplar of a certain attitude on the relationship of science and theology: skeptical, anti-religion, pro-naturalism. Abundant evidence supports this view of Sagan, but there are fascinating hints in both his technical and popular writings that Sagan’s understanding of design detection was far subtler and more open-ended than many realize. Like his British contemporary, the astronomer Fred Hoyle (1915-2001), Sagan left evidence that he might well have enjoyed conversations with intelligent design theorists. Such historical counterfactuals are tricky at best, of course, so let’s look at some of the available evidence, and the reader can speculate on her own.


Design Detection in the Galileo Mission

As a scientist on the Galileo interplanetary mission, Sagan designed experiments to be carried on the spacecraft to detect — as a proof-of-principle — the presence of life, but especially intelligent life, on Earth. During Galileo’s December 1990 fly-by of Earth, as the craft was getting a gravitational boost on its way out to the gas giants of the outer Solar System, its instruments indeed detected striking chemical disequilibria in Earth’s atmosphere, best explained by the presence of organisms.


But it was Galileo’s detection of “narrow-band, pulsed, amplitude-modulated radio transmissions” that seized the brass ring of design detection — where “design” means a pattern or event caused by an intelligence (with a mind), not a physical or chemical process. Sagan and colleagues (1993: 720) wrote:


The fact that the central frequencies of these signals remain constant over periods of hours strongly suggests an artificial origin. Naturally generated radio emissions almost always display significant long-term frequency drifts. Even more definitive is the existence of pulse-like amplitude modulations…such modulation patterns are never observed for naturally occurring radio emissions and implies the transmission of information. [Emphasis added.]


Only someone who conceived of “intelligence” as a kind of cause with unique and detectable indicia would bother setting up this proof-of-principle experiment. But it’s the evidence from Sagan’s popular writings that is especially provocative.


Design Detection in Sagan’s Novel Contact

The last chapter (24) of Sagan’s novel Contact (1985; later made into a film [1997] starring Jodie Foster) is an unmistakable example of number mysticism and design detection, using pi — the mathematical constant and irrational number expressing the ratio between the circumference of any circle and its diameter. Entitled “The Artist’s Signature,” the chapter opens with two epigraphs, as follows:


Behold, I tell you a mystery; we shall not all sleep, but we shall all be changed. 


1 COR. 15:51

The universe seems…to have been determined and ordered in accordance with the creator of all things; for the pattern was fixed, like a preliminary sketch, by the determination of number pre-existent in the mind of the world-creating God.


NICOMACHUS OF GERASA, ARITHMETIC I, 6 (CA. AD 100)

This passage, from the very end of the chapter — and the book — bears quoting. Sagan places the whole section in italics for emphasis:


The universe was made on purpose, the circle said…As long as you live in this universe, and have a modest talent for mathematics, sooner or later you’ll find it. It’s already here. It’s inside everything. You don’t have to leave your planet to find it. In the fabric of space and the nature of matter, as in a great work of art, there is, written small, the artist’s signature. Standing over humans, gods, and demons, subsuming Caretakers and Tunnel builders, there is an intelligence that antedates the universe. [Emphasis added.]


Design’s Narrative Power

Of course, Contact is a novel, not a scientific or philosophical treatise. Sagan was writing for drama (Contact actually started out as a movie treatment in 1980-81). But rather like his contemporaries Arthur C. Clarke and Stanley Kubrick, Sagan loved to play around with concepts of design detection and non-human intelligence. Their narrative power was undeniable.


And that sentence — “there is an intelligence that antedates the universe” — come on, that’s being deliberately provocative. In any case, mathematical objects such as pi, or prime numbers, have long held a special status as design indicia. The atheist radio astronomer and SETI researcher Jill Tarter, the real-life model for the Ellie Arroway / Jodie Foster character in Contact, has said that she would regard the decimal expansion of pi, if detected by a radio telescope, as a gold-standard indicator of extraterrestrial intelligence.


Sagan and Intelligent Design

In 1985, when Contact was first published, intelligent design as an intellectual position was largely confined to the edges of academic philosophy, in the work of people such as the Canadian philosopher John Leslie, and a few hardy souls in the neighborhood of books like Thaxton, Bradley, and Olsen, The Mystery of Life’s Origin (1984).


So Sagan (and Fred Hoyle, whose sci-fi novel The Black Cloud was credited by Richard Dawkins as the book having the greatest influence on him; the story opens with a design inference) could afford to play with notions of design detection, non-human intelligences, and the like. These ideas, which are exciting and full of fascinating implications, posed little risk to the dominance of naturalism in science. Detecting non-human intelligence made for good sci-fi.


When ID appeared to become a real cultural threat, however — as it did starting in the mid 1990s in the United States — the dynamic shifted. Still, while Sagan was anti-religious, he was decidedly not anti-design, in the generic sense of the detectability of intelligent causation as a mode distinct from ordinary physical causation. In any case, he died in 1996, and therefore missed the coming high points of the ID debate. Others took up the skeptical mantle, to make sure that design never found a footing in science proper.


As boundary-pushers, both Sagan and Hoyle caught plenty of flak during their lifetimes. Sagan, for instance, was never elected to the National Academy of Sciences. Both paid a price for their popularity and willingness to write novels toying with non-human intelligences. It is interesting, then, to wonder how Sagan would have responded to ID, as articulated by Michael Behe, William Dembski, Stephen Meyer, etc., and how he might have separated his own views from it.


Historical counterfactuals are a playground. Play fairly, and share the equipment.


Molecular clocks to the rescue?

 Molecular Clocks Can’t Save Darwinists from the Cambrian Dilemma

David Coppedge

To explain away the Cambrian explosion has been and remains a high priority for Darwinists. Current Biology published one such attempt. On reading certain parts, you might think the authors, including Maximilian Telford, Philip Donoghue, and Ziheng Yang, have solved the problem. Indeed, their first Highlight in the paper summary claims, “Molecular clock analysis indicates an ancient origin of animals in the Cryogenian.” (Cryogenian refers to the Precambrian “cold birth” era about 720 to 635 million years ago.) By itself that statement would be misleading, because the title of the open-access paper is pessimistic: “Uncertainty in the Timing of Origin of Animals and the Limits of Precision in Molecular Timescales.”


Yang appeared briefly in Stephen Meyer’s book Darwin’s Doubt with bad news. Meyer cited a paper Yang co-authored with Aris-Brosou in 2011 showing that molecular clock analyses are unreliable. They “found that depending on which genes and which estimation methods were employed, the last common ancestor of protostomes or deuterostomes (two broadly different types of Cambrian animals) might have lived anywhere between 452 million years and 2 billion years ago” (Meyer, p. 106). 


Nothing has changed since then. The bottom line, after a lot of wrangling with numbers, strategies, and analyses, is that all current methods of dating the ancestors of the Cambrian animals from molecular clocks are imprecise and uncertain. They cannot be trusted to defuse the explosion by rooting the animal ancestors earlier in the Precambrian.


Although a Cryogenian origin of crown Metazoa agrees with current geological interpretations, the divergence dates of the bilaterians remain controversial. Thus, attempts to build evolutionary narratives of early animal evolution based on molecular clock timescales appear to be premature. [Emphasis added.]


Check Out the Euphemisms

Translated into plain English, that means, “We can’t tell our favorite evolutionary story because the clock is broken, but we’re working on it.”


In the paper, they provide an analysis of molecular clock data. It’s clear they believe that all the data place the root of the divergence in the Ediacaran or earlier, 100 million years or more before the Cambrian, but can they really defend their belief? They have to admit severe empirical limits:


Here we use an unprecedented amount of molecular data, combined with four fossil calibration strategies (reflecting disparate and controversial interpretations of the metazoan fossil record) to obtain Bayesian estimates of metazoan divergence times. Our results indicate that the uncertain nature of ancient fossils and violations of the molecular clock impose a limit on the precision that can be achieved in estimates of ancient molecular timescales.


Perhaps, a defender might interrupt, the precision, admittedly limited, is good enough. But then, there are those pesky fossils! The molecular clocks are fuzzily in agreement about ancestors in the Precambrian, but none of them has support from the very best observational evidence: the record of the rocks. Even the phyla claimed to exist before the explosion are contested:


Unequivocal fossil evidence of animals is limited to the Phanerozoic [i.e., the modern eon from Cambrian to recent, where animals are plentiful]. Older records of animals are controversial: organic biomarkers indicative of demosponges are apparently derived ultimately from now symbiotic bacteria; putative animal embryo fossils are alternately interpreted as protists; and contested reports of sponges, molluscs, and innumerable cnidarians, as well as putative traces of eumetazoan or bilaterian grade animals, all from the Ediacaran. Certainly, there are no unequivocal records of crown-group bilaterians prior to the Cambrian, and robust evidence for bilaterian phyla does not occur until some 20 million years into the Cambrian.


This severely limits their ability to “calibrate” the molecular clock. Meyer granted the possible existence of three Precambrian phyla (sponges, molluscs, and cnidarians). But there are twenty other phyla that make their first appearance in the Cambrian, many of them far more complex than sponges. What good are the molecular methods if you can’t see any of the ancestors in the rocks?


Missing Ancestors

The authors admit that the Precambrian strata were capable of preserving the ancestors if they existed. 


No matter how imprecise, our timescale for metazoan diversification still indicates a mismatch between the fossil evidence used to calibrate the molecular clock analyses and the resulting divergence time estimates. This is not altogether surprising since, by definition, minimum constraints of clade ages anticipate their antiquity. Nevertheless, it is the extent of this prehistory that is surprising, particularly since the conditions required for exceptional fossil preservation, so key to evidencing the existence of animal phyla in the early Cambrian, obtained also in the Ediacaran.


The only way they can maintain their belief that the ancestors are way back earlier is to discount the fossil evidence as “negative evidence” and to put their trust in the molecular evidence. But how can they trust it, when the answers vary all over the place, depending on the methods used? One clever method is called “rate variation.” Would you trust a clock that has a variable rate? How about one fast-ticking clock for one animal, and a slow-ticking clock for another? 


When rate variation across a phylogeny is extreme (that is, when the molecular clock is seriously violated), the rates calculated on one part of the phylogeny will serve as a poor proxy for estimating divergence times in other parts of the tree. In such instances, divergence time estimation is challenging and the analysis becomes sensitive to the rate model used.
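The arithmetic behind that warning is simple to sketch. Assuming (purely for illustration, with invented numbers) a fixed genetic distance, the inferred date scales directly with whichever lineage's rate you borrow:

```python
# Back-of-the-envelope illustration (hypothetical numbers): divergence time T = D / r.
# Calibrate the rate r on one lineage, apply it to another, and the "date" moves
# by hundreds of millions of years.
D = 0.12  # substitutions per site separating two taxa (hypothetical)

rates = {
    "rate borrowed from a fast-ticking lineage": 0.12 / 550e6,  # per site per year
    "rate borrowed from a slow-ticking lineage": 0.12 / 800e6,
}

for label, r in rates.items():
    T = D / r
    print(f"{label}: ~{T / 1e6:.0f} million years ago")
```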


They try their trees with steady rates and with varying rates (“relaxed clock models” — amusing term). They try data partitioning. They try Bayesian analysis. None of the resulting estimates agree. Meyer discussed molecular clock problems in detail in Chapter 5 of Darwin’s Doubt. There’s nothing new here. “Here we show that the precision of molecular clock estimates of times has been grossly over-estimated,” they conclude. “… An evolutionary timescale for metazoan diversification that accommodates these uncertainties has precision that is insufficient to discriminate among causal hypotheses.” In the end, these evolutionists have to admit that fossils would be much, much better:


Above all, establishing unequivocal evidence for the presence of metazoan clades in the late Neoproterozoic, as well as for the absence in more ancient strata, will probably have more impact than any methodological advance in improving the accuracy and precision of divergence time estimates for deep metazoan phylogeny. Realizing the aim of a timescale of early animal evolution that is not merely accurate, but sufficiently precise to effect tests of hypotheses on the causes and consequences of early animal evolution, will require improved models of trait evolution and improved algorithms to allow analysis of genome-scale sequence data in tandem with morphological characters.


Wait a Minute

Isn’t that what Darwin provided — a model of trait evolution? Wasn’t it natural selection of gradual variations? Let’s parse this interesting quote that mentions Darwin:


The timing of the emergence of animals has troubled evolutionary biologists at least since Darwin, who was sufficiently incredulous that he considered the abrupt appearance of animal fossils in the Cambrian as a challenge to his theory of evolution by natural selection. There has been, as a result, a long history of attempts to rationalize a rapid radiation of animals through theories of non-uniform evolutionary processes, such as homeotic mutations, removal of environmental restrictions on larger body sizes, through to the assembly of gene regulation kernels — proposed both as an explanation for rapid rates of innovation followed by subsequent constraint against fundamental innovation of new body plans after the Cambrian. Indeed, there have been explicit attempts to accommodate rapid rates of phenotypic evolution in the early Cambrian, compatible with these hypotheses and a semi-literal (albeit phylogenetically constrained) reading of the fossil record.


And yet our results, as have others before them, suggest that there is no justification for invoking non-uniform mechanisms to explain the emergence of animals and their phylum-level body plans.


That phrase “semi-literal (albeit phylogenetically constrained) reading of the fossil record” is curious. How else are you supposed to read it? They are saying that you have to read the fossil record with Darwin-colored glasses to see it correctly. 


But they’re trying to have it both ways. They want a slow-and-gradual fuse leading up to the Cambrian explosion (disliking “non-uniform evolutionary processes”), which requires a non-literal reading of the fossil record with Darwin glasses on, but they can’t take the molecular data literally either, because the results are so method-dependent. You can almost hear them crying out for fossils. As Meyer’s book shows, the fossil record is more explosive now than it was in Darwin’s time.


The Information Enigma Again

Notice how they mention “the emergence of animals and their phylum-level body plans.” How do you get the information to build a phylum-level body plan? Once again, these authors ignore the information issue completely. They say, “Much of the molecular genetic toolkit required for animal development originated deep in eukaryote evolutionary history,” skirting past that with a lateral reference to a paper about a microbe that had no animal body plan. Talk of “emergence” just doesn’t cut it. What is the source of the information to build an animal body plan composed of multiple new cell types and tissues, with 3-D organization and integrated systems like sensory organs, locomotion, and digestive tracts? Is there an evolutionist who will please answer Meyer’s primary challenge?

As we’ve seen over and over again, many Darwinian evolutionists think they have done their job if they can just push the ancestry back in time. The fossil record doesn’t allow it, but even if it did, it wouldn’t solve the information problem. Calling it “emergence” is unsatisfactory. Calling it “innovation” is unsatisfactory. Calling it latent potential waiting for environmental factors like heat or oxygen is unsatisfactory. Answer the question: what is the source of the information to build twenty new animal body plans that appeared suddenly in the Cambrian without ancestors? We have an answer: intelligence. What’s yours?

Reductive materialism fails to account for mind.

 Can Self-Organization Theory Account for Consciousness?

Evolution News @DiscoveryCSC

Cognitive neuroscientist Bobby Azarian, author of The Romance of Reality: How the Universe Organizes Itself to Create Life, Consciousness, and Cosmic Complexity (2022), offers a self-organization theory approach to the reality of the mind:


Most neuroscientists believe that consciousness arises when harmonized global activity emerges from the coordinated interactions of billions of neurons. This is because the synchronized firing of brain cells integrates information from multiple processing streams into a unified field of experience. This global activity is made possible by loops in the form of feedback. When feedback is present in a system, it means there is some form of self-reference at work, and in nervous systems, it can be a sign of self-modeling. Feedback loops running from one brain region to another integrate information and bind features into a cohesive perceptual landscape.


When does the light of subjective experience go out? When the feedback loops cease, because it is these loops that harmonize neural activity and bring about the global integration of information. When feedback is disrupted, the brain still keeps on ticking, functioning physiologically and controlling involuntary functions, but consciousness dissolves. The mental model is still embedded in the brain’s architecture, but the observer fades as the self-referential process of real-time self-modeling ceases to produce a “self.” 


BOBBY AZARIAN, “THE MIND IS MORE THAN A MACHINE” AT NOEMA (JUNE 9, 2022)

One difficulty that arises is that many human beings produce a “self” with split brains, a brain missing key components, or only half a brain (or maybe less). That is real, but not consistent with the materialist model that Azarian outlines.


“The Missing Puzzle Piece”?

He goes on to say,


Could self-reference be the missing puzzle piece that allows for truly intelligent AIs, and maybe even someday sentient machines? Only time will tell, but Simon DeDeo, a complexity scientist at Carnegie Mellon University and the Santa Fe Institute, seems to think so: “Great progress in physics came from taking relativity seriously. We ought to expect something similar here: Success in the project of general artificial intelligence may require we take seriously the relativity implied by self-reference.”


BOBBY AZARIAN, “THE MIND IS MORE THAN A MACHINE” AT NOEMA (JUNE 9, 2022)

But wait. What’s this about “self”-reference? Machines, as we know them, don’t have a self.


Read the rest at Mind Matters News, published by Discovery Institute’s Bradley Center for Natural and Artificial Intelligence.

Monday 13 June 2022

A new earth?

2Peter3:13KJV"Nevertheless we, according to his promise, look for new heavens and a new earth, wherein dwelleth righteousness."

Righteousness was never an issue for the spirit heaven, although the Bible shows that some of those privileged to dwell in the very presence of God himself chose to rebel and set up their own kingdom. From there they have exerted a malignant influence over mankind in general. The apostle Paul refers to them as the cosmocrats.

Ephesians6:12KJV"For we wrestle not against flesh and blood, but against principalities, against powers, against the rulers (Grk. kosmokratoras) of the darkness of this world, against spiritual wickedness in high places."

Thus for human civilization, peace, justice, liberty, brotherhood and the like have most definitely been issues. To say the least, humankind's pursuit of these ideals has been characterized by frustration. Is this all part of JEHOVAH'S plan? Though some assert as much, the scriptures support no such notion.

Genesis1:28KJV"And God blessed them,.." A promise of success, even spectacular success, in their endeavors was made to the founders of our race. Thus life on this earth was not to be characterized by privation and insecurity.

Genesis2:8KJV"And the LORD God planted a garden eastward in Eden; and there he put the man whom he had formed. 9And out of the ground made the LORD God to grow every tree that is pleasant to the sight, and good for food; the tree of life also in the midst of the garden,.."

Note, please, that there was no tree of death in this original paradise planted by JEHOVAH himself; thus death and the deadly were not meant to have any place in man's future as purposed by JEHOVAH. It is man's arrogance, not the divine will, that has caused humanity's unfortunate detour from the Lord JEHOVAH'S intended blessing. And yet even after earning God's rightful anger we read of a promised blessing for humanity on this earth.

Genesis28:14KJV"And thy seed shall be as the dust of the earth, and thou shalt spread abroad to the west, and to the east, and to the north, and to the south: and in thee and in thy seed shall all the families of the earth be blessed."

Psalms37:9KJV"For evildoers shall be cut off: but those that wait upon the LORD, they shall inherit the earth."

Psalms46:9KJV"He maketh wars to cease unto the end of the earth; he breaketh the bow, and cutteth the spear in sunder; he burneth the chariot in the fire."

Psalms72:4-8KJV"He shall judge the poor of the people, he shall save the children of the needy, and shall break in pieces the oppressor.


5They shall fear thee as long as the sun and moon endure, throughout all generations.


6He shall come down like rain upon the mown grass: as showers that water the earth.


7In his days shall the righteous flourish; and abundance of peace so long as the moon endureth.


8He shall have dominion also from sea to sea, and from the river unto the ends of the earth."

Luke2:14KJV"“Glory to God in the highest heaven,


and on earth peace to those on whom his favor rests.”

The Lord JEHOVAH'S reassertion of his lawful sovereignty over mankind on this planet will ensure that JEHOVAH'S blessing of our race achieves its purpose. Politicians and their enablers may frustrate their fellow humans but they are no match for our planet's creator. 

Trinitarians' lack of self-awareness and John20:28.

That John20:28 continues to be a Trinitarian favorite says volumes about Trinitarian apologists' total lack of self-awareness. For some context: Trinitarians have for years pilloried the New World Translation's rendering of John1:1c, on account of its use of the indefinite article, insisting that Jesus is not a God.


John20:28ASV"28Thomas answered and said unto him, My Lord and my God." Now one possibility is that, the risen Christ being a superhuman messenger, Thomas could be addressing JEHOVAH through Jesus, the way the ancient prophets addressed him through his angels (see Exodus 3). Of course, Trinitarians are having none of that; they insist that Thomas called Jesus "the God of me" according to the Greek text. How could Jesus be the God of anyone without being a God?


John20:17ASV"Jesus saith to her, Touch me not; for I am not yet ascended unto the Father: but go unto my brethren, and say to them, I ascend unto my Father and your Father, and my God and your God. "


Here Jesus calls his Father "the God of me." Thus obviously his Father is a God, just as he is a Father of both Jesus and Thomas. So according to scripture Jesus is a Lord and a God, and his Father is a Lord and a God. Maybe one can fudge a modalist interpretation out of these facts, but a Trinitarian interpretation is beyond even fudging.


A Gap too wide?

 How Darwin and Wallace Split over the Human Mind

Neil Thomas

Richard Dawkins’s The Blind Watchmaker begins with the grand claim that “our own existence once presented the greatest of all mysteries, but … it is a mystery no longer because it is solved, Darwin and Wallace solved it.” Leaving to one side the fact that this statement is a prime example of what writer and satirist Tom Wolfe has dubbed the temptation to “cosmogonism — the compulsion to find the ever-elusive Theory of Everything,”1 the statement is, at best, only half true. For Alfred Russel Wallace, as early as the mid 1860s, had parted company with Charles Darwin on the subject of the human mind, with its staggering complexity and unique language facility. For him, on more mature reflection, no simple ape-to-human progression was any longer tenable, and he could no longer assent to the ontological equivalence of humans and nonhuman animals proposed by Darwin — and later subjected to a reductio ad absurdum by the philosopher Peter Singer, best known for his Animal Liberation (1975) and for his (seriously proposed) advocacy for a normalization of sexual relations between humans and animals.


Marvelously Free of Racism

Wallace had given much thought to his change of heart. Marvelously free of any racist prejudice even at the height of the colonial era, he had noted in his more than a decade of fieldwork in far-flung locations of the globe that primitive tribes were intellectually the equals of Europeans, even if not (yet) their equals at the technological level. “Savages” were, however, required to operate only in the context of simple activities where their great brainpower was redundant given the simplicities of their daily rounds. So, what was the point of their great mental powers and, more importantly, how had they evolved? After all, natural selection would not have been “called on” to enable them to perform cognitively challenging tasks for which there was presently no need. By extension, what was the survival value of musical and mathematical abilities for Europeans? These were patently not brute survival skills. How could they have been promoted by natural selection, which favors only immediate utility since, as Darwin himself repeatedly stated, it had no power of foresight? Wallace eventually answered that question (to his own satisfaction) by claiming that “an influx of a higher life” had supervened to accompany the arrival of Homo sapiens on the world’s stage — a volte-face which disappointed Darwin and made Wallace the target of some opprobrium from Darwin’s supporters.


Wallace and Natural Theology

In his older years Wallace came to reject natural selection as an explanation for the unfurling of all human and even animal life. By then he had transitioned towards the espousal of a form of natural theology; but his initial and gravest misgiving in the 1860s was focused four-square on the mystery of how the human brain could have evolved according to Darwinian lines of explanation. For Wallace it had become so clear that an additional power must have played a role that he thenceforth felt constrained to bid adieu to material modes of explanation. Rather like the adherents of the modern intelligent design trend, Wallace could not see how what is now termed “irreducible complexity” could have been thrown together by the only marginally discriminating forces of natural selection.


It is not difficult to sympathize with Wallace’s doubts. As Michael Ruse recently put it, “mind is the apotheosis of final cause, drenched in purpose … irreducibly teleological.”2 At the same time, however, Ruse, puzzlingly and to me somewhat contradictorily, contends, “Why should the evolutionist be expected to explain the nature of consciousness? Surely it can be taken as a given, and the evolutionist can move on … leave the discussion at that.”3 Wallace was certainly not prepared to accept such cherry-picking evasions and “leave the discussion at that.” And despite Dawkins’s transparent attempt to airbrush Wallace’s “apostasy” out of the historical record, the latter’s century-and-a-half-old question about natural selection’s inability to create the human mind has been maintained as a live issue by professional philosophers.


On Darwinian Principles

Wallace’s point was reprised by philosopher Anthony O’Hear who objected that evolutionary theory was inadequate to account for the emergence of the human mental and moral faculties. On Darwinian principles there was simply no source from which human morality and other higher faculties could have originated (all the less so if one believes that we as a species represent essentially a congeries of “selfish genes”):


How is it conceivable that consciousness should develop from unconscious precursors? There is no explanation to date and only those who believe that the difference between a cabbage or an automaton and a sentient human being is of small account will minimize the significance of this incomprehension.4


In other words, Darwinism simply cannot explain human nature to anything like its fullest extent. Both O’Hear and philosopher Richard Rorty have pointed to the plethora of “non-Darwinian motivations” in humankind, including that non-selfish moral compass which exists in all bar the most abject psychopaths. Hence O’Hear attacked the argument of Richard Dawkins when the latter insisted it was possible for humans to resist their selfish biological endowment in order to achieve more morally accountable human societies. Such moral resistance would not be logically possible if one holds to the strict doctrine of biological determinism. For given such a scenario, what resources would people have to draw on in order to escape the adamantine bonds of the deterministic straitjacket they were born into? There is then clearly a fatal logical contradiction in claiming that ethical behavior could be salvaged from the unyielding toils of biological determinism.5


As Anthony Flew once put it, “No eloquence can move pre-programmed robots.”6 It is therefore difficult to make a rationally justified case for the human mind having had the form of evolutionary history commonly imputed to it. Furthermore, the philosophical conclusion towards which Wallace was an early contributor has also come to be buttressed by an empirical discipline unknown in Wallace’s time — that of neuroscience, which throws valuable light on this philosophical issue, even, I would suggest, for those who publicly disdain the discipline of philosophy.


Philosophy and Neuroscience

Cognitive scientist Donald Hoffman, who once worked with DNA co-discoverer Francis Crick in attempting to crack the problem of human consciousness, recently conceded that the nature and origins of consciousness remain “completely unsolved” and may best be termed an eternal mystery.7 The brusque and decidedly no-nonsense Crick was in the event fated to meet his Waterloo when it came to the subject of consciousness, explains Hoffman. Crick had at first attempted to explain it somewhat airily as nothing but an “emergent” property which “naturally” arose when matter reaches a certain level of complexity. However, he was at length obliged to withdraw that vacuous contention, conceding that there is nothing about conscious experience that is relatable to the physical stuff or material of the brain. Consciousness simply lies beyond our empirical perception and cognitive reach.


Hoffman develops the point further: “At the most microcosmic level the brain consists of subatomic particles which have qualities like mass, spin and charge. There is nothing about these qualities that relates to the qualities associated with consciousness such as thought, taste, pain or anxiety.”8 To suggest otherwise, continues Hoffman, would be like asserting that numbers might emerge from biscuits or ethics from rhubarb. The bottom line seems to be that we are not only ignorant but, alas, prostrate in our ignorance of the brain’s arcana.9 Theoretically, of course, there may yet emerge an as yet undiscovered materialist explanation for the brain and human consciousness. But to date we must conclude that today’s science cannot with integrity support such a claim on the evidence presently available.


Both Hoffman and Crick were finally forced to conclude that all purely physicalist theories of consciousness had failed to provide illumination and that the state of consciousness could not be explained in neurological terms, a conclusion powerfully endorsed for more than three decades by distinguished British neuroscientist Raymond Tallis in his long opposition to what he terms “Darwinitis.”10 In short, consciousness is simply not derivable from physical laws but remains an inexplicable phenomenon of the human endowment which we are simply left to wonder at. To suggest otherwise, writes philosopher David Bentley Hart, is to fall into the trap of a “misapplication of quantitative and empirical terms to unquantifiable and intrinsically non-empirical realities.”11 This indicates that vague, would-be Darwinian attempts to imagine consciousness arising as an “epiphenomenon” of other physiological processes are misconceived. In fact, not being able to identify the precise biological pathway leading to the claimed “epiphenomena” disqualifies this contention as a bona fide theory and relegates it to the status of little more than magical thinking (which I define as postulating an effect without an identifiable agent or cause).


Deconstructing Darwinian Postulates

It cannot be denied that there are philosophers content to follow the Darwinian line and even to become Darwinian apologists (and indeed cheerleading eulogists — such as Daniel Dennett). But there are very many more who feel a vocational duty to deconstruct Darwinian postulates and unmask their debatable pretensions. Remarkably, Richard Spilsbury felt so strongly on this point that he took to task an older generation of philosophers for being cowed by materialist confirmation bias into not addressing the problem. His remarks were directed at the logical positivist philosophers, in the orbit of Sir Alfred Ayer and his famous Language, Truth and Logic of 1936, for what he saw as their culpable silence on Darwinism.


As a matter of historical record, no group of thinkers was more inclined to denounce propositions for being “non-sense” (in the philosophical sense of not having sufficient logical stringency to merit serious discussion) than the logical positivists. Yet no criticism of Darwinism issued from within that group. Spilsbury’s explanation for the omission seems all too plausible: “It is rather surprising that they [Darwinists] have largely been left alone by logical positivists in search of new demolition work. Perhaps neo-Darwinism has been saved from this [demolition] by its essential contribution to the world view that positivists share” 12 (i.e., materialism). Given that the underlying aim of the Ayerian philosophy was broadly speaking to make the world a safe place for positivism, by discouraging any form of mysticism or metaphysics, I find Spilsbury’s explanation entirely convincing. Nonsense can apparently be exempted from critique when it supports the materialist cause. 


It is uncertain how future generations will react to theories without evidential foundation, advanced simply at the paternalistic direction of scientists riding high on materialist hobbyhorses. Common experience suggests that many persons today are inclined to resist unsubstantiable theories in favor of their own tried-and-tested observations of reality. And the rise of intelligent design thought may be understood as a manifestation of this more precise, empirical mode of thinking. It cannot therefore be stressed strongly enough that the inference to a designing power (of some sort) is not, pace Dawkins, always anchored in an adherence to a particular revealed faith. People now are considerably less swayed by deference and 19th-century fideism (believing on trust). In fact, the (historically) paradoxical truth is that for growing numbers of people today it is science that points in the direction of an “unmoved mover” more than any “positive” or revealed religion — hence Anthony Flew’s well-publicized defection from non-theistic rationalism to a form of deism which he dubbed his “pilgrimage of reason.”


Inference to the Best Explanation

In that remarkable philosophic odyssey, the erstwhile president of the British Rationalist Society finally arrived at an understanding of the world as disclosed to him by natural theology, the multitudinous signatures of which he interpreted as empirical markers for a design which, pace Lucretius, David Hume, Darwin, Richard Dawkins, Stephen Hawking, and Lawrence Krauss, could not have arisen “autonomously” without a designer. For Flew as a professional logician, such a position simply represented the inference to the best explanation. He came to reject chance in the sense of the fortuitous configurations and re-configurations of matter postulated by Lucretius (and, mutatis mutandis, by Darwin with reference to the organic world). He found this a more rational explanation than that offered by those of Darwin’s intellectual heirs who seem to be more interested in cooking the books to protect materialist assumptions from theistic incursions than in facing up to the inadequacies of a science which dramatically contradicts their own philosophical case. For such ideologically tainted denials can sometimes seem to represent little more than a covert desire to throw a protective cordon sanitaire around the theory of a purely material genesis for the biosphere and so stifle further debate.


In Wallace’s Footsteps

The acceptance and promotion of what is strictly speaking non-discussible nonsense (in the Ayerian sense)13 by groups of people supposedly devoted to the truth wherever it leads provides a disquieting spectacle of intellectual integrity playing second fiddle to ideological commitment. In fact, the attempt by more doctrinaire scientific materialists to bounce lay persons into gainsaying their own rational judgments results in a truly incongruous situation. That is, when big science brings forward a host of findings which might most fairly be glossed as prima facie proofs of a higher agency, but thereupon proceeds to deny the most intuitively logical import of its own discoveries, unbiased men and women prove unsurprisingly resistant. That resistance arises from their ability to appreciate the true existential implications of said findings and their entirely consequential determination to cry “Foul!” to the scientists for trying to mislead them. Such persons are in effect following in Wallace’s footsteps, without of course in most cases being fully aware of the historical recapitulation. And this in turn furnishes a very good argument why Wallace should not be erased from the Darwinian narrative. Indeed, welcome historical revisions have been set in train in the last decade, much of it from the pen of Michael Flannery.14


What is impressive about Wallace’s testimony is the without-fear-or-favor intellectual independence it reveals. He suffered no disabling sense of self-consciousness about doing his U-turn from his earlier opinions. He simply accepted the unexceptional fact that persons’ opinions will change over time according to how they come to revisit evidence on more mature reflection. Wallace was, as Frank Turner once put it, primarily a disinterested student of life with no interest in orthodox posturing, even after numerous honors had been bestowed upon him later in life.15


Darwin, on the other hand, found himself in a very different situation, being oppressively aware of the luster of the family name, especially as it pertained to his grandfather, Erasmus. His insistence that his theory had to be true for the sake of personal and family honor may do much to explain his state of obdurate denial when coming up against the many counter-indications to it which he encountered, even from close colleagues such as Thomas Huxley. His intransigence in facing opposition seems to have stemmed from a form of duelist’s point d’honneur. This attitude of mind had already been detectable in the way that he had worked at a break-neck pace to produce the manuscript of the Origin for publication when, after receiving Wallace’s famous Ternate Letter in 1858, he sensed a competitor snapping at his heels.16 It was clearly important to him to be able to have the Darwin imprimatur embossed on his evolutionary ideas. In that way he could both underscore his own status amongst his peers and also be seen to be consummating the glorious tradition of evolutionary speculation inaugurated by his grandfather. For Darwin was for all his adult life concerned with a peculiarly familial construction of reality, the truth-value of which he never questioned. He framed his life’s work as a consummation of his grandfather’s endeavors to prove evolution — which was why he was so gratified to be able to advance what he took to be a mechanism to account for evolutionary ideas first advanced by Erasmus Darwin.


No Intellectual Pedigree

By contrast, Wallace had no intellectual pedigree to live up to. Natural selection was only one part of his life as a naturalist and intellectual,17 and he was well able to keep things in perspective. Family piety was simply not a consideration for him, since his grandfather had not been a famous naturalist pushing the envelope ever further in quest of illumination of the unknown. For that reason, I find that there is more trust to be placed in Wallace’s cool-headed testimony than in Darwin’s desperate denials that there “could be any other explanation.” Wallace was his own man, and this bestowed on him the inner strength to follow the evidence where it led him without feeling the need to trim his position in apprehension of how others might react. He seems not to have felt anything like the need shown by Darwin to impress public opinion or pose as a Great Man of Science. And this, I would argue, makes his testimony concerning the fatal weakness of the theory of natural selection all the worthier of heed.


Notes

1. Tom Wolfe, The Kingdom of Speech (London: Jonathan Cape, 2016), p. 20.

2. Michael Ruse, On Purpose (Princeton: Princeton UP, 2018), p. 182.

3. On Purpose, p. 182.

4. Anthony O’Hear, Beyond Evolution: Human Nature and the Limits of Evolutionary Speculation (Oxford: Clarendon, 1999), p. 65.

5. Beyond Evolution, pp. 213-14.

6. Anthony Flew, There IS a God (New York: Harper Collins, 2007), p. 81.

7. Donald D. Hoffman, The Case Against Reality: How Evolution Hid the Truth from Our Eyes (London: Penguin, 2020), pp. 1-21, citation p. 6.

8. The Case Against Reality, pp. 60-61.

9. See also on this general point Steve Taylor, Spiritual Science: Why Science Needs Spirituality to Make Sense of the World (London: Watkins, 2018).

10. See for instance Tallis’s Aping Mankind: Neuromania, Darwinitis and the Misrepresentation of Humanity (Durham: Acumen, 2011).

11. David Bentley Hart, Atheist Delusions: The Christian Revolution and Its Fashionable Enemies (New Haven/London: Yale UP, 2009), p. 7.

12. Richard Spilsbury, Providence Lost, p. 21.

13. I repeat that I am using this term in the strict philosophical sense of a proposition admitting of no form of rational analysis which could form a legitimate part of discursive practice.

14. See his Nature’s Prophet: Alfred Russel Wallace and His Evolution from Natural Selection to Natural Theology (Alabama: Alabama UP, 2018).

15. Frank M. Turner, Between Science and Religion: The Reaction to Scientific Naturalism in Late Victorian England (New Haven and London: Yale UP, 1974), pp. 72-73.

16. Wallace had dispatched a letter to Down House laying out very similar evolutionary ideas to those hit upon by Darwin himself, and this essentially bounced Darwin into publishing his Origin of Species only one year later (on November 24, 1859).

17. Wallace lived well into the early 20th century, when he made a considerable name for himself by his contributions to cosmology and, in a broader sense, to debates in the capacity of what we would now term a public intellectual.