
Tuesday 9 January 2024

Yet more on how ID is already mainstream.

 

On the maths of Darwinism.

 

A peek at a "science-based morality"?

 

Not merely intelligent but ingenious design II.

 Nature Reveals Not Just Design but Genius


Almost 50 years ago, physicist Steven Weinberg wrote that “[t]he more the universe seems comprehensible, the more it also seems pointless.” But is our universe really just a meaningless accident? Or can we detect true genius by studying its workings? On a new episode of ID the Future, we are pleased to share the first half of an interview with Dr. Jonathan Witt on the Denison Forum podcast about a cosmos charged with meaning and purpose. In their book A Meaningful World, Dr. Witt and co-author Benjamin Wiker develop a philosophical argument that the more we learn about the universe, the more it seems laden with meaning. Dr. Witt discusses this argument with host Mark Turman. 

In Part 1, Witt shares his personal journey of faith and notes why he became skeptical of Darwinism. He discusses why he and Wiker wrote their book, describing the volume as an antidote to the materialist thinking that has dominated academic and scientific circles for the last 150 years. Witt explains that after studying the hallmarks of genius in humans, they looked for the same characteristics in nature, finding bountiful examples of the same challenges, surprises, mystery, and elegance one expects from a work of genius.

Download the podcast or listen to it here.

On the price of convergent serendipity.

 Convergent Evolution: An Argument That Comes at a Price


Can the laws of nature explain the biological information in human beings and other creatures? In his recent book The Compatibility of Evolution and Design, theologian Rope Kojonen argues that they can. My colleagues and I reviewed the book in the journal Religions and have been critiquing it here.

I turn now to convergent evolution, which is Kojonen’s strongest positive argument for the laws of form, or what he calls the “library of forms.” This argument is significant because these laws of form play a crucial role in Kojonen’s positive case for design. In his view, the laws of form arise from designed laws of nature and, in turn, they vitally shape the “fine-tuned” preconditions that help make evolution possible. So, Kojonen’s convergence argument is a crucial part of his case for design. It also plays a key role in his account of how design supplements evolutionary processes in just the right way.

To understand why this is problematic, it helps to know more about convergence. Kojonen says that convergence means “the independent evolution of the same biological outcome in two or more different lineages, beginning from different starting points” (Kojonen 2021, p. 125). He notes, for example, that “dolphins and sharks have similar streamlined bodies and dorsal fins, even though dolphins are mammals and sharks are fish.” He also says that “paddle-shaped limbs for swimming have evolved independently seven times, and a structure as complex as the eye has evolved independently 49 times…” (p. 125). The word “convergent” is used for such cases because, in general, multiple lines of evidence — usually from genetics, paleontology, biochemistry, systematics, and similar fields — indicate that the similar features cannot be coherently traced to a single common ancestor that possessed them. Under the assumption of common ancestry, such facts are anomalies, so evolutionists attribute them to convergent evolution.

A Stacked Deck

Kojonen sees this convergence as evidence that laws of form “play a significant role” in helping evolutionary processes cluster around similar solutions (p. 125). He comes to the conclusion that convergence shows “functional constraints have a big effect on the evolution of life like that on Earth” (p. 127). The general idea, expressed as a rhetorical question, seems to be: If the same solutions came up independently over and over again, doesn’t that suggest that the deck was probably stacked to help evolution succeed? 

The first problem is that convergence requires certain complex proteins, traits, and systems not only to evolve but to evolve independently more than once. If proteins are rare and isolated in sequence space (as our review establishes) and the chances of even a single short protein evolving once in the whole history of the earth are prohibitively low, then, all other things being equal, the chances of similar proteins evolving independently more than once are lower still. The problem is amplified when scaled up to protein complexes, cell types, tissues, and organs, which again shows why the strength of the scientific evidence is crucial. And if unguided evolution were not the cause of convergence, Kojonen’s argument that convergence supports the reconciliation of evolution and design would also fall apart.
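
To see the point in rough quantitative terms (a back-of-the-envelope illustration, not a calculation from the review itself): if the probability that a given functional protein arises even once by undirected search is some small value p, and the separate origins are treated as statistically independent events, then

\[ P(\text{two independent origins}) \approx p^{2} \ll p \]

so each additional independent origin multiplies the improbability by another factor of p.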

A Dilemma

Second, and more importantly, Kojonen’s model is caught in a dilemma. The problem is a version of “Sober’s Paradox,” a term used by philosopher of biology Paul Nelson (Nelson 2022). Kojonen’s commitments about common ancestry and his commitments about convergence are at odds with each other, and his attempt to affirm both undermines the model from within.

Accept Convergence, Lose Common Ancestry

For instance, if convergence of forms is the result of being constrained by the laws of physics and chemistry, then Kojonen’s co-option response to Behe’s argument about irreducible complexity loses some of its power. This is because co-option only makes sense if there are similar protein parts in other systems that could be changed into the specialized parts needed for the bacterial flagellum. Kojonen says this about the flagellum: “The fact that similar parts exist in other systems, for example, does show that evolution is possible” (p. 118). But with “convergent evolution,” parts or systems that are even more complicated than the flagellum of a bacterium can develop without any similarity or common ancestry. If that’s true, what real power does co-option have?

Let’s also think about how true evolutionary convergence hurts Kojonen’s case that proteins evolved over time. As the story goes, mutation and natural selection can transform one functional protein into another. But if convergence is happening, wouldn’t it be easier for evolution to just make a new protein? Otherwise, if it is just as likely that evolution can lead to big changes as it is to lead to small ones, why talk about gradual changes moving one functional protein to another? Again, the argument has lost its power.

As we’ve seen, Kojonen’s belief in evolutionary convergence hurts the case for common ancestry in standard evolutionary theory. Evolutionary biologists usually think that similar structures can best be explained by having a common ancestor with a similar structure, i.e., it is far more likely that a complex trait evolved once instead of twice. If complex features are just as likely to appear on their own, then it is very hard to prove that two organisms share a common ancestor (Luskin 2017).

Accept Common Ancestry, Lose Convergence

What about the other side of the dilemma? If Kojonen accepts common ancestry, then what follows for his case for convergence and, by extension, his case for the “laws of form”? Two results seem to follow. First, he can no longer explain a great deal of biological phenomena, since he holds that convergence is “ubiquitous.”

Second, in Kojonen’s model, convergence is part of his case for the laws of form. And these laws play a crucial role in the “fine-tuned” preconditions that help make evolution possible. Given that these preconditions arise from designed laws of nature, they play a vital part in Kojonen’s overall account of design. But if Kojonen accepts common ancestry (and its standard justification), then he loses a crucial element of his particular account of design. This strikes at the heart of his model, whose purpose is to defend a certain view of design and its compatibility with evolutionary theory.

So Kojonen’s model has internal inconsistencies. He is basically stuck between a rock and a hard place. Kojonen’s understanding of (and justification for) “design” conflicts with both his own reasoning (about co-option and protein evolution) and the justification of common ancestry, which is a mainstay of evolution. So while Kojonen’s study of the laws of form is one of the most interesting ways he looks at design, this argument comes at a very high price.

References

Branscomb, Elbert, and Michael J. Russell. 2018. “Frankenstein or a Submarine Alkaline Vent: Who Is Responsible for Abiogenesis?: Part 1: What Is Life-That It Might Create Itself?” BioEssays: News and Reviews in Molecular, Cellular and Developmental Biology 40 (7): e1700179.
Djamgoz, Mustafa B. A., and Michael Levin. 2022. “Bioelectricity: An Update.” Bioelectricity 4 (3): 135–135.
Dobson, Christopher M. 2004. “Chemical Space and Biology.” Nature 432 (7019): 824–28.
Ellis, George. 2023. “Quantum Physics and Biology: The Local Wavefunction Approach.” arXiv [quant-ph]. http://arxiv.org/abs/2301.06516v10


Dr. Hector Zenil vs. The Sphinx.

 

Monday 8 January 2024

The thumb print of JEHOVAH is more obvious than ever?

 

The first humans are just as human as present day humans?

 Childhood in the Ice Age — What Was It Like?


In Aeon late last winter, University of Victoria archaeologist April Nowell offered insights into the lives of children in the Paleolithic era, roughly 40,000 to 10,000 years ago. The surprising thing is how much we actually know about that.

Nowell, author of Growing Up in the Ice Age (Oxbow Books 2021), points out that, in addition to the Lascaux-level archeological finds that make world news, a wealth of additional information, taken together, tells us more than we might have expected about Stone Age life.

For example, footprints embedded in soft earth or mud in and near a cave tell us that a family, burning bundles of pine sticks to light their way, crawled through a cave called Grotta della Bàsura 14,000 years ago. The footprints belonged to two adults, an 8-to-11-year-old, a 6-year-old, and a 3-year-old.

Having reached a point now known as the “Sala dei Misteri,” they left signatures of their time there: “While the adults make charcoal handprints on the ceiling, the youngsters dig clay from the floor and smear it on a stalagmite, tracing their fingers in the soft sediment. Each tracing corresponds to the age and height of the child who made it: the tiniest markings, made with a toddler’s fingers, are found closest to the ground.”

Then They Left

Their pine torches left charcoal traces on the walls. What were they doing? We will never know for sure, but it seems like a ritual of some sort.

We know other things about the lives of children back then as well. One is that they had to learn the art of making stone tools (knapping). Examining the masses of struck-off fragments, archeologists can tell which ones were produced by novices who had not yet perfected the art.

But, says Nowell, we have evidence of children at play too:

Other studies of footprints, this time from 13,000-year-old sites in Italy and France, document children and teens running around playing tag, making ‘perfect’ footprints the way kids do today at the beach, and throwing clay balls at each other and at stalagmites — some of the pellets missed their targets and remain on the cave floor. Skills were honed through play in other ways: at Palaeolithic sites in Russia, researchers found 29 clay objects that, by analysing traces of fingerprints, were determined to be made by children between the ages of six and 10, and adolescents between 10 and 15. Ethnographically, we know that children often begin to learn ceramics by first playing with clay, making toy animals and serving bowls.

APRIL NOWELL, “CHILDREN OF THE ICE AGE,” AEON, 13 FEBRUARY 2023

And Sadder Evidence as Well

One young child from 10,000 years ago was buried in clothing with hundreds of beads lovingly sewn in.

When telling us of our ancestors, researchers often hold out the hope that the information they painstakingly accumulate will shed light on human development. From the fragments gathered so far, it seems we have no evidence for a history of the human mind, only the history of human technology.

It's a social contagion?

 

Atheism fails on its own terms?

 

On junk science re: junk DNA?

 

Sunday 7 January 2024

On separating real from apparent design in nature

 Stephen Meyer: Evidence of Mind in the Natural World


Can we scientifically detect the activity of a mind behind the universe? On a new episode of ID the Future, philosopher of science Dr. Stephen Meyer answers this question and more in the concluding hour of a new two-hour interview on various topics related to his work and books. The interview was recorded in the fall of 2023 by Praxis Circle, a worldview-building organization that promotes open dialogue around life’s biggest questions. The word praxis harkens back to Latin and Greek as a word for practice, action, or doing. So praxis refers to the process of interaction between our worldview — our conception of reality, our view of the world — and our practice of living and acting in it. It’s an interesting mental space to begin a dialogue.

A word on the format of the interview. The interview host is Doug Monroe, and you’ll hear him at various intervals. However, the discussion was recorded specifically to be broken up into 39 short videos, so most of the time you won’t hear the question being asked — just Dr. Meyer’s response. The questions he answers are often connected and follow a logical progression exploring Dr. Meyer’s books and arguments. Plus, Dr. Meyer usually begins his answer by paraphrasing the question, so you’ll have a good idea what he’s talking about as he begins each new answer.

In case it’s helpful, here’s an outline of the topics covered by Dr. Meyer in this second hour of the interview:

How Christianity Sparked the Scientific Revolution
How Human Fallibility Led to Development of a Scientific Method
Scientific Materialism and Philosophical Skepticism
Where Intelligent Design Stands in the Scientific Community Today
The Argument of Signature in the Cell
The Universe’s Origin and Quantum Physics
Every Philosophical System Posits a Prime Reality
Cosmological Data that Points to God?
Our Fine-tuned Universe
Fine-tuning vs. the Multiverse
Applying Occam’s Razor
The Low Creative Power of Darwinian Mutations
The Problem with Theories of Everything
Theistic Evolution: an Oxymoron
Do Miracles Violate the Laws of Nature?
A Good Theology of Nature
Society’s Ultimate Problem
Download the podcast or listen to it here.

Saturday 6 January 2024

Winning in an empty stadium?

 

Chess: a brief history

 

How science enables an intellectually satisfying theism.

 Stephen Meyer: Scientific Arguments for a Theistic Worldview


Are there strong scientific arguments for theism? Is there such a thing as objective morality? How is a worldview built? On a new episode of ID the Future, philosopher of science Dr. Stephen Meyer answers these questions and more in the first hour of a new two-hour interview on various topics related to his work and books. The interview was recorded in the fall of 2023 by Praxis Circle, a worldview-building organization that promotes open dialogue around life’s biggest questions. The word praxis harkens back to Latin and Greek as a word for practice, action, or doing. So praxis refers to the process of interaction between our worldview — our conception of reality, our view of the world — and our practice of living and acting in it. It’s an interesting mental space to begin a dialogue.

A word on the format of the interview. The interview host is Doug Monroe, and you’ll hear him at various intervals. However, the discussion was recorded specifically to be broken up into 39 short videos, so most of the time you won’t hear the question being asked — just Dr. Meyer’s response. The questions he answers are often connected and follow a logical progression exploring Dr. Meyer’s books and arguments. Plus, Dr. Meyer usually begins his answer by paraphrasing the question, so you’ll have a good idea what he’s talking about as he begins each new answer.

In case it’s helpful, here’s an outline of the topics covered by Dr. Meyer in this first hour of the interview:

Founding of Discovery Institute
Definition of worldview
Dr. Meyer’s own worldview journey 
Epistemology and the Judeo-Christian idea of intelligibility
Mind-body problem of consciousness
The need for objective, rational arguments for theism
Importance of philosophy
Materialism, relativism, and objective morality
The fact/value divide
Newton and Leibniz debate: gravity and God
The nature of information 
This is Part 1 of a two-part interview. Download the podcast or listen to it here.

Leviathan.

 

James Tour vs. Lee Cronin: more post-game commentary.

 

Friday 5 January 2024

Science for sale?

 

The fossil record bears witness to design?

 

Yet another clash of Titans: a new opening?

 

Birds aren't little dinosaurs?

 Fossil Friday: New Evidence Against Dinosaur Ancestry of Birds


This Fossil Friday we revisit the ancestry of birds, with the featured skeleton of the Late Cretaceous bird Hesperornis gracilis, exhibited at the Natural History Museum in Karlsruhe, Germany. Hesperornis was a flightless and toothed marine bird, somewhat similar to modern penguins, and lived contemporaneously with some of the raptor dinosaurs that are well known from the Jurassic Park movies.

Few hypotheses in evolutionary biology have become as popular among lay people as the postulated ancestry of birds from bipedal dinosaurs. Indeed, many a school kid will tell you proudly that birds simply are surviving dinosaurs. The theropod ancestry of birds has become an evolutionary dogma that is almost universally accepted and taught as the consensus view. However, there are a few dissenters, among whom the paleornithologist Alan Feduccia from the University of North Carolina certainly is the most prominent. He famously coined the term “temporal paradox” for the fact that the fossil record of the assumed theropod stem group of birds tends to be younger than the oldest actual birds. Last week I discussed new evidence that makes this temporal paradox much worse (Bechly 2023).

Beyond the Fossil Record

However, Feduccia’s critique of the dinosaur-bird hypothesis is not based just on problems with the fossil record, but also on conflicting evidence from comparative anatomy. Now, he presents new evidence that still more sharply contradicts the consensus view. One of the arguments for a dinosaur-bird relationship has been the presence of a so-called “open” acetabulum, which “is a concave pelvic surface formed by the ilium, ischium, and pubis, which accommodates the head of the femur in tetrapods.” Feduccia (2024) studied the acetabulum in early basal birds and found that their acetabulum tends to be partially closed and an antitrochanter (a process of the ischium or ilium) is absent. This casts strong doubt on one of the key characters for a dinosaur-bird relationship and suggests that this hypothesis must be re-evaluated. The fact that microraptorids and troodontids “also exhibit partial closure of the acetabulum and lack an antitrochanter is a further incongruity in that these taxa should exhibit ‘typical’ theropod pelvic girdle modifications for terrestrial cursoriality.” This could support the view of several experts (e.g., Martin 2004, and various studies cited by Feduccia) that these maniraptoran taxa represent secondarily flightless birds rather than theropod dinosaurs.

Feduccia concluded his new study with this remarkable statement:
                 The hypothesis that birds are maniraptoran theropod dinosaurs, despite the certitude with which it is proclaimed, continues to suffer from unaddressed difficulties … Until problems like those discussed here — and many others that continue to be dismissed either by appeal to “consensus” or through overconfidence in the results of phylogenetic analysis of morphological data — have satisfactorily been resolved, skepticism toward the current consensus and continued investigation of alternative hypotheses are needful for the promotion of critical discourse in vertebrate phylogenetics and evolutionary biology.
                  Birds and dinosaurs may not represent arbitrary chunks of an evolutionary grade after all, but may instead represent distinct natural kinds. At the very least, the evidence seems to be much more ambiguous, weaker, and less convincing than most evolutionary biologists love to pretend.



Germania and the rise of the west.

 

The inability of chance and necessity to navigate the fitness landscape.

 Trapdoors in the Fitness Landscape: Scientists Revive Worries About an Evolutionary Metaphor  


Sewall Wright concocted one of those metaphors in science that lingers long past its “best by” date. In 1932, he coined the term “the fitness landscape.” He envisioned a mythical land of peaks and valleys, with the peaks indicating higher fitness, and valleys populated by evolving organisms starting out on their journeys toward progressively higher fitness levels. Impelled by the struggle for existence, organisms would climb higher till reaching a peak. One difficulty with this picture appeared soon after the metaphor gained popularity: to get to a higher peak, an organism would have to climb down, lowering its fitness on the way to a neighboring peak. Some workarounds were concocted, but the evocative metaphor lent itself to 3-D graphs and formulas of positive selection, giving evolutionary biologists hopes of empirical rigor as they measured their research organism’s progress up the landscape.
                
Stability of the Landscape

Unwarranted assumptions are the bugaboo of clever models like this. One is the stability of the landscape. Does the hypothetical landscape undulate over time, such that a peak at one epoch becomes a valley in another? After all, the dynamic environment is oblivious to the needs of organisms. How quickly does a given habitat change? How can evolutionists be sure that fitness for a savannah does not become a detriment if the population finds itself in a habitat undergoing desertification? For reasons like this, Mustonen and Lässig in 2009 dubbed it a fitness “seascape” instead of a landscape.

Another of Wright’s assumptions was that the fitness landscape follows Gaussian curves consisting of smooth lines without discontinuities. Even if some of those Gaussian curves rose steeply like a cliff, Darwin defenders like Richard Dawkins could get their organisms up to the summit of Mount Improbable by envisioning a gradual staircase from another direction, allowing natural selection to maintain Darwin’s narrative of the accumulation of small, incremental steps.

But what if the Gaussian assumption is wrong? What if, instead, the structure of the landscape is like a block of Swiss cheese, flat and riddled with holes that a blind watchmaker cannot foresee? The possibility of a “holey” fitness landscape becomes credible when one considers dependent traits. These “quantitative traits” are made up of components that must cooperate to work. Without all of them emerging simultaneously, no organism can ascend to higher levels. With dependent traits in operation, such as in the case of powered flight, a mutation or defect in one can send the organism to immediate extinction — as if, as in the old board game, a trapdoor opened underneath.
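
To make the trapdoor picture concrete, here is a minimal sketch of a holey landscape, written purely for illustration (it is not the PNAS authors' simulation, and the trait count, viability threshold, and mutation step are assumed values chosen for the example): fitness is flat wherever every interdependent component of a quantitative trait clears a viability threshold, and zero wherever any single component fails.

```python
import random

# Illustrative "holey landscape": a phenotype is viable only if every
# interdependent trait component clears a threshold. Fitness is flat
# (average) everywhere viable and zero (a "hole") wherever any component fails.
N_TRAITS = 10      # number of interdependent components (assumed for illustration)
THRESHOLD = 0.2    # minimum level each component must reach (assumed)

def fitness(phenotype):
    """Flat fitness with holes: a single failed component is lethal."""
    if any(t < THRESHOLD for t in phenotype):
        return 0.0   # fell through a "trapdoor"
    return 1.0       # otherwise average fitness; no peak to climb

def mutate(phenotype, step=0.1):
    """Perturb one randomly chosen component by a small random amount."""
    i = random.randrange(len(phenotype))
    new = list(phenotype)
    new[i] = min(1.0, max(0.0, new[i] + random.uniform(-step, step)))
    return new

# Starting from a viable phenotype, the population drifts neutrally on the
# flat surface until a mutation drops one component below the threshold.
phenotype = [0.5] * N_TRAITS
for generation in range(10_000):
    candidate = mutate(phenotype)
    if fitness(candidate) == 0.0:
        print(f"Hit a hole (inviable phenotype) at generation {generation}")
        break
    phenotype = candidate  # neutral drift: every viable step is accepted
else:
    print("Still viable after 10,000 generations of drift")
```

On such a surface there is no gradient for selection to climb; change is neutral drift punctuated by lethal holes, which is the pattern the paper's authors argue matches the quantitative genetic data.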

This Is Not a New Worry

Sergey Gavrilets thought of this back in 1997 and raised it again in 2004. Now, worries about a “holey landscape” have been given new emphasis in a paper in PNAS — “Drift on holey landscapes as a dominant evolutionary process.” The four authors, from universities in North Dakota, California, and Paris, complain that this worry has been largely neglected.

Our understanding of selection has been strongly shaped by Sewall Wright’s conceptualization of an evolutionary landscape, with populations moving from areas of low fitness to areas of higher fitness. While the one- and two-trait landscapes Wright originally described have been criticized as unrealistic, including by Wright himself, the general metaphor has nonetheless guided much of evolutionary thought

What if the metaphor has “guided much of evolutionary thought” astray? Then, the “understanding of selection” has been shaped awry.

An important conclusion from this research is that evolutionary dynamics on simple landscapes often fail to properly predict evolution on higher dimensional landscapes. Empirical research into quantitative traits has been slow to incorporate this need for a higher-dimensional perspective.

Perhaps most conceptually unfamiliar and unintuitive to researchers focused on quantitative traits are holey landscapes (Fig. 1C; ref. 16). Holey landscapes are high-dimensional evolutionary landscapes that consist of trait combinations that are either of average fitness or that are inviable. This results in flat landscapes with holes at inviable or low fitness phenotypes (Fig. 1C)

Quantitative traits comprise “many aspects of physiology, behavior, and morphology,” they say, illustrating them with things like “most behaviors, physiological processes, and life-history traits.”

To investigate whether such traits tend to be distributed on a Gaussian landscape or a holey landscape, they looked at genetic variations in sixty species, including animals and plants. They found the results to be consistent with “high-dimensional, holey landscapes” instead of simplistic single-peak depictions or “badlands” landscapes consisting of rolling hills and gentle valleys. This suggested to them that

the leading conceptualizations and modeling of the evolution of trait integration fail to capture how phenotypes are shaped and that traits are integrated in a manner contrary to predictions of dominant evolutionary theory. Our results demonstrate that our understanding of how evolution has shaped phenotypes remains incomplete and these results provide a starting point for reassessing the relevance of existing evolutionary models.

Scrap and Start Over?

One way to reassess the relevance of an existing model is to scrap it and start over. The authors are not ready to try that, but they do point to serious shortcomings of conventional models: for instance, missing the holes.

Even more importantly, it is unknown what the topography of landscapes is for natural populations. While portions of selection surfaces and landscapes can be directly estimated, these estimates may differ from the full landscape due to several factors. These include the omission of fitness-affecting traits, incomplete estimation of fitness, and insufficient power to estimate non-linear selection coefficients.

In effect, modelers using Wright’s metaphor are building imaginary landscapes in thin air instead of working with real plants and animals on the real earth that must eat and survive. Simplistic Darwinian models of selection presuppose that beneficial mutations add up. This is not necessarily the case. A mutation that benefits one trait can be a detriment to another — an example of negative pleiotropy. For realism, the whole animal must be considered, lest negative correlations open up a trapdoor that ends that organism’s progress, sending it down into one of those “inviable” outcomes. The authors call for better answers to the “crucial questions we have raised.”

Why would a holey landscape be flat? The authors explain what the model predicts for quantitative traits:

This topography stems from the multivariate nature of phenotypes: while there may be continuous fitness differences in two dimensions, fitness gradients will create holes in the landscape and peaks will average out when additional traits are considered. Unfortunately, predictions about quantitative trait evolution on holey landscapes are not clear and have rarely been pursued (e.g., ref. 18).

This means that traits are not isolated in one or two dimensions but are interdependent with other traits in additional dimensions. This multi-dimensional consideration of traits on a flat landscape suggests that organisms are already at their optima; the fitness peaks have averaged out. The only way left is down, falling through a hole like a trapdoor if a trait on which other traits depend changes.

Irreducible Complexity and Devolution

The picture fits Michael Behe’s concepts of irreducible complexity and devolution. No part of a mousetrap should be considered in isolation. It may have a great spring, but if the other parts are weak or absent, the trap will not catch mice. Neo-Darwinism’s focus on positive selection of individual genes or traits, therefore, misses the holistic multi-dimensional view of organisms as functional wholes, to borrow Douglas Axe’s phrase. To function, a multi-dimensional trait must reach a threshold of coherence among its parts. These can be considered a list of design requirements.

Can neo-Darwinism recover from the neglected view of holey landscapes? The authors do not offer any hope, other than to wish that better landscape models may be forthcoming. Even with that concession, they remain pessimistic.

Our implementation of Wright’s metaphor represents only one of many possible evolutionary models. It is possible that unmodeled alternative landscapes may produce populations for which variation is distributed in a manner similar to holey landscapes and empirical estimates…. Importantly, and as mentioned previously, much of the exploration of evolution of quantitative traits has focused on simple landscapes like we have implemented here. Thus, it also is an open question what different models of selection “look” like when implemented for higher-dimensional phenotypes. For example, rugged landscapes of high dimensionality may give rise to holey landscapes as peaks average out and valleys are inviable…. Nonetheless, the close correspondence between empirical data and populations simulated as evolving on holey landscapes suggest that our understanding of quantitative trait evolution remains incomplete.

Regardless of any other possible devices to rescue the picture of progress up fitness peaks, they assert that “our finding that observed patterns of quantitative genetic variation across taxonomic groups are not consistent with traditional evolutionary models stands.”

This disconnect between observed patterns of multivariate variation and expectations under conventional models of selection suggests that Wright’s metaphor of landscapes — and the subsequent implementation of this metaphor as Gaussian surfaces — may have contributed to an incomplete understanding of how selection has shaped phenotypes. A potential contributor to this problem has been the lack of clear alternative explanations besides a simple null hypothesis of drift with no selection. Moving forward, clear development of additional alternative models of the action of selection and evolution in multivariate space is needed. This will allow the comparison of simulated populations to empirical data as we have done here.

Ultimately, our findings suggest that evolutionary biologists need to better consider the effects of high dimensionality as simple standard evolutionary models are not consistent with available data for quantitative traits.

Design Trumps Darwinism

Science is supposed to be about observable, quantitative data, isn’t it? If observations do not fit Wright’s convenient metaphor, the metaphor must be revised or discarded. Intelligent design is consistent with flat optima and built-in mechanisms to detect and avoid holes (e.g., DNA error correction, immune systems, blood clotting). Once again, design with its emphasis on engineering specifications trumps Darwinism in the real world.






Yet another clash of Titans: beware of Greeks bearing gifts.

 

Cicero echoes our brother Paul.

 Cicero on Intelligent Design — Sound Familiar?


Yesterday was Cicero’s birthday. To celebrate, here’s my favorite quote from the Roman philosopher. From my book Finding Truth: 5 Principles for Unmasking Atheism, Secularism, and Other God Substitutes:

“Yet we don’t really need the latest findings from science to recognize that a mind is needed to explain the universe. In every age, people have realized that an intelligible universe must be the product of intelligence.”

In ancient Rome, the Stoic philosophers offered an argument from design that sounds very familiar to modern ears. In the century before Christ, the great Roman orator Cicero wrote, “When we see something moved by machinery, like an orrery [model of the planetary system] or clock or many other such things, we do not doubt that these contrivances are the work of reason.”

He then drew the logical conclusion: “When therefore we behold the whole compass of the heaven moving with revolutions of marvelous velocity and … perfect regularity …, how can we doubt that all this is effected not merely by reason, but by a reason that is transcendent and divine?”

Sounding almost biblical in his language, Cicero wrote, “You see not the Deity, yet … by the contemplation of his works you are led to acknowledge a God.”

Clearly, people in the ancient world were capable of “reading” the message of general revelation in nature. The opening theme in Romans 1 is that anyone can conclude that the created order is the product of an intelligent being.

Ps. Romans 1:20 NIV: "For since the creation of the world God’s invisible qualities—his eternal power and divine nature—have been clearly seen, being understood from what has been made, so that people are without excuse."

Thursday 4 January 2024

Junk DNA demystified.

 

Titan vs. Aspiring Titan.

 

College is a scam?

 

Theistic Darwinism is not an oxymoron? III

 Could Laws of Nature Give Rise to Platonic Forms?


Did the laws of nature give rise to “platonic” forms, which then constrain matter (and perhaps protein formation) in certain ways that make it easier for mutation and selection to search for and find biological forms? That question is thoughtfully posed by theologian Rope Kojonen in his recent book, The Compatibility of Evolution and Design. My colleagues and I reviewed the book in the journal Religions. Over the past two days, as part of a longer series, I’ve been looking at how Kojonen’s model would work practically. 

Here is a third interpretation of Kojonen’s model, which I want to consider now.

The laws of nature gave rise to “platonic” forms, which then constrained evolution in ways that allowed selection and mutation to build biological forms. These “forms” are “an emergent consequence of the laws of chemistry and physics.” In this interpretation, the laws created these forms. The forms themselves are more than simply laws and matter under a different guise; they are non-physical. In this view, laws generated these forms, which then shaped the physical tendencies of matter such that (with possible contingency and other factors in play) they produced biological information sufficient for selection and mutation to evolve all manner of proteins, protein machines, unique human abilities, and the like.

Denton’s Structuralist View 

Biologist Michael Denton’s structuralist view says that underlying structural principles govern the form of living things. Denton argues that these principles transcend the specifics of individual species and that the structural organization of living organisms is not merely the result of random processes but reflects a fundamental and innate order in nature. But how would this interpretation understand these “forms”? Kojonen helpfully writes:

[I]f we accept the idea of a platonic library of forms that makes evolution possible, it seems that evolution no longer explains the forms themselves, but only their actualization. In Wagner’s (2016) words, before evolution, the forms “already exist in a world of concepts, the kind of abstract concepts that mathematicians explore.” (p. 152)

On this interpretation, it seems that forms are abstract concepts that exist independently of and prior to evolution. They influence physical matter and its properties. Andreas Wagner’s statement about “abstract concepts” that exist “in a world of concepts” suggests that forms are not physical entities, forces, or patterns. They are non-physical phenomena that influence physical phenomena.

A Library of Forms

This viewpoint raises a variety of questions and concerns. First, since Kojonen describes the library of forms as “an emergent consequence” of physical laws, it is unclear how the laws of physics and chemistry — including gravity, electrostatics, the strong nuclear force, and so on — could (1) produce immaterial phenomena that (2) in turn exercise their own independent causal influence on matter and energy that are (3) still governed by the laws of nature. This is in part because it’s difficult to imagine how an emergent phenomenon can have its own independent causal powers over and above its ontological substrate. It’s also difficult to imagine how non-physical phenomena affect material phenomena that are apparently also fully governed by the material substrate that underlies them.

A second worry is that Kojonen’s viewpoint would most likely be seen as believable only by those with very specific background beliefs. Some people believe that non-physical phenomena can emerge from physical phenomena. (Some believe, for example, that brain states produce mental states.) Others believe that non-physical phenomena can exert a causal influence on physical phenomena. (For example, some believe that irreducible minds can cause brain states.) But it’s quite another matter to claim that physical phenomena gave rise to non-physical things, which then exercise independent causal power back on (other) physical phenomena — which are also still under the governance of the laws of physics and chemistry. If Kojonen’s account of the origin of biological information relies on this particular set of claims, then many thinkers will understandably find it unpersuasive.

Abandon Mainstream Physics and Chemistry?

Third, and perhaps most importantly, the “platonic” interpretation of Kojonen’s model deviates dramatically from mainstream scientists’ understanding of physical and chemical laws and their effects on the natural (and biological) world. If Kojonen’s idea necessitates abandoning mainstream physics and chemistry, it’s unclear why compatibility with mainstream biology is as valuable as Kojonen argues. In that case the model’s “mainstream” status appears arbitrary.

Finally, several of the issues mentioned above apply with equal or greater force when it comes to the emergence of human capabilities. Can the laws of nature produce platonic forms to explain ourselves? Could that explain our capacity for abstract thought, volition, and the like? For many reasons, this is doubtful.

Our AI overlords jockey for position.

 

Whither the black heterodoxy? II

 

Whither the black heterodoxy?

 

On sacrificing like a titan?

 

Yet more on fusion as the future of energy.

 

On scientism and epistemology

 

Wednesday 3 January 2024

The ministry of truth's public enemy no. 1 David Berlinski holds court.

 

Yet more on why "junk DNA" is junk no more.

 Casey Luskin On Junk DNA’s “Kuhnian Paradigm Shift”


Prevailing scientific assumptions often die hard, especially when they fit so neatly into an evolutionary view of the development of life on Earth. On a new episode of ID the Future, Dr. Casey Luskin gives me an update on the paradigm shift around the concept of “junk DNA.” 

Luskin explains that intelligent design theorists have long argued against the idea that non-protein coding DNA is useless evolutionary junk, instead predicting that it serves important biological functions. Year after year for over a decade, new evidence has emerged revealing such functions and vindicating ID scientists. Luskin summarizes several recent papers that have found specific functions for non-coding DNA, such as regulating gene expression, controlling development, and influencing epigenetic processes. He then reports on the latest new evidence: the function of short tandem repeats (STRs), previously considered “junk DNA.” Luskin also discusses the work of molecular biologist John Mattick, who has written recently about the shift in thinking about “junk DNA.” Luskin suggests a new way of looking at non-protein coding regions of DNA and concludes that, far from junk, these “highly compact information suites” are essential and serve a variety of important functions in the genome. Download the podcast or listen to it here

Technology is more predictive re: biology than physics?

 Paper Digest: Standard Engineering Principles as a Predictive Framework for Biology


In 2017, professor of engineering Gregory T. Reeves and engineer Curtis E. Hrischuk published an open-access paper in the Journal of Bioinformatics, Computational and Systems Biology titled “The Cell Embodies Standard Engineering Principles.” They explained how the cell fulfills different sets of standard engineering principles (SEPs). This paper builds on Reeves and Hrischuk’s earlier publication that surveyed engineering models for systems biology. Once more, these authors argue that engineering concepts can be used as a predictive and successful framework for biology.

Human design and engineering practice have resulted in lists of standard engineering principles that must be followed to produce efficient, robust systems. These principles have been refined through countless engineering projects, and Reeves and Hrischuk demonstrate that these same SEPs are used in biology. They are therefore useful to biologists as an expectation framework for anticipating cellular systems:

The presence of engineering principles within the cell implies that SEPs can be used as starting point to formulate hypotheses about how a cell operates and behaves. In other words, we should pragmatically approach the cell as an engineered system and use that point of view to predict (hypothesize) the expected behavior of biological systems. We call this approach the Engineering Principle Expectation (EPE).

Several Categories of SEPs 

In the paper, several categories of SEPs are examined: general engineering principles (GEPs), hardware/software codesign principles (CDEPs), and robotic engineering principles (REPs). For each of these categories, the authors give specific examples of how the cell conforms to the set of SEPs. The authors also develop a non-exhaustive list of SEPs for chemical process control engineering (CPCEPs), since no such list was available.

The comparison between cellular systems and engineered systems has strong implications for intelligent design. The reality that cells abide by the same engineering principles discovered in human design is highly significant. This finding is much better predicted by the hypothesis that biological systems were intelligently designed than by the alternative theory that a blind neo-Darwinian process gave rise to living systems.

For the category of general engineering expectations, the authors go over three principles in the main text. GEP1 states that the “development of engineered objects follows a plan in accordance with quantitative requirements.” The authors point out that the development of molecular machinery requires careful orchestration, including but not limited to decision-making, gene expression, protein synthesis, post-translational modification, the assembly of multicomponent complexes, and life cycle processes like cell division. Thus, cells embody GEP1. GEP2 states that “requirements are ranked according to cost effectiveness, and the development plan, which has an incremental structure, emphasizes the higher-ranked requirements.” This principle describes hierarchy, which results from top-down design where components are constructed and resources expended in accordance with higher system goals. To give a biological example, the authors note the prioritization of ATP in the cell. GEP3 states that “standards are used where available and applicable with every departure from applicable standards explicitly justified.” Biological examples include how all different types of cells share conserved features such as the genetic code, ATP, and a nearly universal central metabolism. Amino acids, nucleic acids, and some lipids might all be thought of as cellular standards from which deviations rarely occur.

Hardware/Software Co-Design Principles

Next, the authors discuss hardware/software co-design principles, starting with CDEP1. This is the principle of “partitioning the function to be implemented into small interacting pieces.” In the cell, cellular regulatory networks can be decomposed into autonomously acting modules that cooperate to accomplish a function. Even the basics of cellular physiology, where unique macromolecular structures such as chromosomes, membranes, and ribosomes exist, imply partitioning of function into small interacting pieces. Thus, cells abound with examples of autonomous players carrying out a specific role towards a greater purpose. CDEP2 is the principle of “allocating those partitions to microprocessors or other hardware units, where the function may be implemented directly in hardware or in software running on a microprocessor.” This principle underlies the benefit of having a separate processor for each function. In computer systems, manufacturing constraints preclude this, but the authors point out that the cell is able to realize the ideal of having each protein or complex operating independently as a unique unit of hardware.

Reeves and Hrischuk then describe REPs and CPCEPs. While going over each of those is beyond the scope of this article, the takeaway is that SEPs provide logic for understanding biological systems. By familiarizing themselves with these principles, biologists can enhance their research methodologies and improve their ability to predict and validate their experiments.

The Engineering Principle Expectation

Reeves and Hrischuk say that any complex system must adhere to SEPs. If it doesn’t, the outcome is catastrophic. Biological systems, which are more complex than any engineered system today, are no exception. When looking at a biological system, one should expect engineering characteristics. This can be thought of as the engineering principle expectation, a predictive model that can be used when looking at a biological system whose mechanistic details are not understood. Reeves and Hrischuk argue that it is crucial to apply engineering principles to understand and analyze biological systems. By doing so, researchers can gain insights into the underlying mechanisms and predict the behavior of these systems. Additionally, considering engineering principles can help in designing effective interventions or therapies for complex biological problems.

Theistic Darwinism is not an oxymoron? II

 Could Finely Tuned Initial Conditions Create Biological Organisms?


Is the arrangement of mass-energy at the beginning of all things sufficient to account for the origin of life, the diversification of life, our capacity for abstract thought, volition, spiritual communion, and more? At present, there seems to be very little reason to answer in the affirmative.

However, theologian Rope Kojonen, in an attempt to wed design and evolution, allows for this interpretation in his recent book, The Compatibility of Evolution and Design. My colleagues and I reviewed the book in the journal Religions and have been discussing it further in a series here. The laws and preconditions of nature are at the heart of Kojonen’s model. They are his proposed mechanisms of design, the linchpin of his project. Yesterday, we looked at the first of three interpretations of how Kojonen’s model would actually work. Today we will look at the second:

The laws of nature simply transmitted biologically relevant information sufficient to produce all biological complexity and diversity, including new proteins, protein machines, and the like. This biologically relevant information was “built in” to the mass-energy configuration at the Big Bang. The laws of nature did not create anything but rather were the media (or “carriers”) through which biologically relevant information was eventually expressed and instantiated in everything from proteins to bacterial flagella to human beings.

A Helpful Analogy 

Laws have the capability of transmitting information in some situations, but they lack the ability to generate biological information of the kind found in DNA and proteins, as we’ve already discussed. Philosopher of science Stephen Meyer develops this point with a helpful analogy in Return of the God Hypothesis:

[I]magine that a group of small radio-controlled helicopters hovers in tight formation over the Rose Bowl in Pasadena, California. From below, the helicopters appear to be spelling a message: “Go USC.” At halftime, with the field cleared, each helicopter releases either a maroon or gold paint ball, one of the two University of Southern California colors. Gravity takes over and the paint balls fall to the earth, splattering paint on the field after they hit the turf. Now on the field below, a somewhat messier but still legible message appears. It also spells “Go USC.”

Did the law of gravity, or the force described by the law, produce this information? Clearly, it did not. The information that appeared on the field already existed in the arrangement of the helicopters above the stadium in “the initial conditions.” Gravitational forces played no role in causing the information on the field to self-organize. Gravity merely transmitted preexisting information from the helicopter formation to the field below.

The information in the message was encoded in the original position of the helicopters. The laws of nature (and gravity in particular) were merely the “carrier” of this previously created information. The second interpretation of Kojonen’s view agrees with this perspective of the laws of nature but then supposes that all the information necessary for the origin of life, diversification of life, and accounting for human cognition was present in the initial conditions (positioning of matter and energy). This concept is comparable to playing pool, where a single strike of the cue ball can knock all the balls into their respective holes due to their positions on the table. This mechanism is highly implausible when applied to life because the initial conditions that would have had to be established to create such a system are extreme.
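
As a toy illustration of that point (my own sketch, not Meyer's), the following snippet encodes a message in the initial arrangement of the "helicopters" and then applies a fixed, deterministic rule standing in for gravity. The rule relocates the marks but adds nothing to the message; all of the information was already present in the initial conditions.

```python
# Toy model: a deterministic "law" transmits, but does not create, information.
message = "GO USC"                       # information resides in the initial arrangement
helicopters = list(enumerate(message))   # (position, character) pairs hovering "in the air"

def drop_straight_down(formation):
    """A deterministic rule: each paint ball lands directly below its helicopter."""
    return {position: char for position, char in formation}

field = drop_straight_down(helicopters)
print("".join(field[i] for i in sorted(field)))   # prints "GO USC"
# The mapping is one-to-one and fixed in advance, so the pattern on the field
# carries exactly the information already present in the formation above it.
```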

Six Objections from Meyer

Additional problems plague this “initial conditions” idea. In Return of the God Hypothesis, Meyer summarizes six objections.

[G]iven the facts of molecular biology, the axioms of information theory, the laws of thermodynamics, the high-energy state of the early universe, the reality of unpredictable quantum fluctuations, and what we know about the time that elapsed between the origin of the universe and the first life on earth, explanations of the origin of life that deny the need for new information after the beginning of the universe clearly lack scientific plausibility.

Let’s explore this a bit more. To understand the absurdity of proposing that initial conditions could, without additional intervention, account for the facts of molecular biology, consider again the pool analogy. The idea that unfavorable thermodynamic events could be stacked into the initial conditions would be like supposing that after the cue ball hit one ball, three balls went in immediately, but after ten minutes, three more went in, and finally all the balls went into the holes. This scenario is scientifically implausible because our experience with the laws of nature is that they work consistently through time, and only agents that can work outside of the system are able to cause new events to occur. Thus, after an initial causal event, processes that must overcome thermodynamic barriers do not subsequently occur on their own.

An Unknown Force in History

While the laws of nature can transmit information, the way they transmit it is consistent and constant. If the initial conditions could do something thermodynamically unfavorable after time has elapsed from an initial agent’s action, this would certainly be different from what we observe today. The laws would require the ability to select specific outcomes — i.e., to assemble specific molecules into these outcomes at specific points in time. This would require a process model running in the background and invoking the right actions at the right times — an unknown force that only seems to work at specific times in history. 

Tomorrow, we will look at the final possible interpretation of Kojonen’s model for the laws of nature.

Tuesday 2 January 2024

The obvious design of spiders

 

Theistic Darwinism is not an oxymoron?

 Physics and Chemistry Could Not Give Rise to Biology


The laws of nature provide stable conditions and physical boundaries within which biological outcomes are possible. Laws are, in effect, a chessboard. They provide a stable platform and non-negotiable boundaries. But they do not determine the movement of pieces or the outcome of the game.

Or do they? Rope Kojonen, a theologian at the University of Helsinki, argues for the compatibility of design and evolution. My colleagues Steve Dilley, Brian Miller, Casey Luskin, and I published a review of Kojonen’s thoughtful book, The Compatibility of Evolution and Design, in the journal Religions. In a series at Evolution News, we have been expanding on our response to Dr. Kojonen. Here, I will shift gears to analyze his claims about the laws of nature and their role in the origin of biological complexity and diversity.

Mechanisms of Design

The laws of nature are at the heart of Kojonen’s model. They are the mechanisms of design, the linchpin of Kojonen’s project to wed design and evolution. To evaluate his model, however, we need to be clear about what exactly his position is. Kojonen is not entirely clear about how the laws of nature (and initial conditions) are said to bring about the origin of life, the diversification of life, and human cognition. However, there seem to be at least three possible ways to interpret Kojonen’s model:

1. The laws of nature gave rise to “laws of forms” and other preconditions, which allowed selection and mutation (along with other processes) to create all biological complexity and diversity, including nucleotide sequences, new proteins, the assembly of protein machines, intricately engineered motility and navigation systems, and all the unique capabilities possessed by humans. In this view, the laws of nature (in the main) have causal power or limit the possibility space enough that organisms emerge. Notably, this process isn’t deterministic per se. On this interpretation, there is room for environmental conditions to work alongside laws of nature to shape what evolutionary pathways are available, what kind of structures are easier to evolve, and so on. Insights from structuralism, convergence, and evolutionary algorithms apparently provide details about how this might work. But the bottom line with this interpretation is that the laws of nature and environmental conditions play a generative role in bringing about flora and fauna. The laws of nature, environmental conditions, and so on don’t simply transmit biologically relevant “information” built into the initial conditions of the Big Bang. Instead, they actually create more biologically relevant information.
2. The laws of nature simply transmitted biologically relevant information sufficient to produce all biological complexity and diversity, including new proteins, protein machines, and the like. This biologically relevant information was “built in” to the mass-energy configuration at the Big Bang. The laws of nature did not create anything but rather were the media (or “carriers”) through which biologically relevant information was eventually expressed and instantiated in everything from proteins to bacterial flagella to human beings.
3. The laws of nature gave rise to “platonic” forms, which then constrained evolution in ways that allowed selection and mutation to build biological forms. These “forms” are “an emergent consequence of the laws of chemistry and physics.” In this interpretation, the laws created these forms. The forms themselves are more than simply laws and matter under a different guise; they are non-physical. In this view, laws generated these forms, which then shaped the physical tendencies of matter such that (with possible contingency and other factors in play) they produced biological information sufficient for selection and mutation to evolve all manner of proteins, protein machines, unique human abilities, and the like.

Let’s discuss point one, namely, that the laws of nature (and the like) have causal power or limit the possibility space enough that the diversity of plant and animal species observed today emerged from unicellular organisms. While I am personally convinced that design is evident in the very fabric of the universe, including the laws of physics and chemistry, these material mechanisms do not have sufficient causal power, nor do they constrain the possibilities tightly enough, to explain how the diversity of organisms came to be (assuming these laws have stayed the same over time). To support this point, I’ll describe the capabilities of the laws of physics and chemistry and give examples of how they currently interact with biology.

In Kojonen’s model, the laws of nature do the heavy lifting in terms of creating biological complexity. While Kojonen cites an array of other factors — e.g., environmental conditions, structuralism, convergence, and evolutionary algorithms — it’s also clear that these factors are undergirded by the laws of nature themselves. But there are limits to the creative power of the laws of nature. If it turns out that the laws have limited ability to produce biological complexity, then other factors (such as the environment, convergence, etc.) that depend upon the laws of nature likewise have limits. If Kojonen thinks that these other factors have creative powers that transcend the limits of the laws of nature, then the burden is on him to show that. Is it possible for the laws of nature to be a causal force or sufficiently constrain the possibility space?

Material Mechanisms

According to one definition, a mechanism is a process that acts on objects to produce an outcome. Here I will define a material mechanism as a process by which a physical object is acted upon by one of the physical laws. Material objects are built from the elements of the periodic table, and the laws of physics and chemistry are the constant processes that constrain how material objects behave. To understand material mechanisms, let’s look at a few illustrations.

The Law of Gravity

Definition: The law of universal gravitation says that every object attracts every other object with a force proportional to the product of their masses and inversely proportional to the square of the distance between them.
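In the standard textbook notation (included here only for reference):

F = G \frac{m_1 m_2}{r^2}

where F is the attractive force, m_1 and m_2 are the two masses, r is the distance between them, and G is the gravitational constant.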

This law tells us how objects behave toward one another. Gravity constrains motion, whether the motion of humans, planets, or light. A complex system may also be able to detect gravity and use it as a cue. Consider plant growth. Shoots and leaves grow against the pull of gravity, while roots grow downward with it. What causes this? Is it gravity? Definitely not. Root growth occurs through the division of stem cells in the root meristem, located at the tip of the root. Cells in the root detect gravity with specialized sensors and use it as a cue to direct their growth. But gravity is not the mechanism that creates plant morphology. Rather, plants work within the constraints of gravity and exploit it via sensors to scaffold their architecture.

Electrostatic Laws

Definition: The electrostatic laws state that like charges repel and unlike charges attract, with a force proportional to the product of the charges and inversely proportional to the square of the distance between them.
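In symbols, this is Coulomb’s law in its standard form:

F = k_e \frac{q_1 q_2}{r^2}

where q_1 and q_2 are the charges, r is their separation, and k_e is Coulomb’s constant; the sign of the product q_1 q_2 determines whether the force attracts or repels.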

Electrostatic laws describe the attraction of positively charged ions to negatively charged ions. These laws constrain (but do not cause) the way an electrochemical gradient can form and do work across a membrane. The charge and concentration differential across a membrane creates an electrical field. The cell then harnesses the potential energy of that field to synthesize ATP, convey electrical signals, and power the transport of nutrients into the cell. The crucial point here is that electrochemical gradients are not an emergent property of the electrostatic laws. Instead, they are created by molecular machinery. As Elbert Branscomb and Michael J. Russell put it in a BioEssays paper, “to function, life has to take its transformations out of the hands of chemistry and operate them itself, using macromolecular ‘mechano-chemical’ machines, requiring one machine (roughly) for each transformation; life must, in Nick Lane’s evocative phrasing, ‘transcend chemistry.’” (Branscomb and Russell 2018)
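To make the relationship concrete, the standard Nernst equation (a textbook result, cited here only for illustration) gives the equilibrium potential produced by a concentration differential for a single ion species:

E = \frac{RT}{zF} \ln \frac{[\text{ion}]_{\text{outside}}}{[\text{ion}]_{\text{inside}}}

For potassium at typical animal-cell concentrations (roughly 5 mM outside and 140 mM inside, at body temperature), this works out to about -90 mV. Note what the equation does and does not do: it describes the potential once the gradient exists, but it is the cell’s pumps and transporters, running on ATP, that build and maintain the gradient in the first place.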

How do electrostatic laws interface with organisms’ body plans? Organismal body patterning is formed in part by bioelectrical networks, which operate across cell fields to integrate information and mediate morphological decision-making. (Djamgoz and Levin 2022) These bioelectrical networks play critical roles by regulating gene expression, organ morphogenesis, and organ patterning. This, of course, is exactly the kind of capacity that would have to emerge from the electrostatic laws if those laws were to have generative power. But these bioelectric networks no more emerge from the electrostatic laws than cellular networks do; rather, they are information-rich networks that carry information in a bioelectric code understood by both sender and receiver. (Levin 2014)

The Periodic Table of Elements

Now the electrostatic laws, in conjunction with the design of the periodic table of elements, constrain the possible chemical space of molecular bonding arrangements. For example, based on the chemical characteristics of hydrogen and oxygen as well as the electrostatic laws, H2O has a specific bonding configuration. These mechanisms can thus explain the origin and ready formation of some simple molecules. But what about more complex molecules like those used in life? According to a paper in the journal Nature, “Chemical space and biology,” “The chemical compounds used by biological systems represent a staggeringly small fraction of the total possible number of small carbon-based compounds with molecular masses in the same range as those of living systems (that is, less than about 500 daltons). Some estimates of this number are in excess of 10^60.” (Dobson 2004) This statement is consistent with our observation that complex molecules like glucose and nucleic acids result from enzymes. If one thinks that the electrostatic laws and the periodic table limit the search space so tightly that molecules like nucleic acids form on their own, then nucleic acids should form spontaneously from phosphate, nitrogen, carbon, hydrogen, and oxygen, just as water does. But this is not observed. Instead, complex molecules in appreciable quantities can only be built using enzymes (which are themselves built using information in DNA) or in highly controlled laboratory synthesis environments. Moreover, if the laws alone account for life’s chemistry, there must be something in them that “forces” the chemistry of life to use only left-handed molecules. But if such a rule were written into the laws, why aren’t all molecules left-handed? They are not; ordinary chemistry produces left- and right-handed forms alike.
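To put the quoted figure in perspective, a rough back-of-the-envelope comparison may help (the assumption that a typical cell employs on the order of 10^4 to 10^5 distinct small molecules is mine, used only for illustration, and is not a figure from Dobson’s paper):

\frac{\text{small molecules used by life}}{\text{possible small carbon-based molecules}} \approx \frac{10^{5}}{10^{60}} = 10^{-55}

On that rough estimate, life’s chemistry occupies something like one part in 10^55 of the available chemical space, which is the sense in which the fraction is “staggeringly small.”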

If one grants the first cell (supposing the origin of life was a miraculous event), there remain thousands of unique molecular compounds, essential for the diversity of life, that must be selected from this chemical space. We know that many of these molecular structures are multipurpose, recyclable, and essential to other members of the ecosystem. The design of these molecules, and of the enzymes that make and break them down, appears to have required foresight regarding the needs and functions of the ecosystem as well as an in-depth understanding of chemistry and biochemistry. Is this type of information and causal power available in the electrostatic laws or the other laws of nature?

Laws of Thermodynamics

Definition: The first law of thermodynamics says that energy cannot be created or destroyed but can only change form. The second law of thermodynamics says that isolated systems always move toward states of greater disorder; left to themselves, they approach equilibrium, the state in which disorder (aka entropy) is at its maximum.
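In standard textbook symbols:

\Delta U = Q - W \quad \text{(first law: the change in internal energy equals heat added minus work done)}

\Delta S \ge 0 \quad \text{(second law, for an isolated system: entropy does not decrease)}

where U is internal energy, Q is heat added to the system, W is work done by the system, and S is entropy.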

The laws of thermodynamics place constraints on what biological organisms must do to remain alive. That is, organisms must capture, harness, and expend energy to maintain a state far from equilibrium. To do this, organisms have incredibly designed architectures that reflect a highly advanced understanding and exploitation of the laws of nature. For example, in central carbon metabolism, energy is extracted from the molecule glucose in a remarkably efficient way. But just because this biochemical pathway exhibits an architecture amazingly designed to leverage the constraints imposed by thermodynamics does not mean that the laws provide a mechanism by which such complex systems arose in the first place. In other words, the fact that a vehicle is highly efficient does not imply that the laws of thermodynamics designed it. More likely, it means whoever designed the vehicle had a thorough understanding of thermodynamics.
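A rough, textbook-level calculation gives a sense of the efficiency involved (ATP yields vary with cell type and conditions, so these are approximate standard values used only for illustration):

\text{efficiency} \approx \frac{30 \times 30.5\ \text{kJ/mol}}{2870\ \text{kJ/mol}} \approx 0.32

That is, if roughly 30 molecules of ATP are produced per glucose, each storing about 30.5 kJ/mol of usable energy under standard conditions, against the roughly 2,870 kJ/mol released by complete oxidation of glucose, the pathway captures on the order of 30 percent of the available energy (and more under actual cellular conditions).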

Quantum Physics

Definition: Quantum physics describes physical properties at the level of atoms and subatomic particles using the wave function, which is determined by the Schrödinger equation. The Schrödinger equation is the quantum counterpart of Newton’s second law, describing how the state of a system of subatomic particles evolves over time.
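In its standard time-dependent form:

i\hbar \frac{\partial}{\partial t} \Psi(\mathbf{r}, t) = \hat{H}\, \Psi(\mathbf{r}, t)

where \Psi is the wave function, \hbar is the reduced Planck constant, and \hat{H} is the Hamiltonian operator representing the system’s total energy.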

The Schrödinger equation is linear, so any sum (superposition) of solutions is itself a solution. This is very different from what is observed in the real world. Biology and other complex systems exhibit conditional branching, as in this example:

If {antibiotic is detected} then (express antibiotic efflux pump). If {antibiotic decreases} then (decrease expression of antibiotic efflux pump).
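To make the contrast concrete, here is a minimal Python sketch of that kind of conditional regulation. It is an illustration only; the function name, threshold, and step values are hypothetical, chosen simply to make the branching logic explicit:

# Hypothetical sketch of if/then gene regulation for an antibiotic efflux pump.
# All names and numbers are illustrative, not measured biological values.

EXPRESSION_STEP = 10.0  # arbitrary units of pump expression added or removed per update

def update_efflux_pump_expression(antibiotic_level: float,
                                  current_expression: float,
                                  threshold: float = 1.0) -> float:
    """Return the new efflux-pump expression level after one regulatory step."""
    if antibiotic_level > threshold:
        # Antibiotic detected: increase expression of the efflux pump.
        return current_expression + EXPRESSION_STEP
    # Antibiotic low or absent: decrease expression, never below zero.
    return max(0.0, current_expression - EXPRESSION_STEP)

# Expression rises while antibiotic is present and falls once it is removed.
expression = 0.0
for level in [2.0, 2.0, 0.5, 0.0]:
    expression = update_efflux_pump_expression(level, expression)
    print(f"antibiotic={level:.1f}  expression={expression:.1f}")

The outcome at each step depends on a discrete decision, not on a linear superposition of prior states, which is precisely the behavior a single linear wave function does not capture.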

This type of branching, found throughout complex systems, cannot be boiled down to a wave function. Thus, as George Ellis, a leading theorist in cosmology and complex systems, says, “[T]here is no single wave function for a living cell or macroscopic objects such as a cat or a brain.” In short, the complex, nonlinear world cannot arise from a single wave function.

Desire to Survive — Not a Materialist Mechanism

Definition: The behavior an organism undertakes, whether programmatically or cognitively, to avoid death.

The laws of physics and chemistry do NOT include natural selection. Natural selection is an outcome of the programming of a specific goal: the desire to survive. As such, I define natural selection as the change in populations that depends upon their programmed and, in some cases, cognitive capacity to survive and on the environmental factors they face. Please note that this definition differs from how most people think of natural selection, but one hopes it aligns more accurately with how it actually works. To serve this goal, the desire to survive, organisms have a variety of mechanisms that may include both voluntary and involuntary responses. In humans, the immune system is an example of an involuntary (programmatically compiled) response, in which the body’s defenses fight off invaders. An example of a voluntary (cognitive) response in humans might be someone running for their life from a bear or killing a venomous snake. Another example of an involuntary mechanism is natural genetic engineering. If you aren’t familiar with natural genetic engineering, it simply means that cells have the capability to actively reorganize and modify their own genomes to enable survival. This involves mechanisms like transposition (movement of genetic elements within the genome), gene duplication, horizontal gene transfer (transfer of genetic material between different organisms), and other forms of genetic rearrangement. Another important example is phenotypic plasticity, which has frequently been confused with natural selection but is the ability of an individual organism to exhibit different phenotypes (observable characteristics or traits), for example, in response to changes it senses in the environment. Phenotypic plasticity occurs too rapidly to be driven by mutation and selection; thus, it is recognized as an innate adaptation algorithm embedded within an organism.

So the desire to survive, coupled with environmental conditions and random mutations that favor some individuals over others, is “natural selection.” Because natural selection relies on the agent- or life-specific mechanism of a desire to survive, it cannot account for anything related to the origin of life, only the diversification of life. The degree to which natural selection can account for the diversification of life is an active area of research, but ID proponents Douglas Axe and Brian Miller have identified some important limits. Miller summarized decades of research on protein evolution, which relies on natural selection, in our response to Rope Kojonen. In short, they have shown that natural selection is not capable of creating a high-complexity enzyme from a random sequence of amino acids, or of transforming one protein fold into a different fold, without guidance. This is effectively an upper bound on what natural selection can accomplish, and it bears not only on origin-of-life scenarios but also on the ability of life to diversify from a single organism into the variety we see today.

Necessary but Not Sufficient 

The emergent properties of physics and chemistry are necessary but not sufficient to explain the origin or diversification of biological organisms. Gravity can be used by biology as a cue to determine directionality, but gravity doesn’t make a leaf grow up or a root grow down; that happens only because a complex system is sensing, interpreting, and acting on the gravitational cue. The design of the periodic table of elements constrains the bonding pattern between hydrogen and oxygen and bestows on water its life-giving properties, but these constraints on chemical bonding do not cause the formation of DNA or other complex molecules; enzymes are necessary for complex molecules to form at the rate life requires. The electrostatic laws describe how positive and negative charges attract one another, but they do not cause the formation of an electrochemical gradient across a membrane; that happens only because molecular machines harness energy to push the system away from equilibrium. In quantum physics, the linear wave function describes the wave-particle duality of matter, but it cannot account for the conditional branching observed in complex systems.

In short, the best way to summarize the capacity of all these material mechanisms is in George Ellis’s words from his recent article, “Quantum physics and biology: the local wavefunction approach”: “The laws of physics do not determine any specific outcomes whatsoever. Rather they determine the possibility space within which such outcomes can be designed.” (Ellis 2023)

Tomorrow we will look at the second interpretation of Kojonen’s model for how the laws of nature and initial conditions could bring about life and its diversification.