
Saturday 20 July 2024

Life's beginning just keeps getting less and less simple.

Study Finds Life’s Origin “Required a Surprisingly Short Interval of Geologic Time”


An article at ScienceAlert reports, “Gobsmacking Study Finds Life on Earth Emerged 4.2 Billion Years Ago.” They write, “By studying the genomes of organisms that are alive today, scientists have determined that the last universal common ancestor (LUCA), the first organism that spawned all the life that exists today on Earth, emerged as early as 4.2 billion years ago.” The article then offers an intriguing point about the rapidity with which life appeared on Earth:

Earth, for context, is around 4.5 billion years old. That means life first emerged when the planet was still practically a newborn.

The technical paper in Nature Ecology and Evolution notes that the authors arrived at such an early date for life on Earth not from fossil evidence but from molecular clock techniques. The claim that life existed on Earth 4.2 billion years ago (also written “4.2 Ga”) is consistent with some geological evidence (see below), but life at such an early stage is certainly not expected. Some will surely claim that it’s impossible because the heavy bombardment period, during which impacts repeatedly sterilized the Earth, had not yet concluded. Here’s some of the best early fossil evidence of life on Earth (Ma means millions of years ago):

Potential filamentous microfossils from Canada: >3750 – 4280 Ma (Papineau et al., 2022)
Microfossils from Canada: >3770 Ma (Dodd et al., 2017)
δ13C — Excess light carbon: 3.7 Ga (Rosing, 1999; Ohtomo et al., 2014)
Stromatolites from Greenland: ~3700 Ma (Nutman et al., 2016)
Stromatolites from Western Australia: 3480 Ma (Van Kranendonk et al., 2008; Walter et al., 1980)
As you can see, most of the early fossil evidence of life on Earth is significantly younger than 4.2 Ga, but the possibility of life at 4.2 Ga is allowed by one study. Despite this potential consistency with some fossil evidence, there are multiple reasons to be skeptical of the article’s methods. 

Genetic and Phenotypic Traits

First, it infers the genetic and phenotypic traits of LUCA by assuming that biological similarity always results from common ancestry — and never from common design. This dubious logic is seen in the opening statement of the technical paper, which reads, “The common ancestry of all extant cellular life is evidenced by the universal genetic code, machinery for protein synthesis, shared chirality of the almost-universal set of 20 amino acids and use of ATP as a common energy currency.” It’s true that all life uses those components (although the genetic code is not exactly universal), but this does not provide special evidence for common ancestry, because the commonality of these features could be explained by common design due to their functional utility. After all, the optimization of the genetic code to minimize the effects of mutations upon amino acid sequences has been cited as potential evidence for intelligent design — showing that there could be good reasons for a designer to re-use the standard genetic code across many organisms.

Second, there are fundamental components of life that show great differences across different types of organisms. For example, the mechanisms of DNA replication and cell division in prokaryotes and eukaryotes are highly distinct. Ribosomes in prokaryotes and eukaryotes have fundamental differences, as one paper explains: “Structures of the bacterial ribosome have provided a framework for understanding universal mechanisms of protein synthesis. However, the eukaryotic ribosome is much larger than it is in bacteria, and its activity is fundamentally different in many key ways.” Many other examples could be given.

Third, the paper uses molecular clock methods to date LUCA, and molecular clock techniques are problematic for many reasons: they are highly assumption-dependent and notoriously variable, unreliable, and controversial.
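To make that assumption-dependence concrete, here is a minimal sketch of strict molecular clock arithmetic. This is my own illustration, not the paper's actual (far more elaborate) method, and every number in it is hypothetical.

```python
# A minimal sketch of strict molecular clock arithmetic (my illustration,
# not the paper's actual method). All numbers are hypothetical.

def divergence_time(genetic_distance, rate_per_site_per_year):
    """Years since two lineages diverged under a strict clock.

    genetic_distance: substitutions per site separating the two sequences
    rate_per_site_per_year: assumed substitution rate along each lineage
    """
    # Differences accumulate along both lineages, hence the factor of 2.
    return genetic_distance / (2 * rate_per_site_per_year)

# The same observed distance yields very different dates as the assumed
# rate varies, which is the assumption-dependence noted above.
d = 0.8  # hypothetical substitutions per site
for rate in (1e-10, 2e-10, 4e-10):
    years = divergence_time(d, rate)
    print(f"rate {rate:.0e}/site/yr -> {years / 1e9:.1f} billion years")
```

Merely halving or doubling the assumed rate doubles or halves the inferred date, which is why such estimates carry wide error bars.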

Intriguing Implications

All that said, it’s certainly not impossible that life was already present on Earth at 4.2 Ga. And if it were true, it would have intriguing implications. As the study concludes:

The result is a picture of a cellular organism that was prokaryote grade rather than progenotic and that probably existed as a component of an ecosystem, using the WLP for acetogenic growth and carbon fixation. … How evolution proceeded from the origin of life to early communities at the time of LUCA remains an open question, but the inferred age of LUCA (~4.2 Ga) compared with the origin of the Earth and Moon suggests that the process required a surprisingly short interval of geologic time.

This suggests that not only did the origin of life occur very soon after the Earth formed, but life also diversified into a prokaryotic cellular form very soon thereafter.

The notion that life appeared on Earth shortly after it became habitable is not new. In the past, experts have said just that. For example:

Stephen Jay Gould: “[W]e are left with very little time between the development of suitable conditions for life on the earth’s surface and the origin of life.” (“An Early Start,” Natural History 87 (February, 1978))
Cyril Ponnamperuma: “[W]e are now thinking, in geochemical terms, of instant life…” (Quoted in Fred Hoyle and Chandra Wickramasinghe, Evolution from Space (New York, NY: Simon & Schuster, 1981))

Widespread Life in the Universe?

I don’t think Gould or Ponnamperuma would have anticipated life as early as 4.2 Ga. If such a timeframe is correct, however, it is extraordinary indeed. The ScienceAlert article also gets this point, stating, “This implies that it takes relatively little time for a full ecosystem to emerge … It also demonstrates just how quickly an ecosystem was established on early Earth. This suggests that life may be flourishing on Earth-like biospheres elsewhere in the Universe.” The last point — their punchline about astrobiology and the existence of life elsewhere — of course assumes that life on Earth originated naturally in the first place. It also seems to further assume that, under the right conditions, life originates easily. If it has sprung up early and easily on multiple other planets, according to this naturalist way of thinking, shouldn’t it have sprung up multiple times on Earth, too? And yet universal common ancestry denies that this is so. To all appearances, that’s a conundrum for the naturalist.

But a single origin of terrestrial life has not been established by this study. The most that has been demonstrated is that life appeared early in Earth’s history. Given the difficulties surrounding a natural origin of life, a better inference might be to take this evidence of life’s rapid appearance as evidence that it did NOT arise naturally and required intelligent design.

A bottomless pit?

MDs Support Expanding Assisted Suicide Beyond the Terminally Ill


The myth that legal assisted suicide is about terminal illness is becoming harder to swallow. Evidence can be found in a recent survey of doctors, published in the Journal of Cutaneous Oncology, which asked this question: “In addition to adults with terminal illnesses, [which] other groups of patients” should be eligible for MAID?

The answers are disturbing. From the survey:

Adults with intractable psychiatric conditions: 30 percent
Children with terminal conditions: 45 percent
Adults with intractable chronic pain: 55 percent
Adults with late stage dementia: 70 percent
Adults in persistent vegetative state: 80 percent
Majorities of doctors surveyed answered that they would be willing to be present when the deed is done. Here’s the question: “If it were available (or is available), what is your willingness to be present when patients took MAID drugs?” Again, disturbing results, with 61 percent answering either probably or definitely yes:

Definitely not: 6 percent
Probably not: 33 percent
Probably yes: 39 percent
Definitely yes: 22 percent
That’s only a hop, skip, and a jump to willingness to do the deed. And no doctors would definitely refuse to “refer for MAID.”

A Terrifying Survey

This survey should terrify anyone who believes in Hippocratic medical values. And it illustrates the impact that three forces have had on the professional sector that should be most protective of vulnerable patients: the constant boosting of assisted suicide in the media and popular culture, utilitarian bioethics training in medicine, and a corrupting cultural paradigm shift in which many believe that eliminating suffering should be the prime directive of society.

It should be noted that the push is already on to expand eligibility beyond the dying. That is the plan, you know, just as happened in other countries. This survey is but one example of the softening of the ground. So is the California bill, filed but not passed this year, that would have opened doctor-prescribed death to patients far beyond the terminally ill.

Indeed, that is the debate we should be having — whether euthanasia should be available to broad categories of suffering people — not the phony-baloney dishonest pretense that assisted suicide/euthanasia is meant over the long haul to be a tightly restricted practice reserved for the dying.

There will be consequences. If this drift continues, we will one dark day end up like Canada, where more than 15,000 patients were killed by doctors in 2023, in a milieu in which cancer patients who couldn’t obtain proper oncology care were euthanized, and where people with disabilities report being pressured by medical personnel and social workers into “choosing” death.


Friday 19 July 2024

File under "well said" CIX.

"Most people use statistics like a drunk man uses a lamppost; more for support than illumination "

Andrew Lang

Thursday 18 July 2024

On separating the wheat from chaff re:science.

Three Genuine Tells of Junk Science


Capital Research Center reports on non-profit organizations. Managing editor Jon Rodeback offers three tells of junk science: He identifies “settled,” “consensus,” and “scientific study.” On that last topic, he notes,
While scientific studies are essential to scientific research, a single study by itself is far from definitive, and not all scientific studies are created equal. The findings of a single study need to be tested and retested, no matter how promising they seem. In fact, the most promising findings probably need more rigorous testing to ensure that a bias toward a desired outcome did not influence the research.

In addition, the more a study or report is entangled with politics and government funding, the less scientific and less reliable its results will likely be. I have personally witnessed how a government report was vetted by the various offices in a federal department and offending passages were removed or rewritten so as to not cast a particular federal office in a bad light—usually not to correct any inaccuracy in the report, but to obscure inconvenient data and conclusions. 

Jon Rodeback, “Three Tells of Junk Science,” Capital Research Center, June 26, 2024

This comes to us hard on the heels of philosopher Massimo Pigliucci’s effort to identify “pseudoscience,” in which he suggested that the solution is to rely on him and on sites he approves of. That’s certainly not an answer for everyone.

How Desired Results Are Obtained

Perhaps the main thing to see here is that the many current problems in peer-reviewed science have diminished the reasons we should simply trust it. Business prof Gary Smith wrote late last year about the methods used to achieve a desired — but not necessarily natural — result:

One consequence of the pressure to publish is the temptation researchers have to p-hack or HARK. P-hacking occurs when a researcher tortures the data in order to support a desired conclusion. For example, a researcher might look at subsets of the data, discard inconvenient data, or try different model specifications until the desired results are obtained and deemed statistically significant — and therefore publishable. HARKing (Hypothesizing After the Results are Known) occurs when a researcher looks for statistical patterns in a set of data without any well-defined purpose in mind beyond trying to find a pattern that is statistically significant — and therefore publishable. P-hacking and HARKing both lead to the publication of dodgy results that are exposed as dodgy when they are tested with fresh data. This failure to replicate undermines the credibility of published research (and the value of publications in assessing scientific accomplishments).

Even worse than p-hacking and HARKing is complete fabrication. Why torture data or rummage through large databases when you can simply make stuff up? An extreme example is SCIgen, a random-word generation program created by three MIT graduate students. Hundreds of papers written entirely or in part by SCIgen have been published in reputable journals that claim they only publish papers that pass rigorous peer review.

More sophisticated cons are the “editing services” (aka, “paper mills”) that some researchers use to buy publishable papers or to buy co-authorship on publishable papers. These fake papers are not created by randomly generated words but they may be entirely fabricated or else plagiarized, in whole or in part, from other papers. It has been estimated that thousands of such papers have been published; it is known that hundreds have been retracted after being identified by research-integrity sleuths.
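The p-hacking Smith describes is easy to demonstrate with a small simulation. This is my own toy example, not anything from Smith's article: run enough subgroup analyses on pure noise, and "significant" results appear by chance alone.

```python
# A toy simulation of p-hacking (my illustration, not from Smith's
# article): test enough subgroups of pure noise and "significant"
# results appear by chance alone. All data here is simulated.
import random

random.seed(1)

def looks_significant(n_flips=100):
    """One fair-coin 'study': call it significant if heads deviate from
    50 by 10 or more, roughly the two-sided p < 0.05 cutoff for n = 100
    (standard deviation 5, and 1.96 * 5 is about 9.8)."""
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    return abs(heads - 50) >= 10

trials = 2_000
# One honest test: false positives near the nominal 5 percent.
one_test = sum(looks_significant() for _ in range(trials)) / trials
# Twenty subgroup analyses per "study": a false positive most of the time.
twenty_tests = sum(
    any(looks_significant() for _ in range(20)) for _ in range(trials)
) / trials
print(f"false-positive rate, 1 test:  {one_test:.2f}")
print(f"at least one 'hit', 20 tests: {twenty_tests:.2f}")
```

A single honest test on noise comes up "significant" about 5 percent of the time; given twenty tries, the odds of at least one publishable-looking fluke climb well past half.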

If anything, Smith notes, the problems will likely get worse because chatbots (large language models or LLMs), introduced only about two years ago, can generate rubbish research papers with far greater efficiency and quality than the methods people complained about five years ago.

It’s not an opinion that science is becoming less trustworthy; it’s an everyday fact, if we go by what we are told about the floods of computer-written junk papers and the problems Smith identifies. And the public’s deepening loss of trust in science is also a fact.

Grounds for Hope

Historically, interest and investment in science — and reliance on it — has waxed and waned. It has increased when people see an actual benefit. But if, over time, “studies show” mainly amounts to a publicity campaign for some project approved by powerful interests, with no practical benefits to recommend it, we can expect public trust to decline further. And blaming the public for not believing what’s not believable is hardly a useful response.

Take heart! There have been periods when science stagnated, then underwent major reforms — usually when it was in a rut. Like now, for many disciplines.


                               

Toward the ultimate design filter?

Building a Better Definition of Intelligent Design


1. Previous Efforts to Define Intelligent Design

Existing definitions of intelligent design, whatever their imperfections, have been good enough to inspire a growing body of scientific research and philosophical reflection on the role of intelligence in nature. Under these definitions, intelligent design has become an active and fruitful area of inquiry. Even so, I want in this article to lay out why existing definitions, all of which overlap and are roughly congruent, need improvement. And finally, I want to offer a new and improved definition of intelligent design. 

None of the existing definitions of intelligent design is wrong per se. They hit the target, yet not squarely in the bullseye. For my colleagues in the field and for me, these definitions haven’t slowed us down. As it is, the best science happens when scientists reflect deeply about problems and creatively invent new ways to think about and resolve them. Such advances occur without scientists obsessively referring to what some textbook definition says about their field of inquiry.

Definitional change in science is par for the course: As paradigms shift because of scientific advances, textbook definitions change. Compare heat, which earlier had been defined as a weightless, invisible fluid (the caloric theory) and subsequently was defined as the kinetic energy of molecules (the kinetic theory). Even as paradigms are refined rather than replaced, key definitions get refined. Thus the definition of Bohr’s atom gave way to the definition of Dirac’s atom.

In any case, given how controversial it is to look for and claim to find evidence of intelligent activity in nature, especially in regard to cosmological and biological origins, proponents of intelligent design cannot evade the question of what exactly they mean by intelligent design. Can it rightly be regarded as a scientific theory? Is it coherent, hanging together logically? Is it a religious doctrine, or does it merely have religious implications? Should it have traction in education, the public square, and the courtroom? Such questions are widely asked, and their answers depend on how we define intelligent design.

There is currently no single standard definition of intelligent design held by everyone in the ID community, though all the definitions are quite close in meaning. The one that until recently I used in my public lectures and that served as my working definition of intelligent design is this: Intelligent design is the study of patterns in nature that are best explained as the product of intelligence. Intelligent design is thus about identifying certain special types of patterns or features in nature and showing how they provide compelling scientific evidence of intelligent causation. 

But note, intelligent design is not just about finding evidence of actual design in nature. Once design is confirmed to exist in nature, a raft of research questions confront the design theorist. I list some of these here to underscore that existing definitions of intelligent design have sufficed to spur a full-fledged ID research program. Note that all these questions can be posed and make perfect sense without getting into the intention or identity of any putative designer. In fact, these questions make perfectly good scientific sense even if one adopts a fictionalist view of any designer. But of course there’s nothing here either to stop a realist view of any designer. Here, then, is a partial list of such research questions: 

Classification — What types of natural systems exhibit compelling evidence for design?
Functionality — What are a designed object’s main and subsidiary functions?
Constraints — What are the constraints within which a designed object functions well and outside of which it breaks?
Evolvability — How much can a designed system evolve with and without externally applied information?
Transmission — How does an object’s design trace back historically? What is the causal narrative by which the object arose? 
Information tracking — What are the informational inputs by which a designed object is produced? What is its ultimate informational source?
Information density — How densely is information nested in a designed object?
Construction — How was a designed object actually constructed?
Reverse-engineering — Absent knowledge of how a designed object was actually constructed, how could it have been constructed?
Perturbation — How has the original design been modified and what factors have modified it?
Restoration — Once perturbed, how can the original design be recovered?
Optimality — In what way is the design optimal?

I formulated what until recently was my working definition of intelligent design around 2012. Yet my colleagues in the intelligent-design movement and I had been using variations of it since the 1990s. Speaking for myself, around 2000 I would define intelligent design as the study of signs of intelligence. And earlier still I emphasized the empirical detectability of design (via such signs or patterns). 

Rather than chronologically list the various definitions of intelligent design that my colleagues and I have proposed over the years, let me simply underscore some of the key themes in existing definitions. The following list is meant to be representative, not exhaustive. 

Empirical detectability. Design in nature is not a vague intuition about whether something looks to be the product of intelligence. We can know it when we see it.
Triggering features. Certain features reliably trigger design inferences, providing evidence for design. Such features are often described in terms of patterns, information, signs, or signatures.
Irreducible complexity. Introduced by Michael Behe, this has become a key triggering feature, identifying design for a complex system that consists of numerous interrelated parts each necessary for the system’s primary function. 
Specified complexity. Elaborated by myself, this has likewise become a key triggering feature, identifying design when a highly improbable event (complexity) matches a recognizable pattern (specification). 
Origins vs. operations science. Intelligent design distinguishes origins science, which answers historical questions about how features in nature originated, from operations science, which characterizes ongoing processes observable now.
Inference to the best explanation. Inferring design presupposes a playing field of competing explanations, determining whether design is indeed the best explanation on such grounds as empirical support and causal adequacy.
Separation of causes. Intelligent design separates unintelligent or blind causes on the one hand, typically described in terms of chance and necessity, from intelligent or purposive causes on the other, described in terms of design. 
We may therefore think of my 2012 definition of intelligent design (i.e., the study of patterns in nature best explained as the product of intelligence) as a shorthand for all of the above. Crucial here for design theorists is that compelling empirical evidence could in principle exist for design in nature. Design theorists therefore regard intelligent design as a scientific rather than a religious form of inquiry.
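The intuition behind specified complexity can be sketched in a few lines of code. This is a toy illustration of the idea only, not Dembski's formal apparatus: every specific coin-flip string of a given length is equally improbable, but only some strings match a short, independently given pattern (the "specification" here, strict alternation, is my own hypothetical example).

```python
# A toy illustration of the idea behind specified complexity (not
# Dembski's formal apparatus). Every specific heads/tails string of
# length 64 is equally improbable, but only some strings match a short,
# independently given pattern (a "specification").
import random

random.seed(0)

def improbability_bits(s):
    """Each character is one of two symbols, so a specific string of
    length n has probability (1/2)**n, i.e., n bits of improbability."""
    return len(s)

def matches_spec(s):
    """A hypothetical specification: strict heads/tails alternation."""
    return all(a != b for a, b in zip(s, s[1:]))

patterned = "HT" * 32
scrambled = "".join(random.choice("HT") for _ in range(64))

for label, s in (("patterned", patterned), ("scrambled", scrambled)):
    print(f"{label}: {improbability_bits(s)} bits improbable, "
          f"matches specification: {matches_spec(s)}")
```

Both strings carry the same 64 bits of improbability; what distinguishes them is that one also conforms to an independently describable pattern, which is the conjunction (complexity plus specification) the definition above turns on.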

Whatever improvements may be made to this definition of intelligent design, the definition as it stands now is in the right ballpark. For most practical purposes, this definition characterizes how we detect intelligence in nature, namely, through intelligence-signifying patterns. Archeology, the search for extraterrestrial intelligence, forensic science, and many other special sciences accord with this definition. 

2. The Blind-Watchmaker Dialectic

Nevertheless, the current standard definition of intelligent design is problematic. Two main problems confront it. First, it provides no guidance or rationale for explaining phenomena that don’t exhibit intelligence-signifying patterns. Second, it fails to distinguish intelligence and design, treating them as synonymous, even though distinguishing between the two is important and needs to figure into any definition of intelligent design.

To see what’s at stake with this first point, consider the following quote from Richard Dawkins’s The Blind Watchmaker: “Biology is the study of complicated things that give the appearance of having been designed for a purpose.” Dawkins might be happy to concede that in disciplines outside biology, patterns may exist that decisively confirm intelligence. Yet Dawkins is convinced that no such patterns exist in biology, except as might have been put there by human or alien bioengineers.

But leaving aside such human or alien bioengineering, Dawkins denies that any real design is present in biology. Dawkins’s watchmaker is blind, incapable of real design. Natural selection, operating without intelligent guidance, can for him produce all the features of biological systems that give them the appearance of design — but apart from any actual design.

The current standard definition of intelligent design, when confronted with Dawkins’s blind watchmaker argument, thus leads to a problematic dialectic. This dialectic pits intelligent or teleological causes against unintelligent or blind causes. The intelligent causes produce patterns best explained as the product of intelligence. The unintelligent causes, such as natural selection acting on random variations, produce patterns that appear to be designed although their explanation requires no appeal to actual intelligence. 

This dialectic is problematic because it suggests a natural world in which intelligent and unintelligent causes mix indiscriminately, with no principled way of teasing them apart. Our natural tendency is thus to give precedence to one type of cause over the other. Dawkins, for instance, in holding to an atheistic and materialistic view of nature, will see the fundamental causes operating in nature as unintelligent, with intelligence, as it arises in nature, being merely a byproduct of unintelligent causes that produce beings like ourselves through a blind evolutionary process. 

Intelligence is thus for Dawkins downstream of unintelligence, and so there simply cannot be any patterns in nature that point to an intelligence not ultimately reducible to blind natural forces. Intelligent design, especially insofar as it claims to find real intelligent causation behind biology, is thus an impossibility for Dawkins because the only intelligences that exist for him are evolved intelligences, and intelligent design claims to discover unevolved intelligences.

Theism, obviously, provides the most ready alternative to Dawkins’s atheism. For our purposes, we can construe theism quite broadly to include (and I’m not being exhaustive here) pantheism, panentheism, deism, and traditional theism (as in Judaism, Christianity, and Islam). With theism of any stripe, there are no blind or unintelligent causes per se. Even with process theology and open theism, in which chance processes as exhibited in quantum indeterminacy are beyond the full knowledge and control even of God, there is still the sense that God is using randomness in the service of teleology, and so even in these theologies where God is less than omniscient and omnipotent, there are no fully blind causes to speak of.

3. Primary and Secondary Causation

As representative of the theistic response to Dawkins’s blind-watchmaker dialectic, which pits intelligent against unintelligent causes, I want to focus on the Aristotelian-Thomistic distinction between primary and secondary causation. Primary causation denotes the direct action of God, who is seen as the ultimate source of all being and activity in the universe. God, as the first cause, initiates and sustains all existence and causal powers. Primary causation is rooted in Aristotle’s notion of the unmoved mover and further developed by Thomas Aquinas, for whom God’s will and power are the fundamental cause of everything that happens. Divine causation is not just a one-time occurrence but an ongoing, continuous act of creation and sustenance, ensuring that all things remain in existence and function according to their nature.

Secondary causation, on the other hand, denotes the activity of created beings, which operate within the order established by the primary cause. In this view, creatures are genuine causes of effects in the world, but their causative power is derived from and dependent on God’s primary causation. For instance, in lighting a fire, a person acts as a secondary cause, while God’s primary causation ensures the existence and properties of both the person and the fire, as well as the underlying laws of nature that make this activity possible. This distinction allows for a coherent integration of divine omnipotence with the real efficacy of created agents.

The Aristotelian-Thomistic tradition maintains that while God is the ultimate cause of everything, secondary causes play a true and significant role within the divinely created order of the world. This view of secondary causation implies that all cause and effect in the world ultimately aligns with the divine will, in turn implying that there are no truly blind or unintelligent causes, in contradiction to the materialist atheist, who claims that ultimately there are only blind or unintelligent causes. 

Every action performed by secondary causes is therefore, within the Aristotelian-Thomistic tradition, part of God’s purposeful plan for creation. God’s omniscience and omnipotence extend to all creation details, making even seemingly random events part of a divine plan. But note: secondary causation, though instituted by God, is, unlike primary causation, limited in what it can accomplish. Jesus walking on water, turning water into wine, and resurrecting from the dead are beyond the reach of secondary causation. The limits to secondary causation thus make room for miracles, where God’s primary causation intervenes to surpass the capabilities of secondary causation.

Even though this brief overview of the Aristotelian-Thomistic understanding of primary and secondary causation may seem like a digression, it underscores the need for additional clarification in our standard definition of intelligent design. If an object or event exhibits a pattern that is best explained as the product of intelligence, what are we to make of objects or events that don’t exhibit such patterns? 

From an Aristotelian-Thomistic perspective, anything and everything exhibits the divine intelligence. Thus, it would seem that within this perspective, identifying patterns in nature that signify intelligence, as the standard definition of intelligent design would have it, is useless and misleading. The followers of Aristotle and Thomas already know that everything exhibits intelligence, and intelligent design thus seems to offer no additional insight.

Nonetheless, the idea of intelligence-signifying patterns, which is inherent in the current standard definition of intelligent design, has proven its practical value. Did so-and-so die of natural causes or as a result of foul play? Did so-and-so write that essay unassisted or by plagiarizing? Do the marks on that rock result from wind and erosion or from the intentional carving of letters (as in the Rosetta Stone)? In such examples, the appeal to intelligence or design seems vastly stronger and more insistent in one case than in the other. Even if neither primary nor secondary causation can capture this difference, it is a difference that needs to be captured. 

To elaborate on this point, consider SETI, the search for extraterrestrial intelligence. SETI researchers look for signs of intelligence from outer space. To date they have found no radio signals that exhibit intelligence-signifying patterns. But now imagine they do find such signals: technosignatures that can reasonably be ascribed only to technologically advanced civilizations. Most people would describe radio signals that fail to confirm SETI as random, and those that do confirm it as designed. Yet even if in some ultimate sense intelligence lies behind both kinds of signals, there is an important distinction to be made between them. Accordingly, if existing definitions of intelligent design fail to capture this distinction adequately, then we need a better definition.

4. Matter vs. Information

In distinguishing between the seemingly random and the clearly nonrandom (as in the examples just considered), Aristotle provides a way forward. He does so through two distinctions of his own, the one between matter and information, the other between nature and design. Let’s start with the first distinction. Matter is raw stuff that can take any number of shapes. Information is what gives shape to matter, fixing one shape to the exclusion of others. Both the words matter and information derive from Latin. Matter (from the Latin noun materia) initially referred to the raw timber used in building houses. Later it came to mean any raw stuff or material with the potential to assume different shapes, forms, or arrangements. Aristotle of course wrote in Greek, and his equivalent for matter was hylē (ὕλη). 

Information (from the Latin verb informare) means to give form or shape to something. Aristotle’s Greek equivalent was the noun morphē (μορφή), to denote form, and the verb morphoō (μορφόω), to denote the activity of forming, shaping, or molding, and thus of informing. Unlike passive or inert matter, which needs to be acted upon, information is active. Information acts on matter to give it its form, shape, arrangement, or structure. 

Note that I’m using terms like form, shape, and arrangement interchangeably. Aristotle would distinguish form, in the sense of substantial form or essence, from mere shape or arrangement. It’s enough for my purposes, however, that shape or arrangement be correlated with form in Aristotle’s sense. Thus, for marble to express the form (in Aristotle’s sense) of Michelangelo’s David, it must be precisely shaped or arranged.

The relation between matter, with its potential to assume any number of possible shapes, and information, with its restriction of possibilities to a narrow range of shapes, is fundamental to our understanding of the world. Certainly, this relation holds for all human artifacts. This is true not only for human artifacts composed of physical stuff (like marble statues of David), but also for human artifacts composed of more abstract stuff (like poetry and mathematics).

Indeed, the raw material for many human inventions consists not of physical stuff but of abstract stuff like alphabetic characters, musical notes, and numbers. For instance, the raw material for a Shakespearean sonnet consists of the twenty-six letters of the alphabet. Just as a statue of David is only potential in a slab of marble, so a Shakespearean sonnet is only potential in those twenty-six letters. It takes a Michelangelo to actualize the statue of David, and it takes a Shakespeare to arrange those twenty-six letters appropriately so that one of his sonnets emerges.

The relation between matter and information that we are describing here is old and was understood by the ancient Greeks, especially by the Stoics, who understood God as logos, the active principle that brings order to the cosmos. In any case, nothing said so far about the relation between matter and information is especially controversial. The world consists of raw material waiting to be suitably arranged. On the one hand, there’s matter, passive or inert stuff waiting to be arranged. On the other, there’s information, an active principle or agency that does the arranging. This distinction offers a perfectly straightforward and useful way of carving up experience and making sense of the world. Much of our knowledge of the world depends on understanding this relation between matter and information.

5. Nature vs. Design

In the relation between matter and information, the crucial question is how information gets into matter. For Aristotle, there were two ways to get information into matter: by nature and by design. In the examples considered in the last section, we focused on the activity of a designing intelligence (a sculptor or writer) informing or giving shape to certain raw materials (a slab of marble or letters of the alphabet). But designing intelligences are not the only causal powers capable of structuring matter and thereby imparting information. Nature, too, is capable of structuring matter and imparting information.

Consider the difference between raw pieces of wood and an acorn. Raw pieces of wood do not have the power to assemble themselves into a ship. For raw pieces of wood to form a ship requires a designer to draw up a blueprint and then arrange pieces of wood, in line with the blueprint, into a ship. But where is the designer that causes an acorn to form into a full-grown oak tree? There isn’t any. The acorn has the power to transform itself into an oak tree.

Nature and design therefore represent two different ways of producing information. Nature produces information internally. The acorn assumes the form it does through capacities internal to it — the acorn is a seed programmed to produce an oak tree. On the other hand, a ship assumes the form it does through capacities external to it — a designing intelligence imposes a suitable structure on pieces of wood to form a ship. 

Not only did Aristotle know about the distinction between information and matter, but he also knew about the distinction between design and nature. For him, design consists of capacities external to an object. Design brings about form with outside help. On the other hand, nature consists in powers internal to an object. Nature brings about form without outside help. Thus in Book XII of his Metaphysics Aristotle wrote, “Design is a principle of movement in something other than the thing moved; nature is a principle in the thing itself.” In Book II of his Physics Aristotle referred to design as completing “what nature cannot bring to a finish.” 

The Greek word here translated design is technē (τέχνη), from which we get our word technology. The corresponding Latin is ars/artis, from which we get our words artisan and artifact. In translations of Aristotle’s work, the English word most commonly used to translate technē is art in the sense of artifact. Design, art, and technē are thus synonyms. The essential idea behind these terms is that information is imparted to an object from outside the object, and that the material constituting the object, apart from that outside information, does not have the power to assume the form it does. Thus raw pieces of wood do not by themselves have the power to form a ship.

But what if raw pieces of wood did have such a power of self-organization? In Book II of his Physics Aristotle raised and answered that question: “If the ship-building art were in the wood, it would produce the same results by nature.” In other words, if raw pieces of wood had the capacity to form ships, we would say that ships come about by nature. 

The Greek word here translated “nature” is physis (φύσις), from which we get our word physics. The Indo-European root meaning behind physis is growth and development. Nature produces information not by imposing it from outside but by growing or developing informationally rich structures from within. The acorn is emblematic here. Unlike wood that needs to be fashioned by a designer to form a ship, acorns produce oak trees naturally — the acorn simply needs a suitable environment in which to grow.

In light of Aristotle’s distinction between nature and design, the central question that any science of intelligent design needs to resolve when attempting to explain some system in the natural world is therefore this: Is the system self-sufficient in the sense of possessing within itself all the resources needed (nature) to bring about the information-rich structures it exhibits, or does it also require some contribution from outside itself (design) to bring about those structures?

Aristotle claimed that the art of ship-building is not in the wood that constitutes the ship. We’ve seen that the art of sonnet-composing is not in the letters of the alphabet. Likewise, the art of statue-making is not in the stone out of which statues are made. Each of these cases requires a designer. A successful science of intelligent design would demonstrate that the art of building certain information-rich structures in nature (such as biological organisms) is not in the physical stuff that constitutes these structures but requires the input of information from outside the system.

6. The Connection Between Intelligence and Information

Up to now, we’ve only discussed the classical conception of information as developed by Aristotle. The modern conception of information overlaps with Aristotle’s, but it is better adapted to contemporary science and mathematics. Also, it comes without a full-blown metaphysics. The modern conception is drawn from Shannon’s communication theory and subsequent work on the mathematical theory of information. The key idea underlying this conception of information is the narrowing of possibilities. Specifically, the more that possibilities are narrowed down, the greater the information.

For instance, if I tell you I’m on planet earth, I haven’t conveyed any information because you already knew that (let’s leave aside space travel). If I tell you I’m in the United States, I’ve begun to narrow down where I am in the world. If I tell you I’m in Texas, I’ve narrowed down my location further. If I tell you I’m forty miles north of Dallas, I’ve narrowed my location down even further. As I keep narrowing down my location, I’m providing you with more and more information.

Information is therefore always exclusionary: the more possibilities are excluded, the greater the information provided. As philosopher Robert Stalnaker (Inquiry, p. 85) put it: “To learn something, to acquire information, is to rule out possibilities. To understand the information conveyed in a communication is to know what possibilities would be excluded by its truth.” I’m excluding much more of the world when I say I’m in Texas forty miles north of Dallas than when I say I’m merely in the United States. Accordingly, to say I’m in Texas north of Dallas conveys much more information than simply to say I’m in the United States.
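Stalnaker’s exclusionary picture can be made quantitative with Shannon’s measure: if all possibilities are equally likely, narrowing N possibilities down to k yields log2(N/k) bits of information. Here is a minimal Python sketch of that idea; the area figures are purely hypothetical stand-ins for the geography example, not measurements:

```python
import math

def bits_gained(total_possibilities: int, remaining: int) -> float:
    """Information (in bits) gained by narrowing equally likely
    possibilities from total_possibilities down to remaining."""
    if not 0 < remaining <= total_possibilities:
        raise ValueError("remaining must be in (0, total_possibilities]")
    return math.log2(total_possibilities / remaining)

# Hypothetical illustration: treat the United States as ~3.1 million
# equally likely one-square-mile locations.
us_area = 3_100_000      # "I'm in the United States"
texas_area = 268_000     # "I'm in Texas"
local_area = 10          # "forty miles north of Dallas" (a small patch)

print(bits_gained(us_area, texas_area))  # bits gained narrowing to Texas
print(bits_gained(us_area, local_area))  # far more bits for the small patch
```

The more possibilities a statement rules out, the larger the ratio and the more bits it conveys, which is exactly the exclusionary intuition in quantitative form.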

The etymology of the word information captures this exclusionary understanding of information. We already discussed its etymology in section 4 on the Aristotelian relation between matter and form. To elaborate on it further, the word information derives from the Latin preposition in, meaning in or into, and the verb formare, meaning to give shape to. Information puts definite shape into something. But that means ruling out other shapes. Information even in its classical conception thus narrows down the shape in question. A completely unformed shmoo, such as Aristotle’s prime matter, is waiting in limbo to receive information. Only by being informed will it exhibit a definite structure.

Aristotle’s conception of information overlaps with but is also separate from the modern conception of information. Aristotle’s conception, as we saw in section 4, is tied to his theory of formal causation, in which information is understood as the cause that gives shape to matter and makes a material object what it is. In Aristotelian thought, the formal cause determines an object’s structure and properties, defining its essence.

For Aristotle, information was thus more than a narrowing of possibilities. Instead it was an intrinsic organizing principle that turns matter into a coherent and purposeful entity. Yet, the modern conception of information, though not wedded to Aristotle’s understanding of formal causation, is nonetheless consistent with it. Aristotelian information, by defining a thing’s essence, makes it this and not that. It is thus inherently exclusionary, which aligns with information in its contemporary sense as the narrowing down of possibilities.

Let’s next turn to intelligence. The fundamental intuition of information as narrowing down possibilities matches neatly with the concept of intelligence. The word intelligence derives from two Latin words: the preposition inter, meaning between, and the verb legere, meaning to choose. Intelligence thus, at its most fundamental, signifies the ability to choose between. But when a choice is made, some possibilities are actualized to the exclusion of others, implying a narrowing of possibilities. And so, an act of intelligence is also an act of information.

If we trace the etymology of intelligent back still further, the l-i-g that appears in it derives from the Indo-European root l-e-g. This root appears in the Greek verb lego, which by New Testament times meant to speak. Its original Indo-European meaning, however, was to lay, and from there to pick up and put together. Still later, it came to mean to choose and arrange words, and from there to speak. The root l-e-g has several variants, appearing as l-o-g in logos and as l-e-c in intellect and select. 

As a side note, this brief etymological study reveals that Darwin’s great coup was to coopt the term selection, previously associated with the conscious choice of purposive agents, and saddle it with the term natural. In the term natural selection, Darwin therefore intended to recover all the benefits of choice as traditionally conceived, and yet without requiring the services of an actual intelligence. Thus to this day we read such claims, as by Francisco Ayala, that Darwin’s greatest discovery was to give us “design without designer,” which Dawkins described as the appearance of design without actual design.

Darwinists, in coopting the term selection, obfuscate the idea of choice. Choice is a directed contingency that actualizes some possibilities to the exclusion of others in order to accomplish an end or purpose. A synonym for the word choice is decision, with the corresponding verb forms being choose and decide. The words decision and decide are likewise from the Latin, combining the preposition de, meaning down from, and the verb caedere, meaning to cut off or kill (compare our English word homicide). 

Decisions, in keeping with this etymology, raise up some possibilities by cutting down, or killing off, others. When you decide to marry one person, you cut off all the other people you might have married (assuming the marital relationship is one-to-one). An act of decision is therefore always a narrowing of possibilities. It is an informational act. But given the definition of intelligence as choosing between, it is also an intelligent act.

7. A New Information-Based Definition of Intelligent Design

would otherwise need to be externally inputted. All the necessary information is thus said to reside in the environment (whether front-loaded or self-generated). The environment thus becomes an unlimited source of information that dispenses with all need for design.

I call this maneuver of expanding a system so that it coincides with an informationally plenipotent environment the environmental fallacy. It is a fallacy because 

(1) it illegitimately discounts the integrity of systems, which must be considered on their own terms and which may not be absorbed willy-nilly into larger supersystems simply to avoid the problem of design; and
(2) it simply presupposes that the environment always has sufficient informational resources to defeat design, rather than requiring that the environment’s actual internally generated informational resources be accurately assessed to determine whether they are in fact adequate to defeat design and, if not, to allow a valid inference to design.
The choice of system to analyze for evidence of design typically adheres to a Goldilocks principle: it needs to be not too big, not too small, but just right, where “just right” means that the system allows for an accurate assessment of whether the information output in question is indeed internally generated or the result of externally applied, intelligently sourced information (design). The key types of systems in biology that give evidence of design in this sense are those that exhibit irreducible and specified complexity.

Capacities. A key term in this new definition of intelligent design is capacities. This term refers to the causal powers of systems to produce certain effects or outputs. Systems are able to do certain things but not others. An otherwise functional car with an internal combustion engine but without gas does not have the capacity to drive; with gas, it does. Aristotle understood capacities in terms of his distinction between potentiality and actuality. This distinction fit within his metaphysics for characterizing how entities undergo change. Yet for the sake of our present definition of intelligent design, we only need a conception of capacity that takes causal powers seriously. Aristotle certainly qualifies here, but other approaches do too, such as scientific realism. 

Philosopher of science Nancy Cartwright articulated a conception of capacities that is congenial to our newfound definition of intelligent design. She did this in her book Nature’s Capacities and Their Measurement (Oxford, 1989). There she contended that scientific laws and observed regularities are not merely descriptions of passive events but are underpinned by capacities that can manifest differently depending on context. Cartwright challenged the view that the laws of nature are universally applicable without exception, proposing instead that these laws describe tendencies that are actualized when the relevant capacities are triggered in the appropriate circumstances. For the purposes of this new definition of intelligent design, Cartwright’s view of capacities elucidates the causal powers of systems and how systems interact to produce observed phenomena.

Chance and Probability. The terms chance and probability do not appear in this definition of intelligent design, but they are there implicitly. Capacities, understood as causal powers, can be described scientifically/mathematically in terms of chance and probability. Thus, to say that a system has the capacity to produce a given output is to say that the system, left to itself, will with high probability produce the output. Alternatively, to say that a system does not have the capacity to produce a given output is to say that the system, absent external input, will with low probability produce the output.

In such a probabilistic approach to capacities, chance then simply describes a system’s probabilistic behavior in producing given outputs. As such, chance says nothing about whether the underlying causal processes are teleological or ateleological. This approach to chance is compatible with Aristotle’s view that all causality is ultimately teleological (chance for him being the incidental collision of independent causal chains, all of which are teleological). But this approach to chance is also compatible with Jacques Monod’s view (in Chance and Necessity) that all causality is ultimately ateleological. Chance, as implied in this new definition of intelligent design, is then simply a non-prejudicial way of describing the probabilistic behavior of a system. 

Intelligent actions are clearly responsible for the chance behavior of some systems. Take, for instance, high school seniors looking to go to college next fall. All the decisions by prospective students to apply to colleges as well as all the decisions by the college admission committees to accept or reject their applications are under full conscious intelligent control. Yet well-defined probability distributions characterize application numbers as well as acceptance and rejection numbers for given schools (Caltech and Harvard currently being the most competitive). 

Note that the use of probabilities to trace causal relationships is well established. Patrick Suppes, Nancy Cartwright, and Judea Pearl have all made compelling arguments for how to get causes from probabilities. The canard “correlation is not causation” is overworked and too often cloaks a self-imposed ignorance. As Judea Pearl convincingly argues in The Book of Why, it is entirely rational to assert that we know the cause of something using probabilistic/statistical arguments that sift both supporting evidence and contrary evidence. 

Probabilistic causality is understood in the first instance through probabilistic dependence. At its most basic, for A to be a cause of B, there must be a probabilistic dependence between them. Specifically, the occurrence of A should increase the probability of B occurring. Formally, P(B∣A) > P(B). This idea can be developed further using causal diagrams, counterfactual analyses, and Bayesian reasoning. But the point to note in connection with our new definition of intelligent design is that the capacities of systems can be modeled probabilistically, as can changes in the capacities of systems through the infusion of novel external information.
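The basic test P(B|A) > P(B) can be estimated directly from observed frequencies. A minimal sketch follows; the function name and toy data are my own illustration, not drawn from Suppes, Cartwright, or Pearl:

```python
def probabilistic_dependence(pairs):
    """Given observed (A, B) outcome pairs (1 = occurred, 0 = not),
    estimate P(B) and P(B | A), and check the basic signature of
    probabilistic causality: P(B | A) > P(B)."""
    n = len(pairs)
    n_a = sum(a for a, _ in pairs)
    if n == 0 or n_a == 0:
        raise ValueError("need observations, including some where A occurs")
    p_b = sum(b for _, b in pairs) / n
    p_b_given_a = sum(1 for a, b in pairs if a and b) / n_a
    return p_b_given_a, p_b, p_b_given_a > p_b

# Toy data (my own illustration) in which A raises the probability of B.
data = [(1, 1)] * 40 + [(1, 0)] * 10 + [(0, 1)] * 20 + [(0, 0)] * 30
p_ba, p_b, dependent = probabilistic_dependence(data)
# Here P(B) = 0.6 while P(B | A) = 0.8, so B probabilistically
# depends on A in these data.
```

Real causal analysis would go further (confounders, counterfactuals, causal diagrams), but the frequency comparison above is the first-pass dependence check the formula expresses.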

Information. Information figures prominently in this definition. It is there not as a metaphor but as a real entity capable of being measured through the tools of the modern mathematical theory of information. Where earlier definitions of intelligent design emphasized intelligence-signifying patterns that are best explained as the product of intelligence, the new definition emphasizes informational outputs that are best explained by prior externally applied informational inputs arising from intelligence, the outputs and inputs being associated with particular systems. The references to patterns and information in the earlier and later definitions of intelligent design are entirely parallel.

The mathematician Norbert Wiener, in his book Cybernetics, remarked that “information is information, not matter or energy.” It’s important to keep this point in mind when working with the new definition of intelligent design. The capacities of many systems are gauged in terms of energy and matter. Earlier, we considered the example of a car with an internal combustion engine that lacked gas in the tank. Such a car lacks the capacity to move itself. Yet, once the tank is filled with gas, it will have the capacity to move. The input here (gas in the tank) that explains the output here (the car being able to move) is, however, not informational but energetic.

The focus in our new definition of intelligent design is squarely on information. Information may require some energetic involvement. For instance, an old-fashioned transistor radio does not have the capacity by itself to play a recorded musical performance. Instead, it requires a signal encoding that performance to be transmitted to the radio. That signal will use energy, but it will be a directed energy that is also a carrier of information. Such inputted information will be best explained as intelligently inputted external information (i.e., design). 

But note, a contemporary digital radio might have a memory unit that contains mp3 files of recorded musical performances. Such a radio, unlike an old-fashioned transistor radio, might therefore have the capacity by itself to play music without external informational input, the music being stored on a memory chip in the radio. This example underscores the need to determine what the actual capacities of systems are whose design stands in question. 

Although in many instances the external application of information by intelligence involves energy, we need to avoid making observed energetic pathways a precondition for intelligently inputted external information. Informational relationships do not require energetic relationships. As Fred Dretske remarked in Knowledge and the Flow of Information (MIT, 1981, p. 26):
                      It may seem as though the transmission of information … is a process that depends on the causal inter-relatedness [think physical causality in terms of energy] of source and receiver. The way one gets a message from s [source] to r [receiver] is by initiating a sequence of events at s that culminates in a corresponding sequence at r. In abstract terms, the message is borne from s to r by a causal process which determines what happens at r in terms of what happens at s. The flow of information may, and in most familiar instances obviously does, depend on underlying causal processes [again, think physical causality and energy]. Nevertheless, the information relationships between s and r must be distinguished from the system of causal relationships [again, think energy] existing between these points.

The key takeaway here for our new definition of intelligent design is that informational relationships take precedence over energetic relationships. We can know, for instance, that a “magic” penny whose coin flips spell out the cure for cancer in Unicode (1 for heads, 0 for tails) is under intelligent external control. Indeed, systems composed of pennies flipped by humans have no capacity to produce meaningful communications, to say nothing of groundbreaking medical advances. The penny here is tapping into a source of information outside itself.

Nor does it matter if no chain of physical causation involving matter and energy can be found, or even exists, to account for the information outputted by the “magic” penny. The design in the “magic” penny’s output is clear. In particular, naturalistic assumptions that try to deny external informational input to the penny for lack of known physical processes capable of accounting for the information need to be rejected. Naturalism, whether in its methodological or metaphysical guise, is not a valid constraint on our new definition of intelligent design.

In conclusion, not everything is designed, but everything could ultimately be the result of intelligence. Both these claims are true. Previous definitions of intelligent design, however, have made it difficult to maintain both these claims without contradiction or confusion. The new definition of intelligent design given in this article allows both these claims to be maintained while at the same time fostering a robust understanding of intelligent design that is scientifically fruitful and philosophically defensible. 

Acknowledgments

The immediate impetus for this article was an unpublished typescript that Jay Richards circulated among his ID colleagues. It was titled “Why We Should Not Concede ‘Blind and Undirected Natural Processes’.” It suggested that the contrast class to design in a design inference should not be regarded as ateleological (i.e., as blind and undirected causes). Jay’s point was that allowing ateleological causes conceded too much ground to naturalists and too little to Aristotelians and Thomists, the latter then being left with a conception of intelligent design incompatible with their metaphysics, and thus with a compelling reason to reject intelligent design. 

I’ve long been aware of this concern. More than twenty years ago, I had even made a partial attempt to render intelligent design compatible with the Aristotelian-Thomist tradition. This I did in December 2001 when I gave an address at an AAAS meeting at Haverford College. My talk was titled “ID as a Theory of Technological Evolution,” and its opening line read, “In Book II of the Physics Aristotle remarks, ‘If the ship-building art were in the wood, it would produce the same results by nature’.” A few years later I developed this Aristotelian approach to ID further in a book chapter titled “An Information-Theoretic Design Argument,” which appeared in the Beckwith et al. anthology To Everyone an Answer: A Case for the Christian Worldview (IVP, 2004). The present article drew heavily from that chapter.

Spurred by Jay’s typescript and aware that I had, though only partially, addressed his concerns in the past, I might have let the weeks and months slip away before taking up his concerns in earnest. But at the same time that I received Jay’s typescript, I was on my way to São Paulo for the big annual Brazilian intelligent design conference (June 28-30, 2024 — thank you Marcos Eberlin for the invitation!). I wanted to have something new to share with my Brazilian ID colleagues, so I decided to revisit the definition of intelligent design and see how I would need to adjust it to accommodate an Aristotelian-Thomistic metaphysics in which all causality is ultimately teleological. 

After some reflection, it became clear to me that such an accommodation could readily be accomplished while preserving everything of importance in intelligent design. Moreover, the new definition seemed to strengthen both the scientific and the philosophical underpinnings of intelligent design. I shared a “beta version” of that new definition at the Brazilian ID conference. The present article is the more mature fruit of my reflection. It draws on my two-decade-old work on relating ID and Aristotle. In section 6, it also rehearses recent work of mine relating intelligence and information. Regardless of whether this new definition is the last word on defining intelligent design, in my view it represents a significant advance in clarifying intelligent design and strengthening its hand in scientific and philosophical discussions.

Tuesday 16 July 2024

More on secular occultism.

 

Neanderthals receive their rightful inheritance?

Neanderthals Were a Lot More Like Humans than We Realize


News stories at Phys.org and IFLS are reporting on a new paper in Science which finds more evidence of human-Neanderthal interbreeding. The Editor’s summary of the technical paper notes, “there is now ample evidence for gene flow from Neanderthals to humans and vice versa.” Under the standard biological definition of a “species,” such evidence of interbreeding would indicate that humans and Neanderthals should in fact be considered members of the same species. 

But there’s a lot of baggage in the way of that view. Mainstream paleoanthropologists would not say that we are directly descended from Neanderthals, but Neanderthals are often promoted as an evolutionary relic representing a primitive stage of human evolution. As a 2020 paper in History and Philosophy of the Life Sciences says: “To most researchers however, the Neanderthal represented an ancient, inferior race of Homo sapiens, an extension into the past of the hierarchy of living human ‘races’, descending from civilized to savages.” But this view is increasingly countered by mainstream scientists who are saying that Neanderthals were just as advanced as contemporary humans.
             
Most Unkind to Neanderthals

A few months back, Smithsonian Magazine published an article about Neanderthals noting that “we haven’t been very kind to Neanderthals since their remains were first unearthed in the 19th century, often characterizing them as lumbering dimwits or worse.” Yet there is evidence that they used creativity and symbolism:
                            Hundreds of intentionally broken stalagmites were found there, arranged into two large, ellipsoid structures and several smaller stacks, during a time when — as researchers confirmed in 2016 — only Neanderthals were roaming Europe. No one knows what these structures were for, but they suggest a tendency toward creativity and perhaps even symbolism.

“Rethinking Neanderthals”

A 2023 article titled “Rethinking Neandertals” in Annual Review of Anthropology, previously reviewed by Günter Bechly, notes that Neanderthals used symbolism much like modern humans:

The use of symbols is often argued to be a defining feature of H. sapiens. Growing evidence, however, supports the use of symbols by Neandertals in the form of personal ornaments, portable art, and spoken language (see the section titled Language, Cognition, and Brain Development) and possibly cave painting, although the latter remains somewhat controversial. 

According to the paper, there are multiple potential examples of Neanderthals creating cave art:

The paper reports, “New research suggests that Neandertals were responsible for some hand-stencils, painted lines, and dots in multiple caves in Spain.” The cave art is controversial because some argue the dates are too young to be the work of Neanderthals.
At the French cave site La Roche-Cotard, there are 57,000-year-old “digital tracings” or “finger flutings” associated with Neanderthals. However, “The meaning of these tracings currently remains ambiguous … they are not necessarily symbolic in nature.”
At the Einhornhöhle site in Germany there is an engraved phalanx bone of a giant deer that dates to 51,000 years ago.
A “hashtag”-like symbol from Gorham’s Cave in Gibraltar.
A “pecked pebble from the Axlor Rockshelter in Spain.”
It is also argued that there are Neanderthal adornments, including a necklace made of eagle talons found in Croatia, “personal ornaments in the form of perforated, painted, and unpainted large marine bivalves,” and other possible examples of adornments. The Smithsonian Magazine article notes that Neanderthals may have used rope, red ochre pigments, and feathers in adornments.

Peeters and Zwart (2020) summarize this evidence:

[A] mounting body of evidence continues to expand the known repertoire of sophisticated strategies and symbolism practiced by Neanderthals, and sapiens-centrism has come under pressure. The more data we gather on their behaviour, the more similar Neanderthals seem to be to the modern human pattern. Not only dental hygiene, also large-scale cooperative hunting, complex stone tools, language, planning, care for the ill, imagination and symbolic behaviour, was present in Neanderthals. The only traceable advantage of Homo sapiens was that they had started to produce ornaments with little beads and shells, something which seemed absent in Neanderthal culture. Recent research, however, yielded perforated and ochre marine shells and colorants attributed to Neanderthals, suggesting once again that they were cognitively indistinguishable from modern humans. [Internal citations removed.]

Intelligence and Culture 

Going back to the Smithsonian article, it quotes researchers remarking on just how much our understanding of Neanderthal intelligence and culture has changed in recent years:

In addition, the many freshly unearthed or newly analyzed artifacts, some now confidently assigned to Neanderthals thanks to improved methods for dating archaeological finds, make for quite a collection. “If you’d have asked me 20 years ago, I would have said there was quite a big gap in behavior, and Neanderthals would have lacked many of the complex behaviors we find in Homo sapiens,” Stringer says. “Now that gap has narrowed considerably.”

Of course there’s still much we don’t know and the evidence is sparse — due in part to the fact that Neanderthals probably had a relatively small overall population size. But as time goes on, the “gap” between humans and Neanderthals seems to be narrowing, not expanding — and this trend line has profound implications for whether Neanderthals really were the primitive brutes they’re often portrayed as.

Primeval tech as the foundation of human technology

 

Friday 12 July 2024

Blind cavefish: Darwinian evolution or adaptive devolution?

 Blind Cavefish: Evolutionary Icon, or an Example of Preprogrammed Adaptation?


The blind cavefish, Astyanax mexicanus, is often cited as a textbook example of Darwinian evolution. The dramatic transformation from a pigmented surface fish with eyes to a nonpigmented cave-dwelling fish with no eyes is presented as strong evidence for unguided evolution. But scientists with a different perspective have started studying this fish to see if the predictions of Darwinian theory actually hold true. Let’s look at some recent research.

Standard Darwinian theory makes the following claims:

Random mutations changed the fishes’ pigmentation.
Random mutations deactivated the fishes’ eyes.
These changes happened over a long period of time.
Fish without eyes and pigment had a reproductive advantage in the cave environment.

Continuous Environmental Tracking 

However, there is another model that could explain the transformations of the cavefish. This model, called continuous environmental tracking (CET), is design-based. It presupposes that organisms actively track conditions within specific environments and self-adjust along predesigned adaptation trajectories. Similar to human-engineered agile systems, organisms can make internal changes within a set range in response to external changes. This model posits:

Genetic changes are directed and repeatable, not random.
Adaptation may be based on epigenetic or gene expression changes.
Adaptation is rapid since it is programmed and not dependent upon the accumulation of random changes.
A sensory mechanism exists to determine when the fish should undergo these changes.

According to CET, adaptive outcomes are expected to be highly regulated, rapid, repeatable, and, in some cases, reversible. Adaptations are anticipated to encompass a spectrum ranging from physiological changes within a single individual's lifetime to generational changes that unfold more slowly across multiple generations. This model views environmental changes as triggers for organismal sensing rather than as a selective agent.

Similar Adaptations for Different Cave-Dwelling Animals 

Organisms that live in caves are called troglobites (different from the more familiar troglodyte, a human who lives in a cave). They commonly exhibit traits similar to those of the blind cavefish, including loss of pigmentation, reduced or absent eyes, enhanced non-visual senses, a slower metabolism, specialized reproductive strategies, and extended longevity. This suggests these traits are a non-random, purposefully designed response to the cave environment.

Recent Research on Cavefish

First, cavefish populations are now known “to exhibit repeated, independent evolution for a variety of traits including eye degeneration, pigment loss, increased size and number of taste buds and mechanosensory organs, and shifts in many behavioral traits.” (McGaugh et al. 2014) Are these repeatable, parallel changes an incredible display of natural selection acting on random mutation in the same ways over and over again — i.e., “convergent evolution”? Or is something else going on? Observations of repeatable changes in traits in independent populations are more consistent with a model of preprogrammed adaptability, whereby a designer frontloads genetic variability for different environments at the population level. 

One hypothesis is that distributed information exists within the population, whereby different individuals represent different optimizations for unique environments. A while back, I covered morphological changes in guppies, where more research has been done than with the cavefish. For the guppy, distributed information (i.e., variation, though not necessarily generated by random mutation) at the population level seems to be the currently favored hypothesis. Guppy traits change based on baked-in genetic variation, which allows changes along certain predesigned trajectories in different environments. Importantly, because the novelty generation is not “happening before our eyes,” I don’t find this evidence convincing for neo-Darwinian theory, which requires that variation arise through random processes.

Second, developing vision turns out to be a very metabolically expensive process, accounting for as much as 15 percent of the resting metabolic rate early in development. Thus, loss of the visual system reduces the energy required for development, which matters in the nutrient-restricted cave environment. (Moran, Softley, and Warrant 2015) This means there is a purposeful reason why cavefish lose their eyes: loss of eyesight appears to be a necessary trade-off given the extreme nutrient deprivation of the cave environment.

Third, an individual non-pigmented, eyeless cavefish was shown to nearly revert to a pigmented state after exposure to surface-like conditions (daily cycles of high-intensity, full-spectrum light for five months). (Boyle et al. 2023) This suggests color changes are probably not genetic, but more likely epigenetic and can happen to an adult fish over a period of five months. 

Future Research Directions

The next step for researchers is to investigate the molecular mechanisms underlying these changes. Some of this work is already underway. Researchers used an approach called QTL mapping, in which individuals homozygous for the trait of interest are crossed to produce an F1 generation. The F1 generation is then interbred or backcrossed to create the F2 or further generations, which carry a combination of genetic material from the parents. The phenotypic information from these generations is logged and correlated with the genotypic information, allowing researchers to observe which regions of the genome segregate with the traits in question. When this was done for the cavefish eye-loss trait, QTL mapping implicated 2,408 genes out of a total of 23,042 genes. (McGaugh et al. 2014) Using other techniques, the researchers narrowed their gene list to 30 genes involved in these changes. But even at 30 genes, it is hard to imagine how the random accumulation of 30 different mutations enabled this phenotypic change in multiple independent populations. Indeed, when many coordinated allele changes are observed as necessary for the development of a phenotype, the data are more consistent with movement of an organism along a predesigned trajectory of adaptation in which, given the situation, the fish trades off certain things, like eyes, to function better in its new environment.
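The logic of a single-marker QTL scan described above can be sketched in a short toy simulation: F2 genotypes segregate 1:2:1, one marker secretly drives the phenotype, and a scan over markers recovers it by correlating genotype with phenotype. Everything here (marker count, effect sizes, the scoring statistic) is invented for illustration; this is not the genome-wide pipeline McGaugh et al. actually used.

```python
# Toy single-marker QTL scan in a simulated F2 cross (hypothetical values).
import random
import statistics

random.seed(1)

N_MARKERS = 50   # hypothetical number of genotyped markers
CAUSAL = 17      # the marker we secretly make causal
N_F2 = 200       # number of F2 individuals

def f2_genotype():
    # F2 genotypes segregate 1:2:1 (0, 1, or 2 copies of the cave allele)
    return random.choice([0, 1, 1, 2])

# Simulate F2 individuals; markers are independent (linkage ignored for brevity)
genotypes = [[f2_genotype() for _ in range(N_MARKERS)] for _ in range(N_F2)]

# Phenotype (say, relative eye size) shrinks with cave-allele dose at the
# causal marker, plus environmental noise
phenotypes = [10.0 - 2.0 * g[CAUSAL] + random.gauss(0, 1.0) for g in genotypes]

def assoc_score(marker):
    # Crude association statistic: difference in mean phenotype between the
    # two homozygote classes, scaled by the overall phenotype spread
    groups = {0: [], 1: [], 2: []}
    for g, p in zip(genotypes, phenotypes):
        groups[g[marker]].append(p)
    if not groups[0] or not groups[2]:
        return 0.0
    spread = statistics.stdev(phenotypes)
    return abs(statistics.mean(groups[0]) - statistics.mean(groups[2])) / spread

scores = [assoc_score(m) for m in range(N_MARKERS)]
best = max(range(N_MARKERS), key=lambda m: scores[m])
print(f"strongest association at marker {best} (causal marker was {CAUSAL})")
```

The scan flags the causal marker because its homozygote classes differ sharply in mean phenotype, while non-causal markers show only noise; a real study would then drill into the flagged region, which is how candidate gene lists like the 2,408-gene set arise.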

Conclusion

Recent research on the blind cavefish has shown that their transformations are reproducible, challenging the idea of random mutation accumulation. Other research has revealed a significant functional reason for eye loss in cavefish. Additionally, studies have demonstrated that these transformations can occur much faster than previously believed. For example, a single cavefish was observed to regain pigmentation within five months when exposed to daylight cycles. Recent QTL mapping has identified at least 30 genes involved in eye loss, which means the transformation involves multiple genes, suggesting coordination.

While much work remains to be done, the current research trajectory aligns more with a design-based CET model. A deeper understanding of these processes will provide insights into whether the adaptations observed in the blind cavefish result from Darwinian evolution or preprogrammed adaptive responses. The current evidence, though, is highly suggestive.

References

Boyle, Michael J., Brian Thomas, Jeffery P. Tomkins, and Randy J. Guliuzza. 2023. “Testing the Cavefish Model: An Organism-Focused Theory of Biological Design.” Proceedings of the International Conference on Creationism 9 (1): 17.
McGaugh, Suzanne E., Joshua B. Gross, Bronwen Aken, Maryline Blin, Richard Borowsky, Domitille Chalopin, Hélène Hinaux, et al. 2014. “The Cavefish Genome Reveals Candidate Genes for Eye Loss.” Nature Communications 5 (October): 5307.
Moran, Damian, Rowan Softley, and Eric J. Warrant. 2015. “The Energetic Cost of Vision and the Evolution of Eyeless Mexican Cavefish.” Science Advances 1 (8): e1500363.

Why the big deal re:origins