
Wednesday, 20 November 2024

On mathematics' anti-Darwinian bias.

 Protein Designers Explore Sequence Space


The twenty major amino acids used in life as we know it can be assembled in countless ways. What portion of that vast sequence space is functional? This question has had a long history among Darwin skeptics because the answer contributes to probability calculations for assessing the explanatory power of chance vs design for the origin of life.

Historical Background

The Wistar Institute symposium in 1966 has often been cited by ID advocates as a death knell for hopes that functional proteins would spontaneously arise by chance. Around this time in the late 1960s, about a decade after Francis Crick had proposed his famous “sequence hypothesis” for DNA and proteins, my father James F. Coppedge recognized the informational character of biomolecules. Working on a graduate degree in chemistry at UCLA, he attempted to estimate the “usable” portion of sequence space by analogy with useful combinations of letters in English words and sentences. He tested the analogy by searching through tens of thousands of random letters. In his 1973 book,1 he applied his rough estimate of useful text strings arising by random selections to argue for the extreme improbability of arriving at a single usable protein by chance, even granting a world-sized primordial soup of plentiful amino acids combining under ideal conditions at fantastically rapid rates. 

In 1984, Thaxton, Bradley, and Olsen, in their book The Mystery of Life’s Origin (updated in 2020), wrote about the formidable challenge of overcoming “configurational entropy” in sequence space. Douglas Axe, in his book Undeniable (2016), wrote about biochemistry experiments he performed to determine the limits of functionality by seeing how far a well-studied protein could be altered and still perform. His calculations, along with my father’s memorable “amoeba analogy” from his book (ch. 7), led to an episode in the Illustra Media film Origin (excerpted in their shorter video First Life).

William Dembski and Stephen Meyer have also discussed at length the informational nature of protein sequences and the probabilistic resources for accounting for them by chance in their books.2 Studies like these have all agreed that functional proteins occupy an infinitesimal fraction of sequence space, like a vanishingly small box in the corner of a sheet of graph paper.

The New Explorers

The arrival of AI tools such as AlphaFold that can predict protein folds for computer-generated polypeptides has opened up new ways to explore functional portions of sequence space outside of biology.3 In a fascinating News Feature in Nature on October 15, Ewen Callaway told about international contests to find new proteins. Promises of lucrative prizes are motivating explorers from around the world to join “protein-design competitions [that] aim to sift out the functional from the fantastical.” Notice the key word design:

Contests have driven key scientific advances in the past, particularly for the field of protein-structure prediction. This latest crop of competitions is drawing people from around the world into the related field of protein design by lowering the barrier to entry. It could also quicken the pace of validation and standards development and perhaps help to foster community. “It will push the field forward and test methods more quickly,” says Noelia Ferruz Capapey, a computational biologist at the Centre for Genomic Regulation in Barcelona, Spain.

The tournaments bypass the stodgy method of grant application, peer review and publication, speeding discovery and stimulating involvement. Callaway describes half a dozen competitions generating tens of thousands of candidate sequences, even from “people with no professional experience in biology” using their gaming computers at home.

Englert says that the high-quality entries from people who aren’t established researchers reminds him of the garage-tinkering origins of Apple, Microsoft and other tech giants. “It would have taken them two years of studying and joining a lab to get to the point where they can get started. Here they can do it over a weekend.” He imagines a future in which freelance protein designers vie for bounties set by companies, academic labs and others seeking a custom molecule.

Is This Evolution?

These contests are goal-directed with specific criteria, such as “looking for proteins capable of attaching to a growth hormone receptor called EGFR that is overactive in many cancers.” Another contest “tasked entrants with re-engineering an existing protein — a plant-virus enzyme used widely in protein purification — to make the molecule more efficient.”

Efforts at this kind of “directed evolution” have been around for a long time in labs. As Dembski explains in No Free Lunch and The Design Inference 2nd Ed, these “evolutionary algorithms” are not random searches comparable to natural selection, in which each mutation must pass the test of survival, but intelligently guided, goal-directed projects. In the contests described by Callaway, success for the contestants is judged by a sequence’s match to a foreordained goal: it must fold, and it must bind to a specified molecule. A contestant may attempt random searches in sequence space but has the intelligence to determine whether a sequence meets the criteria.4 Even if the contestant does not know in advance what approach will be successful, he or she can perform an intelligently guided “search for a search” as if looking through a pile of treasure maps to identify which is best for locating a treasure.
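To make the distinction concrete, here is a minimal toy sketch in Python (my own illustration, not anything from Callaway’s article or the contests; the amino-acid alphabet, the ten-residue “target,” and the scoring function are invented for the example). A blind search samples sequences with no knowledge of the goal, while the goal-directed search is allowed to score every candidate against the foreordained target, and that single difference is what makes it succeed.

```python
# Toy contrast between blind sampling and goal-directed selection.
# The target sequence and scoring rule are hypothetical, chosen for illustration.
import random

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"   # one-letter codes of the 20 canonical amino acids
TARGET = "MKTAYIAKQR"               # hypothetical ten-residue "winning" sequence

def score(seq):
    """Number of positions matching the foreordained target."""
    return sum(a == b for a, b in zip(seq, TARGET))

def blind_search(tries):
    """Draw random sequences with no knowledge of the goal; report the best score seen."""
    best = 0
    for _ in range(tries):
        candidate = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
        best = max(best, score(candidate))
    return best

def goal_directed_search():
    """Accept a mutation only if it does not lower the match to the known target."""
    seq = [random.choice(ALPHABET) for _ in range(len(TARGET))]
    steps = 0
    while score(seq) < len(TARGET):
        i = random.randrange(len(TARGET))
        old, old_score = seq[i], score(seq)
        seq[i] = random.choice(ALPHABET)
        if score(seq) < old_score:
            seq[i] = old            # the "judge" knows the goal and rejects the change
        steps += 1
    return steps

random.seed(0)
print("best blind score in 100,000 draws:", blind_search(100_000), "of", len(TARGET))
print("goal-directed steps to exact match:", goal_directed_search())
```

The blind search never comes close to a full match, while the goal-directed search reaches it in a few hundred steps, precisely because a foreordained target is doing the judging.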

It is misleading, therefore, to call a contest “Evolved 2024” or to name a new AI biology startup “EvolutionaryScale.” These have nothing to do with Darwinian evolution. This type of equivocation confuses the public. It resembles Darwin’s own blunder in comparing natural selection to artificial selection, a fallacy he continued all his life.5 

Intelligence Far Surpasses the Reach of Chance

The capabilities of intelligence over chance are profound. My father calculated that on average it would take chance 1,500 years (“If a person could draw and record one coin every five seconds day and night”) to arrange coins numbered one to ten in order—something an eight-year-old child could do in a few moments (p. 51). From there, he calculated how long it would take to expect success by chance at arranging the phrase “The Theory of Evolution” from a set containing lower- and uppercase letters and a space. The probability was 1 in 4.5 x 10^39. Envisioning a machine attempting this project that could perform a billion draws per second at the speed of light, he concluded that the time required to expect one success would be 28 trillion times the assumed age of the earth. Then he compared it to the capabilities of a child:

So chance requires twenty-eight trillion times the age of the earth to write merely the phrase: “The Theory of Evolution,” drawing from a set of small letters and capitals as described, drawing at the speed of light, a billion draws per second! Only once in that time could the letters be expected in proper order.

Again, a child can do this, using sight and intelligence, in a few minutes at most. Mind makes the difference in the two methods. Chance really “doesn’t have a chance” when compared with the intelligent purpose of even a child. 

If chance had to rely on earthquakes and wind to do the job, it would never happen.6 
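For readers who want to check the arithmetic, here is a short Python sketch of the calculation as I understand it (a rough reproduction: the 53-symbol set of lower- and uppercase letters plus a space and the billion draws per second come from the book’s description, while the earth age of roughly five billion years is my assumption, chosen because it lands near the book’s “28 trillion times” multiple).

```python
# Rough check of the phrase-drawing arithmetic; the assumed earth age is mine.
SYMBOLS = 26 + 26 + 1                 # lowercase, uppercase, and a space
PHRASE = "The Theory of Evolution"    # 23 characters, spaces included

trials_expected = SYMBOLS ** len(PHRASE)                    # expected draws for one success
print(f"odds of one success: 1 in {trials_expected:.2e}")   # about 4.5 x 10^39

draws_per_second = 1e9                             # a billion draws per second
years = trials_expected / draws_per_second / (365.25 * 24 * 3600)
earth_age = 5e9                                    # assumed age of the earth, in years
print(f"expected waiting time: {years:.2e} years, "
      f"about {years / earth_age:.1e} times the assumed age of the earth")
```

The multiple comes out in the high tens of trillions, in the same ballpark as the book’s figure.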

While we can hope for revolutionary insights from the contests to find new proteins, they will come about by intelligent design, not by evolution.

Notes

1. Coppedge, James F., Evolution: Possible or Impossible? (Zondervan, 1973). This book was one of the few pre-ID Movement publications to use the phrase “intelligent design.” After eight printings, his popular and influential book went out of print, but he self-published it through 2002. I have the remaining stock of copies for those interested. A digitized version is available at this link: http://crev.epoi
2. Dembski, The Design Revolution (2004), ch. 9-10; The Design Inference (2nd ed., 2023). Meyer, Signature in the Cell (2009), ch. 8-10.
3. To make exploration of sequence space somewhat tractable, one must assume the use of only the canonical amino acids and that they were already left-handed and joined solely by peptide bonds at the proper linkages. Chance, of course, wouldn’t care about those details.
4. Success depends on context. One of the longest meaningful alphabet sequences my father detected was “AGMCAP”—an imaginative stretch, but potentially useful in some contexts (p. 104). Protein sequences are even more demanding, since they must fold and perform a useful function in three dimensions within a cell.
5. Robert Shedinger, Darwin’s Bluff (2024), pp. 71-78, 171-172, 199-200.
6. This is not an exaggerated claim. Dr. A. E. Wilder-Smith debunked the old Huxley analogy of a million monkeys typing Shakespeare given enough time with the observation that biochemical reactions are reversible. The monkey-typewriter analogy depends on assuming that the letters stay on the page. If they fall off soon after they are typed, a Shakespeare sonnet will never emerge. In biochemistry, peptide bonds fall apart in water. A growing random chain, therefore, would not survive for long in the best of real-world conditions, nor would any progress in the meaningful alphabet string survive the next quake or gust of wind.


Sunday, 17 November 2024

Your tax dollars at work.

 

On the designed intelligence of the fruit fly.

 Design, Engineering, Specified Complexity: Appreciating the Fruit Fly Brain


Groundbreaking new research has documented the complexity and design of the brains of fruit flies (Drosophila melanogaster). Many of the results were published in a series of papers in the journal Nature. The basis for the research is the completion of the entire wiring diagram (called a connectome) of the fruit fly brain, which consists of 140,000 neurons.1 In addition, it includes more than 50 million connections (chemical synapses).2 Keep in mind that, despite the number of neurons and connections, fruit fly brains are tiny, smaller than a poppy seed. Previously, researchers had mapped the brains of a few other organisms, including the roundworm C. elegans, whose brain consists of only 302 neurons.

Most of the work was conducted by a group of researchers called the FlyWire consortium. The completion of the project and ongoing research is expected to result in a revolution in neuroscience. Previously it was believed that brains with hundreds of thousands of neurons were too large to map and assess function in much detail. But the results are a first step toward being able to do so, and potentially toward mapping at least segments of larger brains (including humans with more than 80 billion neurons and 100 trillion connections). The research has already revealed a number of important, and in some cases, surprising findings. 

Neuron Types

The research has identified at least 8,453 neuronal cell types.3 A neuronal cell type is a group of neurons with similar morphology and connectivity. This compares with the worm C. elegans, which has 118 cell types.4 The research also identified different classes of neurons, depending upon their function. Examples include sensory neurons (labeled afferent) that send signals from sensory organs to the brain. Motor and endocrine neurons (labeled efferent) send signals from the brain to muscles and other organs.5

Previously, some theorized that brain neurons might be like “snowflakes,” that is, each one is unique. That would imply their development and connections are essentially a random process. However, the research confirms that is generally not the case. There is some evidence of randomness, as one analysis shows that, “Over 50% of the connectome graph is a snowflake. Of course, these non-reproducible edges [connections] are mostly weak.”6 The analysis does show that, “Neurons occasionally do something unexpected (take a different route or make an extra branch on one side of the brain). We hypothesize that such stochastic differences are unnoticed variability present in most brains…In conclusion, we have not collected a snowflake.”7 This means that the stronger connections are largely stereotyped and do not vary significantly in a random manner. At the same time, the findings show convincingly that neither is the brain structure a regular lattice, as in crystals.

Complexity

Fruit flies exhibit a number of complex behaviors, including flight control (hovering, rapid changes in direction), navigation, mating courtship using pheromones, and swarming. Therefore, it isn’t that surprising that their brains show complexity. The average fruit fly neuron connection consists of 12.6 synapses.8 Individual neurons typically have fewer than 10 connections, but some have more than 100, and a few even have 1,000.9 This means that there isn’t a uniform distribution of neurons or a uniform distribution of connections. The research has even been able to map the flow of information throughout the brain. The fruit fly brain consists of areas of specialized functions. These include areas for processing visual, olfactory, auditory, mechanical, and temperature information. A further indication of specialized functions is the report of one research project that analyzed 78 anatomically distinct “subnetworks” in the brain.10 This same analysis concluded, “The local structure of the brain displays a high degree of non-randomness, consistent with previous studies in C. elegans and in the mouse cortex.”11

The overall structure of the brain is consistent among fruit flies, based on the finding of “[a] high degree of stereotypy at every level; neuron counts are highly consistent between brains, as are connections above a certain weight.”12 This is consistent with previous research with different insect brains.13

Another finding from the research is that the fruit fly brain exhibits the characteristics of what is called a “small-world network,” where the “nodes are highly clustered and path lengths are short.”14 Other examples of small-world networks are power grids, train routes, and electronic circuits. The brain of C. elegans was the first example identified of a small-world neural network. Characteristics of small-world networks include “enhanced signal-propagation, computational power, and synchronizability.”15 The key benefit for brain function is that it provides “highly effective global communication among neurons.”16
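To see what those two properties mean in practice, here is a small illustrative sketch in Python using the networkx library (the toy graph sizes are my own choices, not the FlyWire data): a Watts-Strogatz “small-world” graph keeps the high clustering of a lattice while its average path length falls close to that of a random graph with the same number of edges.

```python
# Compare the two standard small-world metrics on toy graphs (not fly-brain data).
import networkx as nx

def metrics(g):
    """Average clustering plus mean shortest path on the largest connected component."""
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    return nx.average_clustering(g), nx.average_shortest_path_length(giant)

n, k, p = 1000, 10, 0.1                              # toy sizes chosen for illustration
small_world = nx.watts_strogatz_graph(n, k, p, seed=1)
random_graph = nx.gnm_random_graph(n, small_world.number_of_edges(), seed=1)

for name, g in [("small-world", small_world), ("random", random_graph)]:
    clustering, path_length = metrics(g)
    print(f"{name:12s} clustering={clustering:.3f}  avg path length={path_length:.2f}")
```

The small-world graph shows clustering dozens of times higher than the random graph while its average path length stays nearly as short, which is the combination the connectome papers report for the fly brain.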

Overall, the research shows that the fruit fly brain has a high degree of complexity, but more importantly, much of it is specified complexity. This includes the engineering design of the various specialized neural networks and subnetworks. Some of the engineering design principles evident in aspects of the brain include optimization, efficiency, and coherence. And as complex as the research already shows the brain to be, it is likely even more complex than it currently appears, since the electrical connections have yet to be fully mapped in the way the chemical connections have been.

Saturday, 16 November 2024

Rallying to the logic of design.

 Postcard from Venice: First Pan-European Conference on Intelligent Design


Recently I had the great privilege and honor to attend a remarkable event in the beautiful and historic city of Venice, Italy. It was the first pan-European conference on intelligent design theory, organized by the Centro Italiano Intelligent Design (CIID), in collaboration with the foundation En Arche (Poland), BioCosmos (Norway), Centre for Intelligent Design (UK), Zentrum für BioKomplexität & NaturTeleologie (Austria), and Discovery Institute (USA). The conference was titled “Cosmos, Life, Intelligence, Information” and it was held at the prestigious and absolutely stunning venue of the Ateneo Veneto, which is the oldest cultural institute still operative in Venice. The institute is dedicated to the spreading of science, education, and art; it was officially founded in 1812 but originally dates back as far as 1458. It is situated in the historic center of Venice in a building from the early 1500s. The event was not advertised in advance and only included about 60 invited guests, to avoid any possible intervention by the Darwinist thought police, whose zealous activists had already prevented several such conferences at prestigious venues in the past.

The speakers came from all over Europe and America and addressed very different topics related to the question of intelligent design. After an introduction by the president of CIID, Carlo Alberto Cossano, the German physicist Professor Alfred Krabbe talked about “Fine-tuning in the universe,” which surprised me with some striking examples of fine-tuning in physics and astronomy that I had never heard of before. Professor Ferdinando Catalano elaborated on the strange relation between mathematics and physics in his talk “But does light ‘reflect’?”, and his Italian compatriot Professor Alessandro Giorgetti emphasized the extreme unlikelihood of the emergence of life from inanimate matter in his lecture about the “Origins of life and exobiology.”

Discontinuities in the Fossil Record

Next, I presented a talk about the “Scientific Challenges to Neo-Darwinism,” based on the discontinuities in the fossil record, the waiting time problem, the species pair challenge, and the incongruence of different lines of evidence in phylogenetics and molecular clock studies. Professor Steinar Thorvaldsen, an information scientist from Norway, talked about “Measuring the information in genes and DNA,” and Polish biologist Professor Stanisław Karpiński asked “Is the theory of evolution coherent or fragmentary?”, presenting fascinating new discoveries about communication and information processing in plants. British physician Dr. David Galloway introduced “The engineering of oxygen delivery in the newborn human” as another case of irreducibly complex systems. Last but not least, Dr. Casey Luskin from Discovery Institute gave an “Update on avenues of ID inspired research,” which showed the remarkable progress of intelligent design in the past years.

A Concluding Debate

The event concluded with a panel debate between theistic evolutionist Dr. Erkki Vesa Rope Kojonen (Finland) and ID proponent Casey Luskin about the compatibility of evolution and design. Both speakers are Christian theists, who agree that there is evidence for design in nature that cannot be sufficiently explained by blind forces of chance and necessity, but they differ in their views as to how and when the input of intelligent design happened. Rope Kojonen thinks that it was only at the very beginning of the universe, through a fine-tuning of the laws of nature and the initial conditions, while the development of life happened by mere Darwinian processes in this fine-tuned fitness landscape. On the other hand, Casey Luskin made a strong case for the necessity of ongoing activity of an intelligent designer during the history of life to explain complex adaptations and new proteins. While Rope Kojonen relied more on philosophical and theological arguments, Casey Luskin focused on the empirical scientific evidence and an inference to the best explanation, which in his and my humble opinion clearly favors intelligent design theory over theistic evolution. Nevertheless, it was very encouraging to see how such an exchange of different views can happen in a very respectful, charitable, and kind manner, very much unlike the aggressive attitude of many vocal ID critics on the Internet. After a discussion and Q&A session, the event ended with a wonderful dinner in an inspiring atmosphere of camaraderie and friendship.


All the talks were professionally recorded and will be made available on YouTube soon, and there are plans to publish English abstracts of the talks.

CIID should be congratulated for the excellent organization of this conference, which I hope will mark the beginning of more regular events like this in Europe to foster interdisciplinary exchange and advance the field of intelligent design research.

Quantity is trumping quality in science?

 

He shot at the king and didn't miss.

 

Friday, 15 November 2024

On the preservation of natural history.

 Fossil Friday: New Research on How Delicate Soft-Bodied Organisms Can Be Perfectly Preserved


This Fossil Friday features the Cambrian arthropod Waptia fieldensis from the famous Burgess Shale. However, today we will not look into a particular fossil or group of organisms, but into the exceptional mode of fossil preservation of some of the oldest known animals from the Cambrian and the recently changed interpretation of how these fossil layers were formed. Paleontologists have generally assumed and postulated that perfect and complete preservation, especially of delicate soft-bodied organisms, suggests a gentle deposition in situ without significant transport that would certainly damage these fragile bodies. This view has been challenged by experimental studies that showed such organisms can remain entirely intact even when transported more than 20 km in turbulent sediment flows (Bath Enright et al. 2017). But how does this apply to real world fossil localities, especially the crucial sources for exquisitely preserved fossils of the first animals from the Cambrian Explosion? Two new studies have revised my views on two key localities, i.e., the Burgess Shale and the Emu Bay Shale.

Burgess Shale

The Burgess Shale, a world-renowned fossil site in the Canadian Rockies, provides one of the most complete windows into the Cambrian Explosion, a period about 508 million years ago when a remarkable diversity of complex life forms first appeared in the fossil record. Discovered in 1909 by paleontologist Charles Doolittle Walcott, the Burgess Shale is exceptional not only for its abundance of fossils but also for the extraordinary preservation of soft-bodied organisms, which are typically absent from the fossil record. This preservation includes fine details of tissues and appendages, capturing intricate anatomical features that illuminate the early history of animals. Scientific explanations for this unique preservation focus on taphonomy, the processes that affected these organisms from death to their fossilization, emphasizing the role of rapid burial and anoxic conditions.

According to the prevailing taphonomic model, the organisms in the Burgess Shale were buried quickly by underwater mudslides or turbidites, which were common in the deep marine environments where these creatures lived. These mudslides would have buried the organisms in a fine-grained, clay-rich matrix, isolating them from scavengers and decay. Furthermore, the water column above the burial site was likely low in oxygen, creating anoxic or dysoxic conditions that inhibited bacterial decomposition. This lack of oxygen, combined with rapid burial, allowed the soft tissues of these animals to be preserved in exquisite detail. Over time, mineral replacement of organic materials took place, particularly through carbon films that retained fine anatomical features. In some cases, other mineral replacements occurred, stabilizing the structures long enough for them to fossilize.

Further research emphasized the precise geochemical and sedimentological conditions that allowed for this unique preservation. Studies on clay mineralogy and trace metal concentrations in the Burgess Shale suggested that specific chemical interactions in the sediment helped to inhibit decay, possibly by creating an environment toxic to decay microbes. As a result, the Burgess Shale represents not only a key snapshot of Cambrian life but also an extraordinary example of the role that taphonomic processes play in determining what we see in the fossil record.

Thus, even the traditional view considered the Burgess Shale fossil assemblage as caused by catastrophic rapid burial. However, according to Bath Enright et al. (2017), “the exceptional preservation of organisms within the deposits has been used to argue that transport of these animals must have been minimal,” which those authors doubted based on their experiments. In a more recent follow-up study (Bath Enright et al. 2021), the same authors tested this with flume experiments to create analog flows and showed that transport of polychaete worms over tens of kilometers does not induce significant damage. They concluded “that the organisms of the Burgess Shale in the classic Walcott Quarry locality could have undergone substantial transport and may represent a conflation of more than one community.” Co-author Dr. Nic Minter commented in the press release by the University of Portsmouth (2021) that “this finding might surprise scientists or lead to them striking a more cautionary tone in how they interpret early marine ecosystems from half a billion years ago.” It goes without saying that this result also has important implications for our understanding of the over 40 known localities with Burgess-Shale-Type (BST) preservation.

Emu Bay Shale

Another such BST locality is the Emu Bay Shale, located on Kangaroo Island in South Australia. It represents one of the most significant Cambrian fossil sites outside North America, providing valuable insights into the Cambrian Explosion, especially regarding arthropod diversity. Like the Burgess Shale, the Emu Bay Shale is remarkable for its exceptional preservation of soft tissues in fossils, including eyes, digestive tracts, and delicate appendages, which offer a detailed view of early animal anatomy. Dating to around 514 million years ago, it preserves a diverse array of Cambrian life forms, particularly trilobites and anomalocaridids, which are preserved with high fidelity, showing fine structures not typically fossilized.

Scientific views on the taphonomy of the Emu Bay Shale attributed its preservation quality to rapid burial and the local environmental conditions. Similar to the Burgess Shale, researchers suggested that the fossils were entombed quickly in fine-grained sediment, likely during submarine mudflows that swept organisms into deeper, oxygen-poor waters. Anoxic conditions in the burial environment would have slowed bacterial decay and minimized disruption by scavengers, while fine sediment encasement shielded delicate structures from mechanical breakdown. This unique combination of rapid burial and anoxia, possibly supplemented by specific chemical interactions in the sediment, allowed the Emu Bay Shale to capture fine anatomical details, adding a vital piece to our understanding of Cambrian ecosystems.

According to a brand new study by Gaines et al. (2024), published in the journal Science Advances, the Emu Bay Shale has to be newly interpreted. The authors document evidence for downslope mass transport of soft-bodied organisms in “density-driven sediment gravity flows” caused by “episodic high-energy events.” The press release explains that the sediments “were catastrophically deposited into the ocean by debris flows,” which is “not where you would expect to see delicate, soft-bodied creatures preserved” (Gaines quoted in NSF 2024). The authors concluded that most taxa of the more than 25,000 fossils were transported and thus not buried in situ, which explains why “before these findings, the research community debated whether the Emu Bay Shale represented a shallow or deep environment” (NSF 2024).

Perfect Fossil Preservation Does Not Exclude Long Transport 

What makes the revised understanding of the taphonomy of these two key Cambrian localities very interesting is that the perfect preservation of the fossils from these localities is now considered to be consistent with long transport in rough and turbulent sediment flows. Of course, this does not just apply to the Burgess Shale and Emu Bay Shale localities but can be extrapolated to numerous other “Konservat-Lagerstätten” with well-preserved marine and terrestrial fossils around the globe, such as the Devonian Hunsrück Shale in Germany and the Cretaceous Jehol biota in China (Bath Enright et al. 2017). A good example is the new study by O’Connell et al. (2024) about the terminal Ediacaran Nama biota, which showed that soft-bodied and biomineralizing organisms were transported in sediment gravity flows induced by storms and other events. The authors found that “nearly all soft-bodied and biomineralizing organisms preserved in the studied units were transported prior to final burial.” The authors also mention that “other work suggests that turbulent and transitional flows can transport soft-bodied organisms great distances with little damage (ca 20 km; Bath Enright et al., 2017, 2021).”

Evolution is Neither a Fact nor Knowledge

These new interpretations show how quickly yesterday’s scientific textbook wisdom may be refuted as obsolete misinterpretation. In the strict sense of the notion of “knowledge” we do not know anything with certainty about the distant past. All we have is an ever-changing set of very preliminary and often weakly supported conjectures, combined with wild speculations and fancy storytelling, that more often than not later turn out to have been plausible but false. The famous philosopher of science Karl Popper cherished this procedure of “conjectures and refutations” as the very core of the scientific method. However, there is a fundamental difference between repeatable and observable law-like processes that can be mathematically modelled and empirically tested, and singular events in the past that can only be probabilistically inferred based on circumstantial evidence and certain assumptions. Earth history, paleobiology, and evolutionary biology are all historical “soft” sciences that cannot be considered as on an equal footing with experimental “hard” sciences like physics, chemistry, genetics, or physiology. Only the latter sciences provide us with all the benefits of modern medicine and technology. The historical sciences are basically ivory tower musings of hardly any practical value and dubious scientific status. Therefore, I consider the famous dictum of evolutionary biologist Theodosius Dobzhansky — that “nothing in biology makes sense except in the light of evolution” — as one of the biggest myths and blunders in modern science. On the contrary, all the just-so-stories of macroevolution are completely dispensable in all of real (experimental) biology. I would even suggest that “not much in biology makes sense except in the light of design,” which is why design language is so ubiquitous and indispensable even in the mainstream biosciences.

References

Bath Enright OG, Minter NJ & Sumner EJ 2017. Palaeoecological implications of the preservation potential of soft-bodied organisms in sediment-density flows: testing turbulent waters. Royal Society Open Science 4(6): 170212. DOI: https://doi.org/10.1098/rsos.170212
Bath Enright OG, Minter NJ, Sumner EJ, Mángano MG & Buatois LA 2021. Flume experiments reveal flows in the Burgess Shale can sample and transport organisms across substantial distances. Communications Earth & Environment 2: 104, 1–6. DOI: https://doi.org/10.1038/s43247-021-00176-w
Gaines RR, García-Bellido DC, Jago JB, Myrow PM & Paterson JR 2024. The Emu Bay Shale: A unique early Cambrian Lagerstätte from a tectonically active basin, Science Advances 10(30): eadp2650, 1–9. DOI: https://doi.org/10.1126/sciadv.adp2650
NSF (National Science Foundation) 2024. A remarkable fossil assemblage gets a new interpretation. Phys.org October 30, 2024. https://phys.org/news/2024-10-remarkable-fossil-assemblage.html
O’Connell B, McMahon WJ, Nduutepo A, Pokolo P, Mocke H, McMahon S, Boddy CE & Liu AG 2024. Transport of ‘Nama’-type biota in sediment gravity and combined flows: Implications for terminal Ediacaran palaeoecology. Sedimentology early view, 1–43. DOI: https://doi.org/10.1111/sed.13239
University of Portsmouth 2021. Fossil secret may shed light on the diversity of Earth’s first animals. Phys.org June 2, 2021. https://phys.org/news/2021-06-fossil-secret-diversity-earth-animals.html

Saturday, 9 November 2024

Toward a more balanced look at NRMs

 

Yet more secular mysticism?

 Fossil Friday: An Ediacaran Animal with a Question Mark


This Fossil Friday discusses Quaestio simpsonorum from the Late Precambrian of the Ediacaran biota in Australia, which is, well, actually I have no idea what it really is, and neither does anyone else, which makes its genus name very fitting indeed. Here is the backstory of these fossils, which were discovered in the 555-million-year-old sandstones of Nilpena Ediacara National Park in the South Australian outback and were reconstructed as inflated disc-shaped organisms floating over microbial mats on the ancient seafloor like a Roomba.

Just a few days ago the study by Evans et al. (2024) with the description of this fossil organism hit the news with sensationalist headlines like “Ancient ‘sea Roomba’ tells a 555-million-year-old story of our evolution” (Thompson 2024), or “Flinders fossil unlocks secrets of first animals on Earth” (Government of South Australia 2024), or “Florida State University scientist discovers one of the earth’s earliest animals in Australian outback” (Harris 2024), or “Enigmatic Fossil Shows Signs Of Being Earth’s First Animal” (Bressan 2024). It was boldly celebrated as the “oldest evidence for complex, macroscopic animals” (de Lazaro 2024) and “the earliest moving animals” (Luntz 2024). Wow, that surely sounds like something important.

Is It Really Based on Solid Evidence? 

A first look at the images of the fossil is not very encouraging: The fossils look like structureless blobs, and many fossil collectors might not even have bothered to pick them up. Surely the actual study showed much more significant details? No, not at all, which is a real bummer. Even co-author García-Bellido explicitly admitted to IFLScience “that all we really know about Quaestio is the shape of its outsides” (Luntz 2024). Yes, you heard that right. All we know about this fossil is the shape, which is nothing more than a few-inch-large round impression with a question-mark-like fold in the middle that originates from a kind of notch. Are any organs visible that suggest that it was a multicellular animal? No. Any bilateral symmetry? No, but this does not prevent the scientists from speculating that in spite of the external asymmetry, it might have been a pioneer bilaterian ancestor, because humans are bilaterian animals and internally asymmetrical (authors quoted in de Lazaro 2024). You can’t make this stuff up: They seriously compare a Precambrian blob of jello with a highly derived modern human and claim that external asymmetry in the former and internal asymmetry in the latter could somehow correspond, even though the internal asymmetry of humans does not belong to the ground plan of vertebrate animals even according to mainstream evolutionary biology. This is ridiculous junk science, based on almost useless fossil evidence. Actually, there are even inorganic pseudofossils like salt pseudomorphs that look quite similar to this stuff. All the elaborate hypotheses in the new study are based on the simple circumstance that the structures in the stone seem to show some polarity. Here is news: almost every organism shows some polarity, including most protists and plants. This is much ado about nothing.

What about the alleged evidence for motility? Are there any trace fossils that really document active motility? No, but again the scientists claim otherwise. Why? Because a few of the fossils have a similarly shaped and similarly sized impression close to them, which they interpret as evidence for active movement. However, such structures had already been described under the name Epibaion for the Ediacaran dickinsoniids and are highly controversial in their interpretation, as I discussed in a previous article (Bechly 2018). I highly recommend reading the paragraph on these alleged trace fossils in that article of mine. While some experts indeed interpreted those structures as grazing traces, others considered the serial impressions as made by dead organisms displaced by slow currents before finally being buried. I personally observed the latter phenomenon in fossil dragonflies from the Upper Jurassic Solnhofen limestone (see Tischlinger 2001). The alleged traces show no continuity and thus no evidence for motility. But who am I, or world-leading experts like A. Yu Ivantsov (also see Brasier & Antcliffe 2008 and McIlroy et al. 2009), to disagree with the views of some evolutionary biology graduate student who thinks that this is “a clear sign that the organism was motile” (Bressan 2024, Harris 2024)? What makes things worse is the whole house of cards of far-reaching hypotheses that are built on this dubious foundation. The authors for example speculate that “the presence of muscles and/or a nervous system based on inferred behaviors would, if verified, constitute further evidence of more advanced differentiation” (Evans et al. 2024). Problem is: they are not verified. There is not a shred of evidence for muscles or nervous systems in any of the fossils! There is not even valid evidence for the inferred behaviors from which the presence of muscles and nervous system was inferred. It is quite revealing for the poor state of evolutionary biology that such imaginative story-telling is not only allowed but apparently welcome in a peer-reviewed science journal titled Evolution & Development.

An “Animal” with a Question Mark

In short: There is neither any convincing evidence for a metazoan affinity of Quaestio, nor for its motility. It is truly an Ediacaran “animal” with a question mark! The much more obvious conclusion is that Quaestio is just another problematic organism of the Ediacaran biota that cannot be connected to any living group. Actually, the scientists themselves did not suggest a direct relationship with any living animals but rather compared Quaestio with dickinsoniids, which are of highly questionable animal relationship themselves (Bechly 2018). Sure, Quaestio and dickinsoniids still could be placozoan or coelenterate grade animals, or xenacoelomorph flatworms, even though none of them agrees in size, shape, symmetry or anatomy, or any relevant diagnostic similarities. Thus, they could as well be giant protists (Vendobionta sensu Seilacher), or rather an independent extinct group of multicellular organisms, or almost anything else such as fungi or lichens. There are also similarities between Quaestio and the trilobozoan Ediacaran fossils like Tribrachidium that were initially misidentified as echinoderms, or to other circular Ediacaran fossils like Cyclomedusa (featured above) that were initially misidentified as jellyfish, but later reinterpreted as holdfasts or microbial colonies. We have no clue what all these Ediacaran biota organisms really were. To claim that such undefinable blobs in sandstone represent fossils of the oldest motile animals is massively overselling the evidence, to say the least.

References

Bechly G 2018. Why Dickinsonia Was Most Probably Not an Ediacaran Animal. Evolution News September 27, 2018. https://evolutionnews.org/2018/09/why-dickinsonia-was-most-probably-not-an-ediacaran-animal/
Brasier M & Antcliffe J 2008. Dickinsonia from Ediacara: a new look at morphology and body construction. Palaeogeography, Palaeoclimatology, Palaeoecology 270, 311–323. DOI: https://doi.org/10.1016/j.palaeo.2008.07.018
Bressan D 2024. Enigmatic Fossil Shows Signs Of Being Earth’s First Animal. Forbes October 19, 2024. https://www.forbes.com/sites/davidbressan/2024/10/18/enigmatic-fossil-shows-first-signs-of-being-earths-first-animal/
de Lazaro E 2024. New Species of Complex Ediacaran Animal Discovered in Australia. SciNews October 17, 2024. https://www.sci.news/paleontology/quaestio-simpsonorum-13355.html
Evans SD, Hughes IV, Hughes EB, Dzaugis PW, Dzaugis MP, Gehling JG, García-Bellido DC & Droser ML 2024. A new motile animal with implications for the evolution of axial polarity from the Ediacaran of South Australia. Evolution & Development e12491, 1–11. DOI: https://doi.org/10.1111/ede.12491
Government of South Australia 2024. Flinders fossil unlocks secrets of first animals on Earth. Environment SA News October 14, 2024. https://www.environment.sa.gov.au/news-hub/news/articles/2024/10/flinders-fossil-unlocks-secrets-of-first-animals-on-earth
Harris M 2024. Florida State University scientist discovers one of the earth’s earliest animals in Australian outback. Florida State University October 14, 2024. https://news.fsu.edu/news/university-news/2024/10/14/florida-state-university-scientist-discovers-one-of-the-earths-earliest-animals-in-australian-outback/
Luntz S 2024. One Of The Earliest Moving Animals Had A Very Quizzical Shape. IFLScience October 22, 2024. https://www.iflscience.com/one-of-the-earliest-moving-animals-had-a-very-quizzical-shape-76460
McIlroy D, Brasier MD & Lang AS 2009. Smothering of microbial mats by macrobiota: implications for the Ediacara biota. Journal of the Geological Society 166, 1117–1121. DOI: https://doi.org/10.1144/0016-76492009-073
Thompson B 2024. Ancient ‘sea Roomba’ tells a 555-million-year-old story of our evolution. New Atlas October 14, 2024. https://newatlas.com/biology/fossil-quaestio-evolution/
Tischlinger H 2001. Bemerkungen zur Insekten-Taphonomie der Solnhofener Plattenkalke. Archaeopteryx 19, 29–44.

Saturday, 2 November 2024

More iconoclasm from the fossil record.

Fossil Friday: New Fossil Evidence Challenges Another Icon of Evolution


This Fossil Friday features the skull of Cynognathus crateronotus, a mammal-like reptile from the Middle Triassic of the southern hemisphere landmasses that had formed the ancient supercontinent Gondwana. It belongs to a group called cynodonts. A recent analysis of the jaw anatomy of fossil cynodonts from South America has challenged some longstanding evolutionary ideas.

When evolutionists are asked what in their view represents the best evidence for the Darwinian story of common descent with modification, they will generally refer to the fossil record and especially to supposed transitional series like those of horses, elephants, whales, hominins, fishapods to tetrapods, dinos to birds, and most of all the transition from reptiles to mammals. The latter allegedly shows an unambiguous transformation of the jaw articulation from a primitive reptilian state to the derived mammalian condition, correlated with a reduction of bones and an incorporation of the original jaw articulation into the mammalian ear as auditory ossicles (Reichert-Gaupp theory).

A More Complicated Picture

However, a closer look at the actual fossil evidence shows a much more complicated picture that involves multiple independent origins of anatomical similarities. In a seminal study on the evolution of the mammalian middle ear, the authors admitted that “current hypotheses on the convergent evolution of middle ear bones are complex and controversial, partly because of a lack of phylogenetic resolution and partly because the interpretation of the fossil evidence is difficult” (Ramírez-Chaves et al. 2016). They concluded that “the departure of postdentary bones from the dentary to form a partial mammalian middle ear (PMME); … occurred convergently in the northern hemisphere ancestors of therians and the southern hemisphere ancestors of monotremes … the transition from a PMME to a definite mammalian middle ear (DMME) ocurred [sic] multiple times, including at least three cases of independent evolution within extant mammals (in monotremes, metatherians and eutherians).”

Now, a new study complicated this scenario even more: The scientists studied the well-preserved fossil remains of three key species of probainognathian cynodonts, viz. Brasilodon quadrangularis and Riograndia guaibensis from the Late Triassic of Brazil, as well as Oligokyphus major from the Early Jurassic of Great Britain. They used CT scanning to digitally reconstruct the jaw joint of these animals and found something very unexpected and surprising (Luo 2024). The jaw joint anatomy of the two Brazilian species was very different, with the joint of Riograndia being more mammal-like than that of Brasilodon, even though the latter genus is considered more closely related to modern mammals. Furthermore, Riograndia was dated to be about 17 million years older than any other previously known mammal-like reptile with such an advanced jaw articulation. The authors concluded that “the dentary-squamosal contact, which is traditionally considered to be a typical mammalian feature, therefore evolved more than once and is more evolutionary labile than previously considered.”

Interesting News for a Departed Colleague

The press release unashamedly speaks about “rewriting our understanding of mammal evolution” (News Staff 2024), and elaborates that:

This indicates that the defining mammalian jaw feature evolved multiple times in different groups of cynodonts, earlier than expected. The findings suggest that mammalian ancestors experimented with different jaw functions, leading to the evolution of mammalian traits independently in various lineages. The early evolution of mammals, it turns out, was far more complex and varied than previously understood.

The lead author of the new study, Dr. James Rawson from the University of Bristol, said (quoted in News Staff 2024):

What these new Brazilian fossils have shown is that different cynodont groups were experimenting with various jaw joint types, and that some features once considered uniquely mammalian evolved numerous times in other lineages as well.

Dr. Zhe-Xi Luo, one of the world’s leading experts on mammalian origins and not involved in the new study, commented that this is “a jaw-dropping discovery about early mammals” (Luo 2024). It certainly is, and it definitely looks like we are witnessing the beginning of the dismantling of yet another icon of evolution, which would have been very interesting news to my recently deceased friend and colleague Jonathan Wells, who had described many such cases in his ground-breaking books.

References

News Staff 2024. New Cynodont Fossil Discoveries are Rewriting Our Understanding of Mammal Evolution. SciNews September 25, 2024. https://www.sci.news/paleontology/brazil-cynodonts-13286.html
Luo Z-X 2024. A jaw-dropping discovery about early mammals. Nature 634, 305–306. DOI: https://doi.org/10.1038/d41586-024-03038-5
Ramírez-Chaves HE, Weisbecker V, Wroe S et al. 2016. Resolving the evolution of the mammalian middle ear using Bayesian inference. Frontiers in Zoology 13: 39, 1–10. DOI: https://doi.org/10.1186/s12983-016-0171-z
Rawson JRG, Martinelli AG, Gill PG, Soares MB, Schultz CL & Rayfield EJ 2024. Brazilian fossils reveal homoplasy in the oldest mammalian jaw joint. Nature 634, 381–388. DOI: https://doi.org/10.1038/s41586-024-07971-3

Thursday, 31 October 2024

On reverse engineering JEHOVAH'S tech

 Studying Biology with System Engineering Principles


In the IEEE Open Journal of Systems Engineering, I recently co-authored a paper with Dr. Gerald Fudge at Texas A&M on the intersection of biology and engineering. Our paper does two things: 1) It lays out a methodology based on systems engineering for biologists. 2) It illustrates the usefulness of the methodology with a case study of glycolysis. 

The project was inspired a couple of years back when I read Uri Alon’s An Introduction to Systems Biology, which made me realize that biologists could benefit from the same engineering approaches used to build iPhones. These approaches could lead to uncovering the intricate designs in life. 

As a biologist, I’ve often wondered what the best way is to integrate engineering ideas in biology research. While there are many methods, one way engineering can assist the everyday biologist is in providing a robust methodology for approaching the study of living systems. A great illustration is the paper, “Can a Biologist Fix a Radio?” The punchline is that a handyman can fix a radio, but a biologist probably can’t — and this has nothing to do with IQ but everything to do with methodology. (Lazebnik 2002)

Current practice in biology does not involve a formal methodology for reverse engineering a system. Instead, biologists are taught the scientific method, which is very useful for rigorously testing hypotheses, along with a reductionistic, bottom-up process of interrogation. Different from these is a methodology that helps one understand and interrogate a complex system as a whole. Having identified this problem, Dr. Fudge, a long-time engineer, and I teamed up to work on integrating the proven systems engineering methodology to enhance discovery in living organisms.

Proven in What Way?

I used the word “proven” because systems engineering has built amazing technology, from rockets to iPhones. It has a track record of being able to develop complex systems. The standard systems engineering process goes something like this. Engineers meet with stakeholders and are given a rough outline of requirements (verbal or written details about what the product should do). This is eventually formalized into a set of specific requirements and then often modeled using a systems engineering tool. More specific models are then developed, from which a variety of refinements result. Then construction begins. Construction of the smaller parts happens first, followed by the assembly of subsystems. Throughout this build phase, testing is ongoing, and all is compared with the list of requirements and the initial systems model. Eventually a completed product is produced, meeting the stakeholders’ expectations. Or that is the goal, anyway.

Dr. Fudge and I adapted this methodology for biology. We call it model-based reverse systems engineering (MBRSE). “Model-based,” because it utilizes a system model as a map to keep track of relationships between objects and processes. “Reverse,” because the goal of biology is to understand and predict how organisms function. “Systems,” because this approach utilizes requirements and modeling to tie components into a system-level design, illustrating how the whole is more than the sum of its parts.

To Start with Literature Mining

Our approach, as in biology, begins with observations via literature mining. However, these observations are guided by classic systems engineering questions. Those include: (1) What requirements is this system meeting? (2) What are its interfaces? (3) What are the associated derived requirements? (4) What predictions can we make, whether at the system, sub-system, or component level, based on these derived requirements? From these observations, our methodology shifts quickly into a more traditional systems engineering approach, in which we infer requirements and build a system model (in our case we used OPCloud). Building a system model starts with qualitative conceptual modeling and can be followed by more specific computational modeling. Conceptual modeling, to my surprise, is highly accessible to biologists. It is more like creating a map than it is like quantitative modeling. Yet it serves as a critical foundation for quantitative modeling, since it sets relationships between objects and processes through a formal language. This also allows errors to be identified early. Once the system model and requirements are developed (a methodical process that often identifies key knowledge gaps), one can make predictions, test and validate them experimentally, and update the model and requirements based on the observed results. This is an iterative process whose goal is to develop a list of requirements and a systems model that accurately reflect the biological system or organism.
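As a purely illustrative sketch of that loop (the class names, the single requirement, and the toy “prediction” step below are all invented for this post; OPCloud itself is a graphical conceptual-modeling tool, so this is only a stand-in for the bookkeeping), the cycle of inferring requirements, tying them to a model, generating testable expectations, and folding results back in might look something like this in Python:

```python
# Schematic stand-in for the iterative MBRSE bookkeeping; names and the
# example requirement are invented for illustration, not taken from the paper.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    rid: str
    text: str
    status: str = "proposed"          # proposed -> supported or revised

@dataclass
class SystemModel:
    components: dict = field(default_factory=dict)   # name -> description
    links: list = field(default_factory=list)        # (source, process, target) triples
    requirements: list = field(default_factory=list)

    def predictions(self):
        """Each open requirement yields at least one testable expectation."""
        return [f"Test that {r.text}" for r in self.requirements if r.status == "proposed"]

    def record_result(self, rid, confirmed):
        """Fold an experimental outcome back into the model (the iterative step)."""
        for r in self.requirements:
            if r.rid == rid:
                r.status = "supported" if confirmed else "revised"

# One pass of the loop with an invented, glycolysis-flavoured requirement.
model = SystemModel()
model.components["glycolysis"] = "ten-step glucose catabolism subsystem"
model.links.append(("glucose", "is converted to", "pyruvate"))
model.requirements.append(
    Requirement("R1", "every pathway intermediate serves biomass or ATP production"))

print(model.predictions())        # what to take to the literature or the bench
model.record_result("R1", True)   # update the model once the evidence is in
print(model.requirements[0].status)
```

The point is not the code but the discipline: every component and link is tied to a requirement, and every requirement is either supported or revised by evidence rather than left floating.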

A Case Study of Glycolysis

In our paper, to illustrate the utility of our approach, we use glycolysis as a case study. Glycolysis is reasonably well understood and is familiar to many non-biologists since most high school biology courses teach the basics of this crucial metabolic pathway.

Similarities and Differences in Glycolysis by Systems Engineering 

Before we talk about similarities and differences in glycolysis across different types of organisms, it’s important to define a term: topology. Topology refers to the overall metabolic sequence — i.e., the ordering of the pathway steps that lead from, say, glucose to ATP and the intermediates that are produced along this pathway. It has been noted for glycolysis that among different types of organisms there are both remarkable similarities (for example, most organisms use one of two topological patterns for catabolism of glucose, commonly the EMP or ED topology) and remarkable differences (while the topology is conserved, the DNA sequences of the enzymes used in the pathway are not). (Rivas, Becerra, and Lazcano 2018) The high degree of similarity for the topology of the pathway across different organisms led many to assume that the uniformity resulted from common ancestry, and also to expect a common ancestry pattern for the genetic sequences of the enzymes. But this hypothesis overlooked system requirement-based reasons for topological similarity. As we write in our paper:

Traditionally, uniformity has been attributed as an artifact of common descent, meaning uniformity resulted from a historical relationship between all living organisms and does not have functional importance. However, in systems engineering, uniformity at a low level in a system design is often an optimized solution to upper-level requirements. We therefore propose that the striking similarity in the topology and metabolites of glycolysis across organisms is driven by a requirement for compatibility between organism energy interfaces, aiming to maximize efficiency at the ecosystem level.

Fudge and Reeves 2024

Ecosystem requirements shape the design of organisms, which in turn influence the requirements of metabolic design, ultimately constraining the structure of lower subsystems like glycolysis. This is because higher-level system needs determine the architecture of the subsystems below them. For glycolysis, a need for ecosystem efficiency and optimization of energy catabolism is a hypothesis with increasing evidentiary support that best explains the uniformity of the glycolytic topology. First, ecosystem efficiency requires some level of biomass commonality to maximize thermodynamic efficiency in reusing complex molecules by minimizing the amount of required biomolecule break-down and rebuild. This also helps minimize waste buildup, as shared waste products simplify the maintenance of ecosystem homeostasis. Second, the glycolytic pathway is recognized as optimized for a number of key metabolic constraints, further supporting its uniformity across species.

Ebenhöh and Heinrich [40] showed that the glycolysis architecture with a preparatory phase followed by a payoff phase is highly efficient based on kinetic and thermodynamic analysis. Similarly, Court et al. [41] discovered that the payoff phase has a maximally efficient throughput rate. In 2010, Noor et al. [42] demonstrated that the 10 step glycolytic pathway is minimally complex, with all glycolytic intermediates essential either for building biomass or for ATP production. In fact, it turns out that glycolysis is Pareto-optimized to maximize efficiency while serving multiple, often competing, purposes. Ng et al. [43] published their analysis in 2019 by analyzing over 10000 possible routes between glucose and pyruvate to show that the two primary glycolysis variant pathways are Pareto-optimized to balance ATP production against biomass production while simultaneously minimizing protein synthesis cost.

Fudge and Reeves 2024
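To unpack what “Pareto-optimized” means in this context, here is a toy Python illustration (the candidate pathways and their numbers are invented for the example, not data from Ng et al.): a design sits on the Pareto front only if no alternative is at least as good on every objective and strictly better on at least one.

```python
# Toy Pareto-front check; pathway names and scores are invented for illustration.
candidates = {
    # name: (ATP yield, biomass precursor output, protein synthesis cost) -- cost: lower is better
    "EMP-like":  (2.0, 6, 10.0),
    "ED-like":   (1.0, 6,  7.0),
    "variant A": (1.5, 4, 12.0),
    "variant B": (2.0, 5, 11.0),
}

def dominates(a, b):
    """True if pathway a is at least as good as b on every objective and better on one."""
    (atp_a, bio_a, cost_a), (atp_b, bio_b, cost_b) = a, b
    no_worse = atp_a >= atp_b and bio_a >= bio_b and cost_a <= cost_b
    better = atp_a > atp_b or bio_a > bio_b or cost_a < cost_b
    return no_worse and better

pareto = [n for n, v in candidates.items()
          if not any(dominates(w, v) for m, w in candidates.items() if m != n)]
print("Pareto-optimal pathways:", pareto)   # the EMP- and ED-like variants survive
```

In this toy setup the two dominated variants drop out, while the EMP-like and ED-like candidates remain because each wins on a different trade-off, which mirrors the finding that the two major glycolytic topologies balance competing objectives rather than maximizing any single one.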

In contrast, the differences in glycolytic enzyme or transporter sequences amongst organisms seem to be due to lower subsystem design requirements and constraints, which are expected to reflect more organism-specific differences. In our paper, we discuss the example of mammalian glucose transporters, which have 14 subtypes, only four of which are well characterized. (Thorens and Mueckler 2010) Of the four, each plays a unique role in system level glucose control within the mammalian system. Thus, differences in glucose transporters are explainable by their tissue-adapted roles. Similarly, differences between the glycolytic enzymes themselves are poorly correlated with ancestry and have led to complete dismissal of the previous assumption that the pathway had a single evolutionary origin. (Rivas, Becerra, and Lazcano 2018) Instead, evidence continues to accumulate that glycolytic enzyme differences between organisms play functional roles due to the unique subsystem environments in which they are placed.

The Warburg Effect and Cancer Research

Using our systems engineering approach, we also generated a hypothesis for the Warburg effect, a well-documented phenomenon in many cancer types. Briefly, the Warburg effect is the preferential use of glucose by cancer cells through upregulation (i.e., increased activity) of glycolysis even in the presence of oxygen. It is often regarded as a deleterious byproduct of cancer, but our paper proposes a new perspective. Our hypothesis is that the Warburg effect is a normal system response to local injury or to other temporary situations that require rapid tissue growth, such as certain early developmental stages. Cancer occurs when the signal to turn off rapid tissue growth fails; the downstream effect is a continued signal for upregulated glycolysis, hence the Warburg effect. From our paper:

Under certain (currently unknown) conditions, the feedback control loop for injury response can be broken, resulting in an under-controlled or completely uncontrolled response. In other words, we hypothesize a cellular level failure in the control system that upregulates cellular processes for division including glycolysis such that the rate of glycolysis is unconstrained at the cellular level. Note that all four proposed functions of the Warburg effect, plus its ability to support cellular metabolism if the oxygen supply is interrupted due to local loss of normal blood flow, are beneficial for tissue repair after an injury where 1) there might be reduced oxygen, 2) faster cell division and local ATP energy supply is needed, and 3) more biomass is required. A similar situation can occur during early organism development when tissue growth is more rapid than in the adult stage, and in which the blood supply is developing simultaneously.

Fudge and Reeves 2024

To our surprise, our literature search turned up little on the Warburg effect as a critical part of injury repair. An exception was Vander Heiden et al., who suggested that the increased cellular division rate associated with the Warburg effect can be beneficial in tissue repair as well as in immune responses. (Vander Heiden, Cantley, and Thompson 2009) We propose that this could be a very important area for investigation: research focused on the feedback mechanisms in the control system that sets the rate of glycolysis upregulation should be able to verify or falsify our hypothesis.
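One way to picture the control failure we hypothesize is a toy feedback model. The sketch below is not from our paper; the gain, the injury and repair timing, and the flux values are purely illustrative. It only shows the qualitative behavior described above: glycolytic flux returns to baseline when the off-signal works and stays elevated when it fails.

```python
def simulate_glycolysis(steps=50, off_signal_works=True):
    """Toy discrete-time model of glycolysis upregulation after injury.
    Flux is driven toward a set-point; injury raises the set-point, and a
    repair-complete signal restores it (unless that signal fails)."""
    baseline, injured = 1.0, 5.0
    set_point, flux = injured, baseline  # injury occurs at t = 0
    history = []
    for t in range(steps):
        # repair finishes at t = 20; a working off-signal lowers the set-point
        if t == 20 and off_signal_works:
            set_point = baseline
        flux += 0.3 * (set_point - flux)  # simple proportional control
        history.append(flux)
    return history

normal = simulate_glycolysis(off_signal_works=True)
broken = simulate_glycolysis(off_signal_works=False)
print(f"final flux, working off-signal: {normal[-1]:.2f}")  # ~1.0, back to baseline
print(f"final flux, failed off-signal:  {broken[-1]:.2f}")  # ~5.0, stuck upregulated
```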

A Useful Design-Based Tool

Engineering is a design-driven field, born from the creativity of intelligent human agents, and many tools developed in the field have applications in biology. For example, the MBRSE (model-based reverse system engineering) approach addresses a key challenge facing biology: many biological objects and processes are not linked to system-level requirements. Without these connections, a divide opens up between the structure of components and how they fit into the system's function.

On a personal note, one aspect of system modeling that I find particularly appealing is its use of formal relationships and structured language. Once you are familiar with the tool, it becomes much easier to identify connections between subsystems or constraints, even when looking at a different system model. This is a major advantage over the inconsistent, often free-form diagrams found in biology research papers, where each tends to differ from the next. Another benefit of systems modeling is that it organizes information from research papers in a structured, graphical manner. No matter how brilliant a researcher is, it is impossible to keep track of information from thousands of papers; a systems model can. It is remarkable that these modeling tools, standard in engineering, are largely absent from biological training, despite the clear benefit they offer in overcoming the inconsistencies of biological diagrams.
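To make the idea of linking components to system-level requirements concrete, here is a minimal sketch of the kind of structured traceability a model-based approach maintains. It is not taken from our paper or from any particular modeling tool; the class names and requirement identifiers are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    rid: str                    # identifier, e.g. "ECO-1"
    text: str                   # what the system must achieve
    parent: str | None = None   # higher-level requirement it derives from

@dataclass
class Component:
    name: str
    satisfies: list[str] = field(default_factory=list)  # requirement ids

# Illustrative requirement hierarchy: ecosystem -> organism -> pathway
reqs = {
    "ECO-1": Requirement("ECO-1", "Maximize ecosystem-level energy reuse"),
    "ORG-1": Requirement("ORG-1", "Catabolize glucose with minimal waste", parent="ECO-1"),
    "PATH-1": Requirement("PATH-1", "Produce ATP and biosynthetic intermediates", parent="ORG-1"),
}

glycolysis = Component("Glycolysis (EMP topology)", satisfies=["PATH-1"])

def trace(component, requirements):
    """Walk from a component up through the requirements it satisfies."""
    chain = []
    for rid in component.satisfies:
        while rid is not None:
            chain.append(rid)
            rid = requirements[rid].parent
    return chain

print(trace(glycolysis, reqs))  # ['PATH-1', 'ORG-1', 'ECO-1']
```

Even this tiny example shows the payoff: every low-level element is answerable to an explicit chain of higher-level requirements, which is exactly the connection that free-form biological diagrams tend to leave implicit.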

Our reverse systems engineering approach is motivated by some key observations: 

1. Biological systems look as if they are designed. Francis Crick, for example, cautioned biologists to keep reminding themselves that what they see was not designed but evolved, precisely because biological systems look so strongly designed (Campana 2000). Even Richard Dawkins admitted in The God Delusion, “The illusion of design is so powerful that one has to keep reminding oneself that it is indeed an illusion.”
2. Biological systems have much in common with human-engineered systems (Csete and Doyle 2002); and
3. Biological systems exhibit features such as modularity, robustness, and design re-use (Alon 2003) that are traditionally associated with good top-down engineering practices.
These observations suggest that from a pragmatic perspective, the best approach to reverse engineer biological systems will be to treat them as if they are the result of a top-down requirements-driven systems engineering process.

It is good news, then, that design-based tools and hypotheses play an increasingly prominent role in biology, offering a clear, coherent path to understanding biological complexity. From this understanding, more than a few deeper philosophical questions arise.

References

Alon, U. 2003. “Biological Networks: The Tinkerer as an Engineer.” Science 301 (5641): 1866–67.
Campana, Joey. 2000. “The Design Isomorph and Isomorphic Complexity.” Nature Reviews Molecular Cell Biology, 149–53.
Csete, Marie E., and John C. Doyle. 2002. “Reverse Engineering of Biological Complexity.” Science 295 (5560): 1664–69.
Fudge, Gerald L., and Emily Brown Reeves. 2024. “A Model-Based Reverse System Engineering Methodology for Analyzing Complex Biological Systems with a Case Study in Glycolysis.” IEEE Open Journal of Systems Engineering 2: 119–34.
Lazebnik, Yuri. 2002. “Can a Biologist Fix a Radio? — Or, What I Learned While Studying Apoptosis.” Cancer Cell 2 (3): 179–82.
Rivas, Mario, Arturo Becerra, and Antonio Lazcano. 2018. “On the Early Evolution of Catabolic Pathways: A Comparative Genomics Approach. I. The Cases of Glucose, Ribose, and the Nucleobases Catabolic Routes.” Journal of Molecular Evolution 86 (1): 27–46.
Thorens, Bernard, and Mike Mueckler. 2010. “Glucose Transporters in the 21st Century.” American Journal of Physiology. Endocrinology and Metabolism 298 (2): E141–45.
Vander Heiden, Matthew G., Lewis C. Cantley, and Craig B. Thompson. 2009. “Understanding the Warburg Effect: The Metabolic Requirements of Cell Proliferation.” Science 324 (5930): 1029–33.

An interlude XXI

 

On the nexus of art and information.

 

Human civilization is a Greek tragedy?

 

ID has always been mainstream

Using AI to Discover Intelligent Design


Human senses are excellent design detectors, but sometimes they need a little help. In a recent case, AI tools were applied to aerial photographs of the Nazca plain in Peru. The algorithms, trained on known geoglyphs, were able to select hundreds of candidate sites with figures too faint for the human eye. Many of them, on closer inspection, turned out to indeed contain patterns on the ground indicative of purposeful manipulation by indigenous tribes that lived in the area long ago. 

Here is a case where humans used their intelligent design to create intelligently designed “machine intelligences” capable of detecting intelligent design. Even so, the scientists needed their innate design-detection abilities to follow up on the AI results and validate the potential detections. AI is a tool, not a thinker. As a tool, it offers new powers to archaeology, providing another example of intelligent design in action in science.

The Nazca Pampa is designated a World Heritage Site by UNESCO because of its immense geoglyphs, averaging 90m in length. The well-known ones, consisting of lines, geometric figures and images of animals, were rediscovered in the early 20th century and have fascinated scientists and laypeople alike. UNESCO describes what makes them unique:

They are located in the desert plains of the basin river of Rio Grande de Nasca, the archaeological site covers an area of approximately 75,358.47 Ha where for nearly 2,000 uninterrupted years, the region’s ancient inhabitants drew on the arid ground a great variety of thousands of large scale zoomorphic and anthropomorphic figures and lines or sweeps with outstanding geometric precision, transforming the vast land into a highly symbolic, ritual and social cultural landscape that remains until today. They represent a remarkable manifestation of a common religion and social homogeneity that lasted a considerable period of time.

They are the most outstanding group of geoglyphs anywhere in the world and are unmatched in its extent, magnitude, quantity, size, diversity and ancient tradition to any similar work in the world. The concentration and juxtaposition of the lines, as well as their cultural continuity, demonstrate that this was an important and long-lasting activity, lasting approximately one thousand years.

Based on pottery fragments, the geoglyphs are dated from at least 100 BC to possibly as late as the 15th century. The spellings (Nasca vs Nazca) appear to be interchangeable. Mysteries remain about the purpose of the geoglyphs, and various theories are debated. One thing is indisputable: they were designed by intelligent minds. The people made considerable effort to modify the landscape for whatever purposes drove them. But that’s OK; ID theory can detect design without knowing the identity of the designer(s) or why they did their work. ID’s job is done when the Design Filter has ruled out chance and natural law to conclude that something is the product of a designing intelligence. Discerning the purposes of designs like these is left in the capable hands of anthropologists, historians, and archaeologists, who may find themselves puzzled by some of the discoveries, like the “knife-wielding killer whale” figure.
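For readers unfamiliar with it, the logic of the Design Filter can be sketched as a simple decision procedure. The function below is a rough paraphrase, not a canonical implementation; the probability threshold and the input flags are illustrative stand-ins for the detailed arguments the filter actually requires.

```python
def design_filter(explained_by_law, probability, specified,
                  chance_threshold=1e-150):
    """Rough sketch of the explanatory filter: attribute an event to
    natural law, chance, or design, tested in that order.
    The threshold is an illustrative small-probability bound."""
    if explained_by_law:
        return "natural law"   # a known regularity accounts for the event
    if probability >= chance_threshold or not specified:
        return "chance"        # probable enough, or improbable but unspecified
    return "design"            # both highly improbable and specified

# Example: a Nazca figure matches no geological regularity, is wildly
# improbable as a random arrangement of stones, and fits an independently
# given pattern (a humanoid figure), so the filter outputs "design".
print(design_filter(explained_by_law=False, probability=1e-200, specified=True))
```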

The New AI-Directed Discoveries

New detections of Nazca geoglyphs have continued slowly through the years. A team of Japanese, European, and American researchers, Sakai et al., publishing in PNAS, boasts that AI has accelerated the pace of new discoveries:

The rate of discovery of new figurative Nazca geoglyphs has been historically on the order of 1.5 a y (from 1940s to 2000s). It has accelerated due to the availability of remotely sensed high-resolution imagery to 18.7/y from 2004 to 2020. Our current work represents another 16-fold acceleration (303 new figurative geoglyphs during the 2022/23 season of field work) using big geospatial data technologies and data mining with the aid of AI. Thus, AI may be at the brink of ushering in a revolution in archaeological discoveries like the revolution aerial imaging has had on the field.

The Nazca geoglyphs can be classified as line-type (carved into the ground) or relief-type (made by aligning stones above ground). They can also be distinguished by subject matter and size. Sakai et al. surveyed the entire Nazca Pampa (629 km²), then subdivided aerial photographs with 10-cm resolution into grids. They trained their AI model on 406 relief-type glyphs and gave the AI some puzzles to solve:

To leverage the limited number of known relief-type geoglyphs, and to render the training robust, data augmentation is paramount. Hand-labeled outlines of known geoglyphs serve to pick 10 random crops from within each of the known geoglyphs. These are also randomly rotated, horizontally flipped, and color jittered. Similarly, 25 negative training images are randomly cropped from the area surrounding each known geoglyph. We set the ratio of positive to negative training images to 10:25 for a reasonable balance between precision and recall.

This method yielded 1,309 hotspots of likely geoglyphs, which the scientists classed as Rank I, II, or III from most to least likely. “Of the 303 newly discovered figurative geoglyphs,” the paper says, “178 were individually suggested by the AI and 125 were not individually AI-suggested.” It still required 2,640 labor hours of follow-up on foot and with drones to validate the AI selections. Nevertheless, this effort represented a quantum leap in design detection of glyphs with such low contrast they were barely visible to the unaided human eye.
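For readers curious what the augmentation scheme quoted above looks like in practice, here is a minimal sketch using the torchvision library. It is not the authors' code; the crop size and jitter strengths are guesses, and only the positive-image transforms from the quoted description are shown.

```python
from torchvision import transforms
from PIL import Image

# Augmentations roughly matching the quoted description: random crops from a
# labeled geoglyph region, random rotation, horizontal flip, and color jitter.
# Parameter values are illustrative, not taken from Sakai et al.
augment = transforms.Compose([
    transforms.RandomCrop(size=224),         # crop within the labeled outline
    transforms.RandomRotation(degrees=180),  # allow arbitrary orientation
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
])

geoglyph_tile = Image.open("known_geoglyph.png")  # hypothetical file name
positives = [augment(geoglyph_tile) for _ in range(10)]  # 10 crops per known geoglyph

# Negative examples (25 per geoglyph) would be cropped in the same way from the
# terrain surrounding each known geoglyph, giving the 10:25 ratio described above.
```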

New Scientist included photos of some of the new geoglyphs outlined for clarity. The new ones tend to be smaller and located near trails rather than larger roads, leading the scientists to surmise that they were intended for viewing by local groups instead of for community-wide religious rituals. Reporter Jeremy Hsu wrote about the need for human intelligence to corroborate the selections made by AI:

The researchers followed up on the AI suggestions and discovered a total of 303 figurative geoglyphs during field surveys in 2022 and 2023. Of these figures, 178 geoglyphs were individually identified by the AI. Another 66 were not directly pinpointed, but the researchers found them within a group of geoglyphs the AI had highlighted.

“The AI-based analysis of remote sensing data is a major step forward, since a complete map of the geoglyphs of the Nazca region is still not available,” says Karsten Lambers at Leiden University in the Netherlands. But he also cautioned that “even this new, powerful technology is more likely to find the better visible geoglyphs — the low hanging fruits — than the more difficult ones that are likely still out there”.

The authors believe that many more geoglyphs remain to be discovered in the area. Now that design has been concluded, we may understandably wonder what the people had in mind when they made these figures:

Line-type geoglyphs predominantly depict wildlife-related motifs (e.g., wild animals and plants). Most relief-type geoglyphs (81.6%) depict human motifs or motifs of things modified by humans (33.8% humanoids, 32.9% decapitated heads, and 14.9% domesticated camelids). These do not appear in the line-type figurative geoglyphs at all. Decapitated heads are sometimes depicted alone, while humanoids are repeatedly depicted with decapitated heads and together with domesticated camelids. Examples of both are shown as Insets to Fig. 5. Wild animals, which dominate the line-type geoglyphs, represent only 6.9% (47 geoglyphs) of the relief-type geoglyphs. These include bird, cat, snake, monkey, fox, killer whale, and fish.

Again, though, figuring out the meaning of the designs is not ID’s job. ID is equally valid at detecting evil designs and good designs. Future archaeologists might well have trouble understanding 21st-century graffiti if they happened upon a destroyed U.S. city without written records or history. But thanks to the Design Filter, determining whether contemporary “art” was designed or not would be a straightforward project.