
Monday, 1 July 2024

Primeval nanotech vs. Darwin.

 Scientist Discovers a Protist’s “Cellular Origami” — The First Known Case


Sometimes evidence for design is subtle and arcane, discernible only through careful logic and mathematical analysis. 

At other times, the exquisite design of life just seems to hit you over the head. That’s how I felt when I saw the cover illustration for the June issue of Science. The illustration depicts the single-celled protist Lacrymaria olor in a state of expansion and a state of contraction: the first ever known case of “cellular origami.” 

The discovery came from the lab of Stanford’s Manu Prakash, who spent seven years uncovering the folding/unfolding mechanism. Stanford Report does a good job describing the mesmerizing beauty of it: 
   …a single teardrop-shaped cell swims in a droplet of pond water. In an instant, a long, thin “neck” projects out from the bulbous lower end. And it keeps going. And going. Then, just as quickly, the neck retracts back, as if nothing had happened. 

In seconds, a cell that was just 40 microns tip-to-tail sprouted a neck that extended 1500 microns or more out into the world. It is the equivalent of a 6-foot human projecting its head more than 200 feet. All from a cell without a nervous system.
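The scaling comparison can be checked with simple arithmetic (a quick sketch; the micron figures and the 6-foot-human analogy are from the article, the rest is arithmetic):

```python
# Back-of-envelope check of the scaling claim in the text.
body_length_um = 40        # cell length, tip to tail (from the article)
neck_length_um = 1500      # extended neck length, lower bound (from the article)

ratio = neck_length_um / body_length_um   # how many body lengths the neck spans

human_height_ft = 6
equivalent_reach_ft = human_height_ft * ratio  # the article's human analogy

print(f"Extension ratio: {ratio:.1f}x body length")
print(f"Human equivalent: {equivalent_reach_ft:.0f} feet")
```

The ratio works out to 37.5 body lengths, so a 6-foot human would indeed reach well over 200 feet.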
This “incredibly complex behavior,” as Dr. Prakash says, is derived from literal origami. The structure of the cell membrane is folded in a “curved crease origami” style that allows it to extend and retract consistently — 50,000 times in the lifetime of a protist, without any errors.
    
Destined for Origami?  

Origami seems to be following Dr. Prakash. As chance would have it, before he discovered the origami of Lacrymaria olor, he had already used origami in his own engineering designs. (Or maybe his experience with origami made him ready to recognize it when he encountered it in nature?) Prakash invented an origami microscope, dubbed a “foldscope,” that costs only $1.75 to produce. In 2014, he mailed 50,000 foldscopes to recipients all around the world. His aim was to inspire and empower people to begin doing science in far-flung places where expensive and unwieldy lab equipment is impractical. A New Yorker piece lists some delightful outcomes of his project:
    A plant pathologist in Rwanda uses the Foldscope to study fungi afflicting banana crops. Maasai children in Tanzania examine bovine dung for parasites. An entomologist in the Peruvian Amazon has happened upon an unidentified species of mite. One man catalogues pollen; another tracks his dog’s menstrual cycle.
A few years back, Prakash himself used his invention to discover a different amazing design feature in nature. He was looking at marsh water through his foldscope when he witnessed a single-celled Spirostomum suddenly contract to a fraction of its original size. Prakash discovered that Spirostomum are able to contract in response to danger in just 5 milliseconds, and the resulting ripples in the water trigger other nearby Spirostomum cells to do the same in a rapid domino effect — a previously undiscovered form of intercellular communication.

Biology and Engineering 

You will probably not be surprised to hear that Dr. Prakash is an engineer as well as a biologist. This is predictable, because engineers tend to have a design-oriented mindset that is well suited to discovering the design plans of living organisms. Prakash and his lab attack biology problems like engineers studying the artifacts of a more advanced civilization, tracking the tiniest movements of microbes in the lab to uncover the underlying mechanisms that enable them to function the way they do.

So it’s also not surprising that Prakash is dreaming of design applications for what he’s seen in Lacrymaria olor. Prakash thinks that tiny machines based on the design of Lacrymaria olor could be used for telescopes and surgical robots, among other applications. 

It wouldn’t be the first time Prakash has copied ideas from life. According to the New Yorker piece, Prakash molds the lenses of his foldscopes using a device he created based on the beak of a red-necked phalarope, a bird that moves its beak in a “rapid tweezing motion” to mold droplets of food and water into aspherical shapes before swallowing them.

Life and Art

It’s a never-ending story: engineers uncover the engineering of nature by drawing analogies to human feats of engineering, and what they see in nature inspires them to engineer new innovations, which are used to uncover new engineering features in nature…and on and on. 

Art imitates life, and life imitates art, and at some point the distinction between the two becomes blurry. Where does one end and the other begin? 

Maybe the line is imaginary. To call the structure of Lacrymaria olor “origami” is not merely to draw a comparison — it really is origami. In fact, when Prakash and his team refer to it as “curved crease origami,” they are referring to a specific type of origami that originated in the Bauhaus art school in Germany in the late 1920s. 

Little did those German origamists know, they’d been beaten to the punch. Oh, well. Perhaps the best any human artist can do is imitate the Greater Artist. 

Vitalism returns?

 

Intelligent Design 101

 Introduction to the Scientific Theory of Intelligent Design


Author’s note: This introductory article on intelligent design first appeared June 19 in Polish at Fundacja En Arche’s ID website.

To understand the origins of the modern intelligent design movement, you must first understand that Darwin’s implausible explanation for evolution has become more and more implausible with every new biological and biochemical discovery, and that there never has been a plausible natural explanation for the origin of life on Earth. Here are some useful places to start. One is this article by David Klinghoffer, which reviews a June 2022 article in The Guardian entitled “Do we need a new theory of evolution?” My own 2000 opinion piece in The Mathematical Intelligencer, “A Mathematician’s View of Evolution,” and the video “Why Evolution Is Different” may also be useful.

The second thing you need to understand is that for many years the scientific establishment has insisted that no matter how implausible Darwin’s explanation might have become, the alternative of design cannot be considered because it is a religious idea. And for many years, most public challenges to Darwinism were in fact attempts to force science to fit a literal interpretation of the early chapters of Genesis. In the first creation-evolution debate I ever attended, in the 1970s, the creationist spent much of his time arguing for a young Earth, as though that were the main issue.

Good Logic, Good Science

But toward the end of the last century a few scientists (biochemist Michael Behe and geneticist Wolf-Ekkehard Lönnig, for example) began to argue that it has become so obvious that life cannot be explained without design that “intelligent design” has to finally be taken seriously in the scientific world. While other religious beliefs based on the Bible or our experience or intuition may not be science, the conclusion that there must be a designer behind living things is just good logic and thus good science, even if science alone cannot tell us who designed life, or how. If scientists can spend time and money developing tools and algorithms to detect dubious signs of extraterrestrial intelligence in weak signals from outer space, why are they required to ignore the evidence in living cells, where design practically leaps out at you?

Evolution Is Different

Of course, normally when a scientific theory for some observed phenomenon fails, we just look for an alternative “natural” theory. But what has long been obvious to the layman is finally becoming clear to many scientists: evolution is different. We are not talking now about explaining earthquakes or comets or volcanoes; we are talking about explaining hearts and lungs and eyes and ears. How many theories without design can there be for the origin of circulatory systems, nervous systems, and human brains? Design has finally started to be taken seriously by scientists not because there are minor problems with Darwin’s explanation, but because it has become absurdly, blindingly obvious that neither it nor any other theory that ignores design will ever completely explain living things. Contrary to common belief, science really has no reasonable alternative to design to explain either the origin or the evolution of life. In fact, we really have no idea how living things are able to pass their current complex structures on to their descendants without significant degradation, generation after generation, much less how they evolve even more complex structures.

If you look closely, you will notice that all the most persuasive arguments used to reject design are not of the form “here is a reasonable theory on how it could have happened without design” but rather “this doesn’t look like the way God would have done things,” an argument used frequently by Darwin himself. In the debate I mentioned earlier, the evolutionist spent much of his time showing dozens of beetle species, sarcastically concluding, “God must really like beetles.” Well, I’ll admit I might not have predicted that God would design so many species of beetles, and there are other things about the history of life on Earth — the long times involved, for example — that to our minds seem to suggest natural causes. But none of this offers any clue as to how it could all have happened without design.

In the 2000 Mathematical Intelligencer piece (highlighted in the video “A Mathematician’s View of Evolution”) I compared the history of my partial differential equation software to the history of life, noting that there are large jumps in both where major new features appear, for the same reasons: gradual development of the new organs or new systems of organs that gave rise to new orders, classes, and phyla would require the development of new but not yet useful features. So, Darwinism could not explain the development of these new features even if they did occur gradually — and, according to the fossil record, they don’t. But I have always felt that the strongest argument for design is simply to state clearly what you have to believe to NOT believe in intelligent design, and I closed the article with this:
               I imagine visiting the Earth when it was young and returning now to find highways with automobiles on them, airports with jet airplanes, and tall buildings full of complicated equipment, such as televisions, telephones and computers. Then I imagine the construction of a gigantic computer model which starts with the initial conditions on Earth 4 billion years ago and tries to simulate the effects that the four known forces of physics (the gravitational, electromagnetic and strong and weak nuclear forces) would have on every atom and every subatomic particle on our planet (perhaps using random number generators to model quantum uncertainties!). 

If we ran such a simulation out to the present day, would it predict that the basic forces of Nature would reorganize the basic particles of Nature into libraries full of encyclopedias, science texts and novels, nuclear power plants, aircraft carriers with supersonic jets parked on deck, and computers connected to laser printers, CRTs, and keyboards? If we graphically displayed the positions of the atoms at the end of the simulation, would we find that cars and trucks had formed, or that supercomputers had arisen? Certainly we would not, and I do not believe that adding sunlight to the model would help much. Clearly something extremely improbable has happened here on our planet, with the origin and development of life, and especially with the development of human consciousness and creativity.
          
Not a Real Physical Force 

Of course, constructing such a model is impossible, but I thought imagining it was a useful exercise to get across the point that natural selection, the one unintelligent force in the universe widely credited with the ability to create spectacular order out of disorder, is not a real physical force and cannot be included in the simulation, and the point that unintelligent forces cannot explain human intelligence. Rice University chemist James Tour makes a similar point regarding the origin of life: “Molecules don’t care about life.”

Furthermore, even many of the scientists who insist that everything must be explained in terms of the unintelligent laws of nature alone have been forced by the evidence uncovered in the last half century to accept that design is required to explain the spectacular fine-tuning for life of the laws and constants of physics themselves. These scientists are sometimes considered to be intelligent design supporters as well. One of the three discoveries discussed in Stephen Meyer’s book Return of the God Hypothesis: Three Scientific Discoveries that Reveal the Mind Behind the Universe is this well-documented fine-tuning. Notice the long list of distinguished scientists who have formally endorsed the book, including physics Nobel Prize-winner Brian Josephson who writes, “This book makes it clear that far from being an unscientific claim, intelligent design is valid science.”

King of birds indeed.

 

Sunday, 30 June 2024

Rags to riches but not in a good way.

 

Return of the compact disc?

 

Human devolution?

 Neanderthals Cared for Down Syndrome Children


Scientists have discovered the remains of a Neanderthal child with Down syndrome. From the Guardian story:
    “The survival of this child, beyond the period of breastfeeding, implies group caregiving, probably more extended than parental caregiving, typical of a highly collaborative social context among the members of the group. Otherwise, it is very difficult to explain the survival of this individual up to the age of six years,” said Valentín Villaverde, a co-author of the study and an emeritus professor of prehistory at the University of Valencia.

Conde-Valverde said: “The discovery of Tina represents the oldest known case of Down’s syndrome and demonstrates that the diversity observed in modern humans was already present in prehistoric times. This finding ensures that the story of human evolution includes us all.”
    
Consequences of “Enlightenment”

Such compassion is on the outs in these “enlightened” days. Indeed, it may now be that more babies with Down syndrome are killed in the womb than are born. Countries such as Iceland have all but eliminated citizens with Down syndrome via prenatal testing and almost universal terminations. Denmark, too.

In the U.S., the numbers are tricky. Some studies claim that the majority of such babies are terminated during gestation. But even if the actual rate is lower, the number of people with Down syndrome has been reduced by at least 30 percent since the advent of prenatal testing. And often, genetic counselors push the termination option. Peter Singer argues that if such babies are born, parents should be allowed to have them killed — and yet, despite this bigotry, he is the most celebrated bioethicist in the world.

What a tragedy for these dead precious babies and for us. Perhaps we could learn something from our ancient relatives.

Yet more pre-Darwinian design vs. Darwinism.

 “Irreducible Complexity” May Be Part of the Definition of Life


There are many bad counter-arguments to Michael Behe’s famous irreducible complexity conundrum, and (in my opinion) one pretty good one. 

For those unfamiliar with Behe’s argument, it goes like this: 
    Darwinian evolution is supposed to build complex systems gradually, overcoming vast improbabilities in tiny steps over billions of years. But, strangely, many systems in living organisms are “irreducibly complex” — they contain a core set of key elements that are all absolutely necessary for the system to function at all. Gradual evolution through random variation and natural selection could never build such a system, because the system would have no adaptive function until it was already completely finished.
After Behe made this case in his 1996 book Darwin’s Black Box, scientists (and non-scientists) scrambled to rebut him. Some argued that the systems in question weren’t really irreducibly complex; others, that they could have arisen through cooption of parts from other systems; others, that they emerged as reductions from larger complex systems that were not irreducibly complex… and so on.

None of those arguments have held up to logical or empirical scrutiny. But I don’t think they are the real reason that most people who find Behe’s argument unpersuasive find it so. I suspect that the real objection for most people is something more gut-level and foundational, which might be expressed something like this:
    Okay, so maybe it’s hard to see how gradual, blind processes could produce a few special systems like the bacterial flagellum. Because of irreducible complexity — got it. But Darwin’s theory still makes sense for everything else. So are we really going to throw out the whole theory on the basis of a few things we can’t explain? Isn’t it more likely that there’s some explanation for these things, and we just have to wait for it?[1]
After all, if Darwinian evolution works in theory, then it seems to follow that Darwinian evolution should have happened. And then, if living organisms don’t look like they were made by Darwinian evolution, the question just becomes, “So where the heck are the things that were made by Darwinian evolution?” Even if the presence of irreducible complexity shows that all the organisms we study didn’t arise by Darwinian evolution, it doesn’t explain why they didn’t arise by Darwinian evolution.
       
Confusion and Mystery

In other words, for the irreducible complexity argument to persuade someone away from Darwinism, it’s not enough to show that some structures in living organisms don’t look like they were made by unguided Darwinian processes. As long as unguided Darwinian processes work in theory, the existence of irreducible complexity in life may add confusion and mystery, but it doesn’t do away with the theory. For the argument to be really convincing, you need to also show that Darwinism doesn’t actually work to construct living organisms even in theory.

Might this be the case? Well, it would be the case if irreducible complexity is actually necessary for living systems. If something needs to be irreducibly complex in order to achieve the characteristics that would make us call it “alive,” then Darwin’s theory doesn’t even work in theory, and the mystery is solved — we see features that Darwinian evolution can’t explain simply because Darwinian evolution didn’t actually happen, and can’t happen. 

Behe argued something like this in response to the criticisms of his first book. But it was at first an open question — there is no quick-and-easy way to tell if irreducible complexity is intrinsic and necessary to life, or not. 

It’s extremely interesting, then, that the prominent theoretical biologist Stuart Kauffman has been promoting a definition of life that entails irreducible complexity — though Kauffman (who is unsympathetic towards ID) doesn’t use that term.

A Definition of Life 

Kauffman has been arguing that what sets living organisms apart from non-living things, and what makes them able to function and to evolve, is that in living organisms the parts exist for and by means of the whole. Kauffman calls such systems “Kantian wholes” (because the idea comes from Immanuel Kant’s Critique of Judgement). A Kantian whole, to put it another way, is a self-creating system in which everything supports and depends upon everything else. 

It’s easy to see how living organisms fit this definition. Your various parts can’t exist without you — you’ll never find a brain or a spleen lying around on its own (at least, not for very long). Likewise, you wouldn’t exist if you didn’t have those parts (at least, not for very long). 

It’s also easy to see that such a system is by definition irreducibly complex. The “whole” — by definition — encompasses all of the parts. So, if the whole is necessary for the continued existence of the parts, then all of the parts are necessary for the continued existence of each part — which is the definition of irreducible complexity. Not all irreducibly complex systems are necessarily Kantian wholes, but Kantian wholes are necessarily irreducibly complex.

Irreducible Complexity in LUCA

Of course, someone will probably point out that this is all very interesting philosophizing, but science is about empirical evidence. And Kauffman, as a scientist, is eager to provide it. To this end, he co-authored (with the up-and-coming origin-of-life researcher Joana Xavier and others) a paper published in Proceedings of the Royal Society B which seemed to show that life has existed in the form of Kantian wholes as far back in evolutionary history as we can see.

Xavier et al. took a database of metabolic reactions in bacteria and archaea (the two domains of the simplest lifeforms) and looked at which reactions they had in common. They found in the intersection of bacteria and archaea a collectively autocatalytic set of 172 reactions. (“Collectively autocatalytic” means that the set of reactions is self-creating — all the catalysts of the reactions in the set are created by other reactions in the same set; e.g. A creates B, B creates C, C creates A.) From a phylogenetic perspective, this implies that the common ancestor of bacteria and archaea — and thus presumably the “last universal common ancestor” (LUCA) itself — was characterized by complex autocatalytic metabolic cycles. In a paper in the volume Evolution “On Purpose”: Teleonomy in Living Systems, Kauffman and his colleague Andrea Roli write that these findings “very strongly suggest that life arose as small-molecule collectively autocatalytic sets.” 
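The closure property behind “collectively autocatalytic” can be made concrete with a small sketch (the reactions and molecule names below are hypothetical toy examples, not the 172 reactions from the Xavier et al. dataset):

```python
# Minimal sketch of the "collectively autocatalytic" test described above:
# a reaction set is collectively autocatalytic if every catalyst used by a
# reaction in the set is itself produced by some reaction in the set.
# Each reaction is modeled as (set_of_products, catalyst); all hypothetical.

def is_collectively_autocatalytic(reactions):
    """Return True if every catalyst is produced by some reaction in the set."""
    produced = set()
    for products, _catalyst in reactions:
        produced.update(products)
    return all(catalyst in produced for _products, catalyst in reactions)

# Toy cycle matching the text's example: A's reaction makes B,
# B's makes C, C's makes A.
cycle = [
    ({"B"}, "A"),   # reaction catalyzed by A, producing B
    ({"C"}, "B"),
    ({"A"}, "C"),
]

# A broken set: nothing in it produces catalyst A.
broken = [
    ({"B"}, "A"),
    ({"C"}, "B"),
]

print(is_collectively_autocatalytic(cycle))   # True
print(is_collectively_autocatalytic(broken))  # False
```

The real analysis is of course far richer (reactions have substrates and thermodynamics, not just catalysts), but the self-creating closure is the essential point.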

Kauffman and his co-theorists believe that collectively auto-catalytic sets are Kantian wholes. Therefore, they argue that life has been characterized by Kantian whole-ness from the very beginning, in accordance with Kauffman’s contention that living things are Kantian wholes by their very nature. If that’s true, then — as we have seen — that means that life, by its very nature, is irreducibly complex. 

What Was the Question?  

If irreducible complexity really is part of the definition of life, this solves the problem raised in the response to Behe’s irreducible complexity argument. 

It all comes down to the question, “What is it that we’re trying to explain?” when we invoke evolution or design. Why does life need an explanation at all? What is it that makes people, cows, mushrooms, pine trees, bacteria, and so forth so very perplexing to us?

Darwin seemed to think the problem was mere complexity, or the adapted-ness of organisms to their environment. That seems plausible at first glance, but in retrospect we should have known that it isn’t the case. A pile of sand is complex — the odds of obtaining that exact same arrangement of grains of sand a second time are almost nil — but nobody thinks that the existence of piles of sand is some big mystery. 

No, the thing that makes living organisms so mysterious (one of the things that makes them mysterious, anyway) is that they are irreducibly complex: they move, act, reproduce, and grow by means of an elaborate system of interconnected, interworking parts. It’s obvious (with 20-20 hindsight) that this is the real mystery in need of explanation, and it’s equally obvious that the ability of natural selection to pile up tiny, individually useful random variations in no way explains (or even attempts to explain) how such an intricate network could come to be.

So when Behe pointed to irreducible complexity, he wasn’t noticing some random, inexplicable feature of certain biological systems and using it to attack Darwin’s theory. Rather, he was putting his finger on what exactly it is about life that makes us feel it needs explaining. And that turned out to be something about which Darwin’s insights, brilliant though they were, had nothing to say.

Notes

[1] For example, this line of thinking has got to be why evolutionary biologist Bret Weinstein feels that “if we pursue that question [a particular problem raised by ID proponents], what we’re going to find is, oh, there’s a layer of Darwinism we didn’t get and it’s going to turn out that the intelligent design folks are going to be wrong” — even though he admits that ID proponents are pointing to genuine holes in the current theory of evolution.

Saturday, 29 June 2024

Yet another of the fossil record's many big bangs.

 Fossil Friday: Snake Origins — Yet Another Biological Big Bang


This Fossil Friday features the “legged” snake Najash rionegrina from the Late Cretaceous of Patagonia, one of the oldest fossil snakes known to science. It was found in terrestrial sediments and shows a well-defined sacrum with a pelvis connected to the spine, as well as functional hind legs. It was therefore considered to support an origin of snakes from burrowing rather than aquatic ancestors (Groshong 2006). I reported on the highly controversial and hotly debated topic of snake origins in a previous article (Bechly 2023), where you can find links to all the relevant scientific literature.

Another Open Question

But there was another open question concerning the origin of snakes: Did their distinct body plan evolve gradually, as predicted by Darwinian evolution, or did snakes appear abruptly on the scene, as predicted by intelligent design theory? Earlier this year a seminal new study was published by a team of researchers from the University of Michigan and Stony Brook University in the prestigious journal Science (Title et al. 2024). This study brought important new insights through the mathematical and statistical modelling of the most comprehensive evolutionary tree of snakes and lizards, based on a comparative analysis of the traits of 60,000 museum specimens and the partial sequencing of the genomes of 1,000 species (SBU 2024, Osborne 2024). The study found that all the characteristic traits of the snake body plan, such as the flexible skull with articulated jaws, the loss of limbs, and the elongated body with hundreds of vertebrae, appeared in a short window of time about 100-110 million years ago (Rapp Learn 2024).

The authors commented in the press releases that this burst of biological novelty suggests that “snakes are like the Big Bang ‘singularity’ in cosmology” (SBU 2024; also see Cosmos 2024, Osborne 2024, Sivasubbu & Scaria 2024, Wilcox 2024). This arguably would imply that snakes became “evolutionary winners” because they evolved “in breakneck pace” (Wilcox 2024), which the senior author of the study explained with the ad hoc hypothesis that “snakes have an evolutionary clock that ticks a lot faster than many other groups of animals, allowing them to diversify and evolve at super quick speeds” (Osborne 2024). Well, that is not an explanation at all, but just a rephrasing of the problem. How could such super quick evolution be accommodated within the framework of Darwinian population genetics and thus overcome the waiting time problem? After all, the complex re-engineering of a body plan requires coordinated mutations that need time to occur and spread in an ancestral population. Did anybody bother to do the actual math to check whether such a proposed supercharged evolution is even feasible, given the available window of time and reasonable parameters for mutation rates, effective population sizes, and generation turnover rates? Of course not. We just have the usual sweeping generalizations and fancy just-so stories.
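The kind of back-of-envelope math being asked for can be sketched with a crude sequential-fixation model. This is a best-case scenario (truly coordinated mutations, useless until all are present, would take far longer), and every parameter value below is an illustrative placeholder, not measured snake data:

```python
# Crude waiting-time sketch: k mutations, each of which must (1) arise at a
# specific site and (2) drift to fixation as a neutral allele, one after the
# other. Standard textbook approximations; all parameters are placeholders.

mu = 1e-9        # mutation rate per site per generation (placeholder)
N = 100_000      # effective population size (placeholder)
k = 10           # number of required mutations (placeholder)
gen_years = 3    # generation time in years (placeholder)

# Expected generations until a specific mutation first appears in a diploid
# population of size N: new copies arise at rate 2*N*mu per generation.
appearance = 1 / (2 * N * mu)

# Mean time for a neutral mutation to drift to fixation (diploid): ~4N gens.
fixation = 4 * N

total_generations = k * (appearance + fixation)
total_years = total_generations * gen_years

print(f"Best-case waiting time: ~{total_years / 1e6:.1f} million years")
```

Whether such a window is available, and how much worse the numbers get once the mutations must be coordinated rather than sequential, is exactly the question the press releases leave unanswered.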

The Fatal Waiting Time Problem

My prediction is that this will prove to be another good example of the fatal waiting time problem for neo-Darwinism. In any case, we can add the origin of snakes to the large number of abrupt appearances in the history of life (Bechly 2024), and I am happy to embrace the name coined by the authors of the new study for this remarkable event: the macroevolutionary singularity of snakes. This does not sound very Darwinian, does it? So what do the authors suggest as a causal explanation? They have none, and the press release from Stony Brook University (SBU 2024) therefore concludes with this remarkable admission: “The authors note that the ultimate causes, or triggers, of adaptive radiations is a major mystery in biology. In the case of snakes, it’s likely there were multiple contributing factors, and it may never be possible to fully define each factor and their role in this unique evolutionary process.” In other words, it was a biological Big Bang and they have no clue what caused it. But of course it must have been unguided evolution, no intelligence allowed!

References

Friday, 28 June 2024

AI is on the verge of becoming a grey goo problem?

 

JEHOVAH'S folly trumps man's wisdom?

 New Paper on the Panda’s Thumb: “Striking Imperfection or Masterpiece of Engineering?”


Readers are invited to consider my new paper, “The Panda’s Thumb: Striking Imperfection or Masterpiece of Engineering?” The abstract is below.

Abstract: Key Points of the Contents 

Before going further, a brief note on the synonyms that I’m using here, such as the “double/dual/complementary function” of the panda’s thumb. Each synonym has its own subtly different overtones. With this in mind, I hope the basic points discussed below may be better understood.

Above: “Some Key Points in a Long-Lasting Controversy”: Different views of evolutionary biologists on the panda’s thumb. Some assessments of the panda’s dexterity by intelligent design theorists.
Introduction: The panda’s thumb has become a paradigm for evolution in general. Links to articles by Stephen Dilley, and notes on the recent controversy between Nathan Lents and Stuart Burgess.
If the panda’s thumb is an embodiment of bad design, where are the evolutionists’ proposals indicating how they could have done better?
Some citations from a public talk by Stuart Burgess on the ingenious design of the wrist.
A massive contradiction within the theory of evolution itself.
Double/dual/complementary function is often overlooked.
“What makes the modern human thumb myology special within the primate clade is … [the appearance of] two extrinsic muscles, extensor pollicis brevis and flexor pollicis longus.”
It is a fundamental mistake to use the human thumb as a yardstick for the perfection or imperfection of the panda’s thumb.
A closer look at the differences of the radial sesamoid in a basal ursoid in comparison to that of the panda (Ailuropoda) for gripping and walking and the grasping hand of Homo sapiens according to Xiaoming Wang et al. (2022).
In comparison to other bear species, “only in A. melanoleuca can it be considered to be hyper-developed, reaching a similar size to that of the first metacarpal.”
Doubts concerning a simple homology of different sesamoid bones in various species.
Radial sesamoid as the ideal starting point to develop a thumb-like digit in pandas.
Natural selection of the radial sesamoid according to Wang et al. as well as Barrette in contrast to Stanley.
Implications of the ruling neo-Darwinian paradigm (gradualism plus natural selection) for the origin of the panda’s thumb.
Further discussion of Barrette’s points as “the length of the radial sesamoid, and therefore that of the false thumb, is limited firstly by its location under the hand,” etc.
Less efficient feeding would emphasize the enormous problem involved in the theory of natural selection.
The panda’s ecological impact and the “Optimal Panda Principle” in contrast to the evolutionary “Panda Principle” of Gould and his followers.
How to pick up little Necco candy wafers with thumbless mittens?
When directly observing pandas in zoos, Gould and Davis marveled at the dexterity/competence/virtuosity of the panda’s hand. I have done so, too. The panda’s hand is not “clumsy” at all.
Key question from two PhD students at the Max Planck Institute of Plant Breeding Research (Cologne) who came to my office and asked: Wouldn’t it be much more economical for an intelligent designer to modify, as far as possible, an already existing structure for some new functions than to create a totally new structure for similar roles/purposes/tasks from scratch?
Some comments on Barrette’s statement that “We owe this metaphor [of approximate tinkering/bricolage] to François Jacob, a French biologist and recipient of the Nobel Prize. Far from being perfect, such approximate tinkering is a trace left by evolutionary history,” and thus a proof of it.
Davis on the enlarged radial sesamoid as “unquestionably” a direct product of natural selection.
Possible number of genes involved in the origin of pandas according to Davis and some others.
What do we know in the interim about panda genetics?
SNPs in the Ursidae including our beloved pandas.
As already mentioned in other articles of mine (for example: https://www.weloennig.de/Hippo.pdf): Note please that virtually all highlighting/emphasis is by W.-E. L. (except italics for genera and species as well as adding a note when the cited authors themselves have emphasized certain points). Why so often? Well, since many people do not have the time to study a more extensive work in detail, these highlights can serve as keywords to get a first impression of what is being discussed. 

Concerning the key points enumerated above: Page numbers may change in a future update, and so are not presented here. Incidentally, citations do not imply the agreement of the authors quoted with my overall views nor vice versa. Moreover, I alone am responsible for any mistakes.

On questions concerning absolute dating methods, see http://www.weloennig.de/HumanEvolution.pdf, p. 28. 


Thursday, 27 June 2024

Technology of the zygote vs. Darwin.

 Let’s Think About a Zygote Like an Engineer


Having read Evolution News for years, and contributed the occasional article in addition to my 81-part series on “The Designed Body,” I’ve noticed that we proponents of intelligent design tend to frame our arguments in a certain way. We usually provide information on what it takes for life to work, rather than just how it looks (per much of neo-Darwinism). Then we look for reasonable explanations of causation, which must include where the information came from to properly produce, assemble, and coordinate all the necessary parts of a system that we know is absolutely needed for survival (most of that is absent from neo-Darwinism).

But in my collaboration with Steve Laufmann to produce our book Your Designed Body, we came to the conclusion that a different style may be more useful. What we propose is that, in addition to what’s described above, we also engage readers with examples of “problem-solving,” just as engineers do. After all, it takes one to know one. If you’ve never spent mental energy trying to solve one of these hard problems of life, how can you appreciate what it took to come up with the solution and apply it?

Let’s try the following as an exercise. Once you’ve gone through it, you’ll be better prepared to understand all the causal hurdles that had to have been surmounted. And this will allow you to ask better questions and not be as vulnerable to many of the “just so” stories of neo-Darwinism. 

“Separation of Concerns”

Recently, there was an article in The Scientist, “The First Two Cells in a Human Embryo Contribute Disproportionately to Fetal Development.” It noted a study published in Cell, “The first two blastomeres contribute unequally to the human embryo,” indicating that “a research team showed that, contrary to current models, one early embryonic cell dominates lineages that will become the fetus.” 

The gist of the article was that the current thinking — that it’s at the eight-cell stage where totipotent embryonic cells take the first “fork in the road” of commitment to developing into the fetus or the placenta — may be incorrect. It would now seem that this first “separation of concerns” (as Laufmann and I call it) may take place earlier on, when the zygote divides into the first two blastomeres. 

Ingenious methods were used to label and track the cell lineage from the two-cell to the blastocyst stage: “Thus, they could determine the contribution of each cell to the development of two early structures: the trophectoderm (TE) that becomes the placenta and the inner cell mass (ICM) that eventually produces the fetal tissue.” 

“They are not identical,” explains Magdalena Zernicka-Goetz, a developmental and stem cell biologist at Caltech and the University of Cambridge who is a study co-author. “Only one of the two cells is truly totipotent, meaning it can give rise to body and placenta, and the second cell gives rise mainly to placenta.” She adds, “I was always interested in how cells decide their fate.” The article in The Scientist concludes by telling us that “next, Zernicka-Goetz aims to investigate the features and origins of the differences between clones at the two-cell stage.”

Points to Ponder

It is clear that scientists still do not fully understand how human life develops from the zygote to a newborn and then into a mature fertile adult. One has to wonder what signaling and communication must take place at exactly the right times and in the right orders for all of this to happen properly, never mind where the information and instructions came from. Despite this self-acknowledged lack of understanding, we are told by evolutionary science that it certainly was an unguided and undirected natural process that brought it into being, and not a mind at work, as intelligent design contends. 

What do you think? If you took your car to a mechanic and he told you that he has no idea what’s wrong with it but he’s sure he can fix it, would you engage his services? Just because the scientist is smarter than you about what parts do what and how, that doesn’t necessarily mean that her conclusions about causation are true. After all, in saying that “I was always interested in how cells decide their fate,” she’s attributing agency, a mind at work, to the zygote. So don’t be misled.

Human Life Is a Hard Problem

Actually, life is a series of millions of hard problems that have to be solved all the time, or else. I’m talking about, among many other things, the cellular, metabolic, anatomical, and neuromuscular problems of human life. Let’s start from square one — the human zygote — how each of us began after the sperm of our father joined with the egg of our mother within her womb. That is the one cell from which, within nine months, we developed into a three-trillion-cell newborn with all the equipment we needed to survive. 
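For readers who want to check the arithmetic, here is a minimal sketch (in Python) of how many rounds of cell doubling would be needed to get from a single zygote to the article’s figure of roughly three trillion cells. The uniform-doubling model is, of course, a gross simplification of real development:

```python
import math

def doublings_needed(target_cells: float) -> int:
    """Minimum number of synchronous cell doublings to reach target_cells from one cell."""
    return math.ceil(math.log2(target_cells))

# The article's figure of ~3 trillion cells in a newborn (a round
# illustrative number, not a precise count):
print(doublings_needed(3e12))  # 42
```

About 42 synchronous doublings would suffice in principle; the real process is far messier, involving differentiation, programmed cell death, and widely varying division rates.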

If you could go back in time to that moment in your life, be nanosized and micro-pipetted into your own first cell, what would be the first problem you’d have to solve? In other words, once the zygote comes into being, what’s the first thing it has to do? 

Well, if it’s going to become a newborn in nine months or so, it’s got to start dividing. But that won’t happen for at least 24 hours, so you have to consider what else may be more important as the zygote floats within the fluid of your mom’s uterus.

The chemical content of the fluid inside the zygote (high potassium, low sodium) is the opposite of what’s in the fluid surrounding it (low potassium, high sodium). And because these ions can cross the cell membrane, diffusion would naturally make them try to equalize on both sides (inside and outside the zygote) which would spell disaster. So, the sodium/potassium pumps in the zygote’s cell membrane have to kick in right away to keep pushing sodium out and bringing potassium back in, right?

Yes, the action of the million or so sodium/potassium pumps in the zygote’s cell membrane is needed for it to stay alive. But what do they need to do their work?

All work requires energy. So, as with all of life, the first priority of the zygote is to generate enough energy through glycolysis (without oxygen) and cellular respiration (with oxygen). The zygote needs oxygen and glucose (or other substances) to metabolize to get the energy it needs.
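To get a feel for the energy demand, here is a back-of-envelope sketch. The pump count comes from the article; the turnover rate is an assumed illustrative figure, and the ATP yields are textbook approximations:

```python
# Back-of-envelope sketch of the zygote's pump energy budget.
PUMPS = 1_000_000               # sodium/potassium pumps in the membrane (per the article)
CYCLES_PER_SEC = 100            # assumed turnover rate per pump (illustrative)
ATP_PER_CYCLE = 1               # one ATP moves 3 Na+ out and 2 K+ in (known stoichiometry)
ATP_PER_GLUCOSE_AEROBIC = 30    # approximate yield of cellular respiration
ATP_PER_GLUCOSE_GLYCOLYSIS = 2  # net yield of glycolysis alone

atp_per_sec = PUMPS * CYCLES_PER_SEC * ATP_PER_CYCLE
glucose_aerobic = atp_per_sec / ATP_PER_GLUCOSE_AEROBIC
glucose_anaerobic = atp_per_sec / ATP_PER_GLUCOSE_GLYCOLYSIS

print(f"{atp_per_sec:.1e} ATP/s for pumping alone")
print(f"{glucose_aerobic:.1e} glucose molecules/s with respiration")
print(f"{glucose_anaerobic:.1e} glucose molecules/s on glycolysis alone")
```

Whatever the exact turnover rate, the point stands: the pumps alone consume ATP continuously, and glycolysis alone would require roughly fifteen times more glucose than respiration to keep up.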

And if the zygote’s going to divide into two cells, then four, eight, sixteen, and more, then it’s also going to need nutrients to be able to make more copies of itself. Where does the new human life get the oxygen and nutrients it needs, and how does it make sure of its supply until it becomes a newborn? 

The Engineering Problem

This is how Steve Laufmann and I framed this engineering problem in our book:

All cells need oxygen and nutrients. Early life is no exception. Fertilization results in a zygote, which multiplies through cell division to become an embryo. In the early phase, the embryo gets what is needed by diffusion from the surrounding fluid. This works when there’s only a few dozen cells. But within several weeks the embryo will grow into a fetus, and in a few months into a newborn with trillions of specialized cells organized into coherent, interdependent, finely tuned organ systems. For this to be possible, the embryo needs a better way to get oxygen and nutrients, and to get rid of carbon dioxide and waste materials. If he cannot meet this challenge, he will not survive. But he’s in a special situation, dwelling inside his mother, so he’ll need a solution altogether different from anything else in the body’s inventory — a distinct yet temporary system that can meet this need while he’s developing his permanent internal systems.

We go on to ask a very important engineering question:

How do you build a series of finely tuned, coherent interdependent systems, each necessary for life, and stay alive the whole time? It just wouldn’t do if the body needed to go dead for a while, build some stuff, then come back to life when everything was ready to go. What the child in the womb needs is a complete set of temporary systems to meet the needs of his rapidly growing body, to keep it alive until its own systems are ready to take over. Then at birth, when they are no longer needed, these systems must be discarded as the child transitions to long-term systems.

The Solution Is the Placenta

The answer to the very hard engineering problem posed above is the placenta. Somehow or other the zygote has the foresight to know that down the road it will develop into a fetus that requires the placenta for its metabolic and nutritional needs. 

This is how we explain the solution in our book:
Tissues of the embryo (TE) combine with tissues of the mother (lining of the uterus) to make the placenta — a totally separate organ that provides the scaffolding needed to keep the developing child alive. The placenta enables the mother to sustain the developing child while his internal organ systems and tissues are being fabricated, integrated, and launched. The developing child is, quite literally, on life support between the zygote phase and birth, when his body is finally ready to take over the job.

Up until this recent study it was thought that it’s not until the embryo consists of at least eight cells that some of them start to commit to being part of the placenta (TE). But now it seems that it takes place at the two-cell stage. If your nanosized self is inside the zygote, which lever do you pull to make sure that one of the two forming blastomeres goes down the TE-track? And even more important, where did the lever come from? 

It appears that, based on the findings of this study, the answer to the first question will be the concern of future research. But since, as we are regularly assured, we all know that life came about from the unguided and undirected processes of natural selection acting on random variation, the second question is assumed to have already been answered back in 1859, before we knew any of these intricate particulars and when biological systems were assumed to be vastly simpler than they turned out to be. What do you think? Any questions?

More light, less heat re: dark matter?

 

The plague of plagues?

 

Predarwinian design vs. Darwinism

 Life Can’t Exist Without Repair Mechanisms, and That’s a Problem for Origin-of-Life Theories


A cell is often described as a factory — a quite extraordinary factory that can run autonomously and reproduce itself. The first cell required a lengthy list of components, layers of organization, and a large quantity of complex specified information, as described by previous episodes of Long Story Short. The latest entry in the series emphasizes yet another requirement for life: an abundance of specific repair mechanisms. 

Damage to the “factory” of the cell occurs on two levels: damage to the stored information (either during replication or by natural degradation over time) and damage to the manufacturing machinery (either from faulty production of new machinery or damage incurred during use). Each type of damage requires specific repair mechanisms that demonstrate foresight — the expectation that damage will occur and the ability to recognize, repair and/or recycle only those components that are damaged. All known life requires these mechanisms. 

Damage to Stored Information

The initial process of DNA replication is facilitated by a polymerase enzyme that makes approximately one error for every 10,000 to 100,000 added nucleotides.1 However, no known life can persist with such a high rate of error, if left uncorrected.2 Fortunately, DNA replication in all life includes a subsequent proofreading step — a type of damage repair — that enhances accuracy by a factor of 100 to 1,000. The current record holder for the sloppiest DNA replication of a living organism, under normal conditions, is Mycoplasma mycoides (and its human-modified relative, JCVI-syn3A), where only 1 in 33,000,000 nucleotides is incorrectly copied.3
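The cited fidelity figures can be put together in a short sketch. The ~1.1-million-base-pair genome size used here for Mycoplasma mycoides is an assumed round number for illustration, not a figure from the article:

```python
# Sketch of the replication-fidelity numbers cited in the text.
raw_error_rate = 1 / 10_000       # polymerase alone (upper end of the cited range)
proofreading_gain = 1_000         # proofreading improves accuracy 100-1,000x

corrected_rate = raw_error_rate / proofreading_gain
print(corrected_rate)             # ~1e-07: one error per 10 million nucleotides

mycoides_rate = 1 / 33_000_000    # the cited "sloppiest" living replicator
mycoides_genome_bp = 1.1e6        # assumed round genome size for illustration
errors_per_replication = mycoides_genome_bp * mycoides_rate
print(round(errors_per_replication, 3))  # well under one error per genome copy
```

In other words, even the “sloppiest” known replicator makes, on these assumptions, only a few hundredths of an error each time it copies its entire genome.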

Following the replication of DNA, a daily barrage of DNA damage occurs during normal operating conditions. Life therefore requires sophisticated and highly specific DNA repair mechanisms. In humans, the DNA damage response is estimated to involve a hierarchical organization of 605 proteins in 109 assemblies.4 Efforts to make the simplest possible cell by stripping out all non-essential genes have successfully reduced DNA repair to a minimal set of six genes.5 But these six genes are encoded in thousands of base pairs of DNA, and the machinery to transcribe and translate those genes into the repair enzymes requires a minimum of 149 genes.6 Thus, the DNA code that is required to make DNA repair mechanisms easily exceeds 100,000 base pairs. Here we encounter a great paradox, first identified in 1971 by Manfred Eigen7: DNA repair is essential to maintain DNA, but the genes that code for DNA repair could not have evolved unless the repair mechanisms were already present to protect the DNA. 
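A rough sketch of the size estimate in that last step, assuming an average gene length of about 1,000 base pairs (a common ballpark for bacterial genes, not a number taken from the cited papers):

```python
# Rough sketch of the "chicken-and-egg" size estimate in the text.
# The ~1,000 bp average gene length is an assumption for illustration.
AVG_GENE_BP = 1_000
repair_genes = 6        # minimal DNA-repair gene set cited in the text
expression_genes = 149  # minimal transcription/translation machinery cited in the text

total_bp = (repair_genes + expression_genes) * AVG_GENE_BP
print(total_bp)  # ~155,000 bp just to encode and express DNA repair
```

On these assumptions the total comfortably exceeds the 100,000-base-pair figure quoted in the text.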

Faulty Production of New Machinery

We used to think that the metabolic machinery in a cell always produced perfect products. But the reality is that faulty products are unavoidable, resulting in the production of interfering or toxic garbage. All living organisms must therefore have machinery that identifies problems and either repairs or recycles the faulty products. 

The cell’s central manufacturing machine is the ribosome, a marvel that produces functional proteins from strands of mRNA (with the help of many supporting molecules). Unfortunately, about 2-4 percent of mRNA strands get stuck in the ribosome during translation into a protein.8 Not only does this halt production, but it could result in production of a toxic, half-finished protein. 

If the ribosomes could not get “unstuck,” life as we know it would end. In the process of self-replication, a single cell must produce an entire library of proteins, placing a heavy burden on the cell’s ribosomes. But with a 2-4 percent rate of stuck mRNA strands, the average cell would have each of its ribosomes get stuck at least five times before the cell could replicate.9 Therefore, life could never replicate and metabolism would cease unless this problem was solved.
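The “five stalls per ribosome” figure follows from simple expected-value arithmetic. In the sketch below the 2-4 percent stall rate comes from the article, while the per-ribosome workload is an assumed illustrative number chosen to show how such an estimate works:

```python
# Expected-value sketch of ribosome stalling per cell cycle.
STALL_RATE = 0.025          # fraction of translations that get stuck (from the article's 2-4%)
ROUNDS_PER_DIVISION = 200   # assumed translation rounds per ribosome per cell cycle

expected_stalls = STALL_RATE * ROUNDS_PER_DIVISION
print(expected_stalls)  # ~5 stalls per ribosome before the cell divides
```

Any comparable workload gives the same qualitative result: without a rescue mechanism, every ribosome in the cell would be expected to jam several times per generation.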

Fortunately, all forms of life, even the simplest,9 are capable of trans-translation, typically a three-step process. First, a molecule combining transfer and messenger RNA (tmRNA) and two helper molecules (SmpB and EF-Tu) recognize that an mRNA is stuck in the ribosome and attach a label to the half-formed protein. This label, called a degron, is essentially a polyalanine peptide. Next, the condemned protein is recognized, degraded, and recycled by one of many proteases. Finally, the mRNA must also be labeled and recycled to keep it from clogging other ribosomes. In some bacteria,10 a pyrophosphohydrolase enzyme modifies the end of the mRNA, labeling it for destruction. An RNase (another enzyme) then recognizes this label, grabs hold of the mRNA, and draws it close to its magnesium ion, which causes cleavage of the RNA. Another RNase then finishes the job, breaking the mRNA up into single nucleotides that can be re-used.

The required presence of tools that can destroy proteins and RNA also comes with a requirement that those tools are highly selective. If these tools evolved, one would expect the initial versions to be non-selective, destroying any proteins or RNA within reach, extinguishing life and blocking the process of evolution.11

Note that the set of tools for trans-translation and for protein and RNA recycling are all stored in DNA, which must be protected by repair mechanisms. And these tools cannot be produced without ribosomes, but the ribosomes cannot be unstuck without the action of trans-translation. Thus, we encounter another case of circular causality. 

Damage Incurred During Use

The normal operation of enzymes or metabolites like co-enzymes or cofactors involves chemical reactions that follow specific paths. Deviations from the desired paths can occur from interferences like radiation, oxidative stress, or encountering the wrong “promiscuous” enzyme. These deviations result in rogue molecules that interfere with metabolism or are toxic to the cell. As a result, even the simplest forms of life require several metabolic repair mechanisms: 

“[T]here can be little room left to doubt that metabolite damage and the systems that counter it are mainstream metabolic processes that cannot be separated from life itself.”12
“It is increasingly evident that metabolites suffer various kinds of damage, that such damage happens in all organisms and that cells have dedicated systems for damage repair and containment.”13
As a relatively simple example of a required repair mechanism, even the simplest known cell (JCVI-syn3A) has to deal with a sticky situation involving sulfur. Several metabolic reactions require molecules with a thiol group — sulfur bonded to hydrogen and to an organic molecule. The organism needs to maintain its thiol groups, but they have an annoying tendency to cross-link (i.e., two thiol groups create a disulfide bond, fusing the two molecules together). Constant maintenance is required to break up this undesired linking. Even the simplest known cell requires two proteins (TrxB/JCVISYN3A_0819 and TrxA/JCVISYN3A_0065) to restore thiol groups and maintain metabolism.12 Because the repair proteins are themselves a product of the cell’s metabolism, this creates another path of circular causality: you can’t have prolonged metabolism without the repair mechanisms, but you can’t make the repair mechanisms without metabolism.

An Ounce of Prevention is Worth a Pound of Cure

In addition to life’s required repair mechanisms, all forms of life include damage prevention mechanisms. These mechanisms can destroy rogue molecules, stabilize molecules that are prone to going rogue, or guide chemical reactions toward less harmful outcomes. As an example, when DNA is replicated, available monomers of the four canonical nucleotides (G, C, T, and A) are incorporated into the new strand. Some of the cell’s normal metabolites, like dUTP (deoxyuridine triphosphate), are similar to a canonical nucleotide and can be erroneously incorporated into DNA. Even the simplest cell (once again, JCVI-syn3A) includes an enzyme (deoxyuridine triphosphate pyrophosphatase) to hydrolyze dUTP and prevent formation of corrupted DNA.6

Summing Up the Evidence

Those who promote unguided abiogenesis simply brush off all of these required mechanisms, claiming that life started as simplified “proto-cells” that didn’t need repair. But there is no evidence that any form of life could persist or replicate without these repair mechanisms. And the presence of the repair mechanisms invokes several examples of circular causality — quite a conundrum for unintelligent, natural processes alone. Belief that simpler “proto-cells” didn’t require repair mechanisms requires blind faith, set against the prevailing scientific evidence.   

Notes

1. Bębenek A, Ziuzia-Graczyk I. Fidelity of DNA replication — a matter of proofreading. Curr Genet. 2018; 64: 985-996.
2. Some viruses have high error rates when replicating, but viruses cannot replicate without the help of cellular life, which requires very low error rates. Some specialized DNA polymerases intentionally operate with lower fidelity on a temporary basis for purposes such as antibody diversity.
3. Moger-Reischer RZ, et al. Evolution of a minimal cell. Nature. 2023; 620: 122-127.
4. Kratz A, et al. A multi-scale map of protein assemblies in the DNA damage response. Cell Systems. 2023; 14: 447-463.
5. Hutchison CA, et al. Design and synthesis of a minimal bacterial genome. Science. 2016; 351: aad6253.
6. Breuer M, et al. Essential metabolism for a minimal cell. eLife. 2019; 8: e36842. DOI: 10.7554/eLife.36842.
7. Eigen M. Self-organization of matter and the evolution of biological macromolecules. Naturwissenschaften. 1971; 58: 465-523.
8. Ito K, et al. Nascentome analysis uncovers futile protein synthesis in Escherichia coli. PLoS One. 2011; 6: e28413.
9. Keiler KC, Feaga HA. Resolving nonstop translation complexes is a matter of life or death. Journal of Bacteriology. 2014; 196: 2123-2130.
10. Mackie GA. RNase E: at the interface of bacterial RNA processing and decay. Nature Reviews Microbiology. 2013; 11: 45-57.
11. “Because RNA degradation is ubiquitous in all cells, it is clear that it must be carefully controlled to accurately recognize target RNAs.” Houseley J, Tollervey D. The many pathways of RNA degradation. Cell. 2009; 136: 763-776.
12. Hass D, et al. Metabolite damage and damage control in a minimal genome. American Society for Microbiology. 2022; 13: 1-16.
13. Linster CL, et al. Metabolite damage and its repair or pre-emption. Nature Chemical Biology. 2013; 9: 72-80.


Tuesday, 25 June 2024

Common design vs. Common descent.

 New Paper Argues that Variant Genetic Codes Are Best Explained by Common Design


A popular argument for a universal common ancestor is the near-universality of the conventional genetic code. Critics of common descent often point to deviations from the standard code as evidence against it. A recent paper published in the journal BIO-Complexity, by Winston Ewert, reviews the character and distribution of genetic code variants and the implications these have for common ancestry, and “develops a framework for understanding codes within a common design framework, based crucially on the premise that some genetic code variants are designed and others are the result of mutations to translation machinery.” Ewert explains that,

Upon first investigation, evolutionary theory appears to have a compelling account of the character and distribution of variant codes. Evolutionary theory suggests that if genetic code evolution is possible it should be very rare. This would explain why most genomes follow the standard code and why the exceptions only vary in a few codons. It would also explain the following details about the variant codes. Most variations are found in mitochondria, whose very small genomes would make code evolution easier. Many variations are also found in highly reduced genomes, such as those of endosymbiotic bacteria. No variations are found in the nuclear genomes of complex multicellular organisms like plants and animals. The distribution of many codes can be easily explained by identifying certain points on the tree of life where codons were reassigned and then inherited by all of their descendants.

EWERT W (2024) ON THE ORIGIN OF THE CODES: THE CHARACTER AND DISTRIBUTION OF VARIANT GENETIC CODES IS BETTER EXPLAINED BY COMMON DESIGN THAN EVOLUTIONARY THEORY. BIO-COMPLEXITY 2024 (1):1-25.

Three Tenets

The paper proposes “a framework that seeks to explain the character and distribution of variant genetic codes within a common design framework.” Ewert’s framework has three tenets: First, “the canonical genetic code has been well optimized and is thus an ideal choice for most genomes.” There are multiple optimized parameters and thus “A designer must identify the best trade-offs to select the ideal genetic code.” The second tenet of Ewert’s framework is that a minor variation on the standard code is better suited to some organisms, since those organisms may acquire an advantage by a different set of trade-offs with respect to genetic code optimization. The third tenet is that the translation machinery has been damaged by mutations in some organisms, and that this has resulted in their misinterpreting the code they were initially designed to employ. These are examples of genetic code variants that have evolved naturally.

Five Criteria

Ewert offers five criteria that may be used to distinguish genetic codes that are evolved from those that are designed. First, evolved codes are expected to be found in taxonomic groups below the family level, whereas those that are designed are predicted to be above the level of family. Second, evolved codes should be readily “explicable in terms of some simple mutation to the translation machinery of the cell.” Third, it is predicted that codes that are evolved will be limited to the genomes of endosymbionts. Fourth, it is expected that codes that are evolved utilize a small number of codons so that the variation does not cause the organism too much harm. Fifth, it is predicted that evolved codes will fall into a simple nested hierarchical (phylogenetic) distribution. By contrast, 

[D]esigned codons are found in high-level taxa of at least genus-level but typically higher. They involve many reassignments that are difficult to explain with any sort of simple mutation. They are found in free-living organisms. They sometimes reassign codons that are expected to be rare. They are often distributed in a complex fashion that does not fit phylogenetic expectations.

EWERT W (2024) ON THE ORIGIN OF THE CODES: THE CHARACTER AND DISTRIBUTION OF VARIANT GENETIC CODES IS BETTER EXPLAINED BY COMMON DESIGN THAN EVOLUTIONARY THEORY. BIO-COMPLEXITY 2024 (1):1-25.
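Ewert’s five criteria amount to a decision procedure, which can be paraphrased as a toy classifier. The field names and the “below family” cutoff here are my simplification for illustration, not code or thresholds from the paper:

```python
# Toy decision procedure paraphrasing Ewert's five criteria (a sketch,
# not the paper's own method). Field names are hypothetical.
from dataclasses import dataclass

@dataclass
class CodeVariant:
    taxon_level: str       # e.g. "species", "genus", "family", "order"
    simple_mutation: bool  # explicable by a simple change to translation machinery
    endosymbiont: bool     # restricted to endosymbiont/organelle genomes
    few_codons: bool       # reassigns only a small number of codons
    fits_phylogeny: bool   # distribution forms a clean nested hierarchy

LOW_LEVELS = {"species", "genus"}  # "below the family level" in this sketch

def looks_evolved(v: CodeVariant) -> bool:
    """True when a variant matches all five expectations for an evolved code."""
    return (v.taxon_level in LOW_LEVELS and v.simple_mutation
            and v.endosymbiont and v.few_codons and v.fits_phylogeny)

# A hypothetical endosymbiont variant meeting every criterion:
print(looks_evolved(CodeVariant("species", True, True, True, True)))    # True
# A hypothetical high-level variant with many hard-to-explain reassignments:
print(looks_evolved(CodeVariant("order", False, False, False, False)))  # False
```

The point of the sketch is only that the criteria are jointly operational: a variant failing any one of them falls, on Ewert’s framework, toward the “designed” side of the ledger.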

It has been conventionally thought that evolution provides a good explanation for the character and distribution of genetic code variants — in particular, the near-universality of the standard code; the prevalence of variant codes in simple genomes such as those of mitochondria; and the phylogenetic distribution of variant codes. Ewert notes that, in light of evolutionary theory, it would in fact be expected that there would be variant codes found at the higher taxonomic levels, which would be consistent with the genetic code still being variable at the time of the last common ancestor. However, “What we observe instead are modifications of the standard code. They are not associated with the high-level taxa…” 

Furthermore, though we should expect on evolutionary theory that it would be exponentially harder to reassign a code as the number of genes increases, “variant codes are found in nuclear genomes that are not particularly small. They are found in ciliates, which have comparable numbers of genes to the human genome. Additionally, we find them in some multicellular green algae. In fact, we find more code variation in eukaryotic nuclear genomes than in bacterial genomes, despite eukaryotes having much larger genomes.” Thus, Ewert concludes, “despite the initial impression, evolutionary theory does not account well for the kinds of genomes with variant codes.”
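The “exponentially harder” expectation can be illustrated with a toy model: if each site where a reassigned codon occurs must independently tolerate the change with probability p, the chance that every site does is p^k, which collapses as genomes grow. Both numbers below are assumptions chosen purely for illustration:

```python
# Toy model of why codon reassignment should get exponentially harder
# in larger genomes. All numbers are illustrative assumptions.
P_TOLERATED = 0.99   # assumed chance one affected site tolerates the change
CODON_FREQ = 0.001   # assumed genome-wide frequency of the reassigned codon

def survival_odds(total_codons: int) -> float:
    """P(every occurrence of the codon tolerates reassignment) = p**k."""
    occurrences = int(total_codons * CODON_FREQ)
    return P_TOLERATED ** occurrences

for n in (100_000, 1_000_000, 10_000_000):  # small to large coding genomes
    print(n, survival_odds(n))
```

Under these assumptions the odds fall from a few chances in ten for a tiny genome to essentially zero for a large one, which is why the reported abundance of variant codes in large eukaryotic genomes cuts against the evolutionary expectation.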

Invoking “Inexplicable Events”

Finally, though evolutionary theory would predict that the distribution of variant codes would be consistent with the standard phylogeny, Ewert observes that, “In many cases, the distribution of a code is complex, defying evolutionary explanations. Codes recur in closely related groups in a way not explained by common descent. Evolutionary theory has to invoke inexplicable events such as reversions to the standard code.”

Thus, Ewert concludes, “Initially, evolutionary theory appeared to have some explanatory power. However, upon closer inspection, the features of the variant codes that seemed well explained by evolutionary theory turned out to either be inaccurate or to not follow from evolutionary theory.” Instead, he argues that the character and distribution of variant codes is often better explained under a framework of common design.

The paper is well worth a careful read. It can be accessed here.


Yet another house divided?

 

The future we were promised is finally here?