
Wednesday, 7 July 2021

And still yet more primeval tech vs. Darwin.

 





New Paper Investigates Engineering Design Constraints on the Bacterial Flagellum

Casey Luskin


A new peer-reviewed paper in the journal BIO-Complexity, “An Engineering Perspective on the Bacterial Flagellum: Part 1 — Constructive View,” comes out of the Engineering Research Group and Conference on Engineering in Living Systems that Steve Laufmann recently wrote about. The author, Waldean Schulz, holds a PhD in computer science from Colorado State University, and is a signer of the Scientific Dissent from Darwinism list. What could a computer scientist say about the bacterial flagellum? Well, Schulz explains that his study “examines the bacterial flagellum from an engineering viewpoint,” concentrating on “the structure, proteins, control, and assembly of a typical flagellum, which is the organelle imparting motility to common bacteria.” 

This technique of examining biology through the eyes of engineering is not necessarily new — systems biologists have been doing it for years. However, since engineering is a field that tries to determine how to better design technology, the field of intelligent design promises to yield new engineering-based insights into biology. Schulz’s paper is a prime example of such a contribution. It offers what is arguably the most rigorous logical demonstration of the irreducible complexity of the flagellum to date. 

A Goal-Directed Approach

Intelligent design is fundamentally a goal-directed approach to studying natural systems, in which the various parts and components of biological organisms are coordinated to work together in the top-down manner of engineering. Schulz’s paper thus takes a “constructive approach” which requires a “top-down specification.” Here’s how this approach works:

It starts with specifying the purpose of a bacterial motility organelle, the environment of a bacterium, its existing resources, its existing constitution, and its physical limits, all within the relevant aspects of physics and molecular chemistry. From that, the constructive approach derives the logically necessary functional requirements, the constraints, the assembly needs, and the hierarchical relationships within the functionality. The functionality must include a control subsystem, which needs to properly direct the operation of a propulsion subsystem. Those functional requirements and constraints then suggest a few — and only a few — viable implementation schemata for a bacterial propulsion system. The entailed details of one configuration schema are then set forth.

This approach is very similar to Paul Nelson’s ideas about “design triangulation”: you identify some function that is needed, and if the system was intelligently designed then you can back-engineer other components and parts that will be needed for that function to be fulfilled. After all, “engineers regularly specify and design systems top-down, but they construct those systems bottom-up.” Thus an engineering analysis of the flagellum seems the best way to understand it. 

Schulz introduces engineering methodologies to study the flagellum, which flow naturally from an ID paradigm. He writes:

A common engineering methodology, called the Waterfall Model, first produces a formal Functional Requirements Specification document. Then a design is proposed in a System Design Specification, which must comport with the Requirements Spec. Typically this methodology is often accompanied by a Testing Specification, which measures how well the subsequently constructed system satisfies the requirements. This methodology was and is successfully applied at Intel, Image Guided Technologies, and Stryker. A similar specification method can be used by a patent agent or attorney in helping inventors clarify in detail what they have invented for a patent application.

When applying this method, one examines “overall purpose for the proposed system, the usage environment, necessary functionality, available materials, tools needed for construction, and various parameters and constraints (dimensions, form, cost, materials, energy needs, timing, costs, and other conditions).”  After doing this, “a design is proposed that logically comports with those requirements.”
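
To make this Waterfall-style traceability concrete, here is a minimal sketch in Python (ours, not Schulz’s; every requirement, design element, and test name below is hypothetical) of how a Functional Requirements Specification, a System Design Specification, and a Testing Specification can be linked so that each design element comports with a requirement and each requirement is testable.

```python
# Minimal Waterfall-style traceability sketch (hypothetical names, not Schulz's notation).
requirements = {
    "R1": "Sense nutrient gradients in the surrounding medium",
    "R2": "Generate propulsion sufficient to move toward nutrients",
    "R3": "Redirect motion when conditions are unfavorable",
}

design = {
    "D1": {"desc": "Chemoreceptor array",                        "satisfies": ["R1"]},
    "D2": {"desc": "Rotary motor driving a helical propeller",   "satisfies": ["R2"]},
    "D3": {"desc": "Switch coupling sensor output to the motor", "satisfies": ["R3"]},
}

tests = {
    "T1": {"desc": "Measure response to an attractant step change", "covers": ["R1", "R3"]},
    "T2": {"desc": "Measure swimming speed versus energy input",    "covers": ["R2"]},
}

# Traceability checks: no unsatisfied requirements, no untested requirements.
covered = {r for t in tests.values() for r in t["covers"]}
for rid in requirements:
    assert any(rid in d["satisfies"] for d in design.values()), f"{rid} has no design element"
    assert rid in covered, f"{rid} has no test"
print("Every requirement is designed-for and testable.")
```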

What Are the Requirements for Bacterial Motility?

Schulz then applies this method to the flagellum, asking: What are the requirements for bacterial motility? “In doing this,” he notes, “the constructive approach becomes — in effect — the engineering documentation that must be written as if a clever bioengineer were tasked to devise a motility system for a bacterium lacking a motive organelle.” He answers various questions outlined by the “Waterfall Model”:

  • What is the overall needed function? Answer: “First, the system should enable a bacterium to sense and move toward nutrients needed for metabolic energy, self-repair, and reproduction. Second, the system should enable its bacterium to sense and escape hostile locales, such as toxic or noxious material.”
  • What is the environment of the flagellum? Answer: “The environment of a typical bacterium generally may include both nutrients and deleterious substances. Further, the bacterium typically is suspended within a liquid or semi-fluid medium.”

Thinking Like an Engineer 

Schulz thus determines that the system requires “a propulsion subsystem to accomplish motion.” It also requires “some form of primitive redirection subsystem, working in concert with or integrated with the propulsion subsystem” to help the bacterium find nutrients. There must also be a “collateral control subsystem” which can “sense favorable or unfavorable substances.” He outlines logic controls of this system — including signals that indicate the bacterium should proceed “full speed ahead” or “flee and redirect.” 

The response time for these signals and the motility speeds they induce must also be appropriate to fulfill the needed functions. For example, “A substantially faster speed would be wasteful of energy,” and “The energy cost to operate the propulsion subsystem must be less than the energy obtained by navigating to and consuming nutrients.” And there are also assembly constraints — including that “The material resources and energy requirements to build a propulsion system must be low enough to justify its construction — that is, to justify the benefit of motion to find new nutrients for metabolism.” 
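
The decision logic described here can be pictured with a toy control loop. The sketch below is our illustration, loosely patterned on run-and-tumble chemotaxis; the thresholds and names are hypothetical and are not taken from Schulz’s specification.

```python
# Toy control loop: compare the current nutrient reading with the previous one;
# improving conditions -> keep running, worsening conditions -> redirect.
def control_step(previous_reading: float, current_reading: float) -> str:
    if current_reading >= previous_reading:
        return "full speed ahead"   # conditions improving: keep the propeller turning
    return "flee and redirect"      # conditions worsening: pick a new heading

readings = [1.0, 1.2, 1.1, 0.8, 0.9, 1.4]   # simulated nutrient concentrations along a path
for prev, cur in zip(readings, readings[1:]):
    print(f"{prev:.1f} -> {cur:.1f}: {control_step(prev, cur)}")
```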

An Irreducibly Complex System

He notes that all of these resulting requirements present us with an irreducibly complex system: 

[T]he goal is to specify only a minimal set of requirements, assuring that all the requirements of the subsystem are essential. That would imply that the specified sensory-propulsion-redirection system is effectively irreducible. That is, if some part is missing or defective, then, at best, there would be noticeably diminished motility, if any.

Schulz then proposes a design for the bacterial flagellum to fulfill these requirements of flagellar motility. Among the requirements that must be met:

  • “the propulsion subsystem needs a source of power to operate”
  • “there must be a power-to-motion transducer”
  • “there must be sensors to detect whether the propulsion system should move the bacterium forward … there must be some external member physically interacting with the environmental medium containing the bacterium.”

There are also various assembly requirements. Construction materials could include a variety of potential biomolecules, including sugars, RNA, DNA, nucleotide bases, or proteins. Which should be used? The answer is clear: “an obvious generic requirement of using available fabrication tools, templates, and control effectively rules out the use of other materials, such as sugars or non-protein polymers.” 

There are a variety of potential basic designs for propulsion. One is a jet-like nozzle (which would require a bladder and many more parts). Another is a rhythmic flexing (which would require a long, flexible body and much more). Still another is a leg-like appendage (which would require appendages), or a snake-like caterpillar crawl (again requiring a long flexible body), or a helical propeller. Schulz explores what would be necessary for a helical propeller — the actual design of bacterial flagella. He discusses various needed parts — including “an armature or mounting structure, a motor rotor, a drive shaft of appropriate length, a helical propeller, and possibly adaptors to bind those components together.” Further, he notes, “there need to be bearings and seals between the rotary components and static components.” Here is a fuller description of what is needed:

The static subassembly requires the following components: the semi-rigid cell membrane(s) for rigid mounting, a motor stator, multiple sealed bearings where the rotary subassembly penetrates cell membranes, and an energy conduction pathway.

The stator together with the motor rotor produces torque. The stator must be rigidly attached to some or all of the bacterium’s inner and outer membranes and the peptidoglycan layer. The rigid attachment transfers necessary counter-torque to the cell body as well as providing stability for the rotary subassembly. For each membrane or layer the drive shaft penetrates, there must be a bearing. Each bearing must (a) stabilize the drive shaft, (b) provide a low-friction contact with the drive shaft, and (c) provide a seal to prevent movement of molecules past where the shaft penetrates its host membrane or layer.

A Dependency Network 

Schulz finally develops a dependency network for these requirements, showing their “interdependency relationships” and addressing all of the above constraints, including the “purpose, environment, required functions, constraints, and the logically implied static, structural requirements.” The resulting diagram is quite detailed.
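
The logic of such a dependency network, and of the irreducibility claim that follows, can be sketched as a small graph in which motility requires an unbroken chain of components from energy source to thrust. The components below are a simplified, hypothetical subset chosen by us for illustration; the paper’s actual network is far more detailed.

```python
# Simplified dependency sketch (hypothetical subset, not the paper's full network).
# Each component lists what it directly depends on; "thrust" is the functional output.
depends_on = {
    "thrust":             ["propeller"],
    "propeller":          ["drive_shaft"],
    "drive_shaft":        ["rotor", "bearings_and_seals"],
    "rotor":              ["stator", "energy_conduction"],
    "stator":             ["membrane_anchor"],
    "bearings_and_seals": ["membrane_anchor"],
    "energy_conduction":  ["proton_gradient"],
    "membrane_anchor":    [],
    "proton_gradient":    [],
}

def functional(graph, target="thrust"):
    """The target works only if it and all of its transitive dependencies are present."""
    def ok(node, seen=frozenset()):
        if node not in graph or node in seen:
            return False
        return all(ok(dep, seen | {node}) for dep in graph[node])
    return ok(target)

print("Complete system works:", functional(depends_on))
# Knock out any single component and the chain to thrust breaks:
for part in list(depends_on):
    reduced = {k: v for k, v in depends_on.items() if k != part}
    print(f"without {part}: functional = {functional(reduced)}")
```

In this toy version, removing any one node leaves the system non-functional, which is the structural sense of “irreducible” that Schulz’s network is meant to capture.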

In an impressive table, Schulz lists all of the different components and properties of the flagellum and the rationale for their inclusion. He notes that although there are a couple of different ways to build the system, “In either case, the specified bacterial motility system would be irreducibly complex” and that the “intricate coherence” of all of the parts, systems, and design requirements of the flagellum “is essentially irreducible.” He concludes:

Current evolutionary biology proposes that the flagellum could have been “engineered” naturalistically by cumulative mutations, by horizontal gene transfer, by gene duplication, by co-option of existing organelles, by self-organization, or by some combination thereof. See the summary and references by Finn Pond. Yet to date, no scenario in substantive detail exists for how such an intricate propulsion system could have evolved naturalistically piece by piece. Can any partial implementation of a motility system be even slightly advantageous to a bacterium? Examples of a partial system might lack sensors, lack decision logic, lack control messages, lack a rotor or stator, lack sealed bearings, lack a rod, lack a propeller, or lack redirection means. Would such partial systems be preserved long enough for additional cooperating components to evolve?

That is the key question — which will be explored in future papers that Schulz aims to publish. Based upon the “intricate coherence” and “irreducible complexity” of the numerous parts and properties of the flagellum, the answers to these questions would seem to be no.

Tuesday, 6 July 2021

It's Darwinism all the way down?

 Many remember the joke about the best way to open a can: “Assume a can opener.” Or the joke about what holds up the Earth, if the Earth is supported by a turtle: “It’s turtles all the way down.” We laugh at these vacuous explanations, but is not Darwinism like that? It’s a catch-all explanation for everything. Just assume it, and it will explain any data. Darwin may have defined it in terms of the origin of species, but today, natural selection is the Swiss Army knife applied in widely divergent fields. It can be used as a can opener, corkscrew, scraper, screwdriver, and even a dagger for defending itself against critics. Simply assume this can opener and the explanatory work is done. If not, there are more Swiss Army knives all the way down.

Serious papers have used natural selection to explain bacterial antibiotic resistance, human politics, and the multiverse. Any phenomenon that undergoes change but survives seems fair game for bringing out Darwin’s all-purpose explanatory pocket tool. Put another way, it’s like a demon. Maxwell’s demon was a thought experiment about a possible way to violate a natural law; even today, physicists argue about ways to test it. Natural selection is another occult force, complete with mystical “selection pressures” that can create eyes and wings by chance. This demon, too, makes possible violations of a natural law: the law of cause and effect. Natural selection could be called “Darwin’s demon” or, as the demon likes to portray itself, Darwin’s genie. It will fulfill its master’s every wish.

Historical Blunders

Critics of the Origin of Species immediately pounced on Darwin’s fallacious analogy between selective breeding and his new notion of natural selection. The former is done by people with minds acting with foresight toward a goal, they pointed out; the latter is supposed to be blind and unguided. Nevertheless, Darwin’s disciples ever since have played fast and loose with natural selection, applying it in situations where it doesn’t belong, without regard to any human intelligence involved. A recent example appeared in a PNAS special issue about economics. In their introductory article to the series, Simon A. Levin and Andrew W. Lo praise Darwin as they repeat his blunder of flawed analogical reasoning.

We motivate this ambitious initiative with an analogy. The brilliant evolutionary insights of Darwin and others have revolutionized our understanding of the world. Darwin was impressed by the “tangled bank” of elaborate forms that emerged from the undirected processes of evolution to produce the complexity of the biological world. Through continuous innovation coupled with the deceptively simple filter known as natural selection, the characteristics of species and their interactions change in response to changing environments. However, evolution is not limited only to the biological world. Wherever the evolutionary forces of reproduction, variation, and selection exist — as they do in financial markets — evolutionary consequences will follow. [Emphasis added.]

Under the Bus

Never mind the traders, innovators, and theorists in the science of economics. They have been thrown under the bus. It’s natural selection all the way down. Intelligent choice by skilled people with free will responding as wisely as possible to rapidly changing market conditions is old hat. Entrepreneurship is gone. Economic theory by tenured professors like Thomas Sowell is gone. Everything now is Darwin’s demon at work, bringing enlightenment about the true nature of economics. People are just pawns of selection pressures. Economics is now like rafts in rapids without pilots. The luckiest will survive, and the genie will smile at an explanatory job well done.

Evolution is about short-term, relative optimality with respect to other participants in the system. In the biosphere, natural selection acts to improve reproductive success relative to the benchmark of other genomes, within and across species. Evolutionary change can thus be thought of in terms of differential fitness: that is, small differences in reproductive rates between individuals over time leading to large differences in populations. Even the very mechanisms of evolution — including those that generate new variation — are subject to constant modification. In the financial world, the evolutionary forces of mutation, recombination, reproduction, and selection often work on financial institutions and market participants through direct competition, finance “red in tooth and claw.” Financial concepts and strategies thus reproduce themselves through cultural transmission and adoption based on their success in the marketplace. These strategies undergo variation through financial innovation, analogous to mutation or genetic recombination in a biological system, but take place at the level of information and abstract thought in financial contexts. It is “survival of the richest.”

If evolution itself evolves, one doesn’t need people in this picture. One only needs “evolutionary forces” pushing objects around, be they molecules, cells, organisms, men or universes.
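
The arithmetic behind the quoted claim about differential fitness is simple enough (a worked example of ours, not the authors’):

$$
\left(\frac{1.01}{1.00}\right)^{1000} = e^{1000\,\ln 1.01} \approx e^{9.95} \approx 2.1 \times 10^{4},
$$

so a lineage with a mere 1 percent reproductive advantage outnumbers its rival roughly 20,000-fold after a thousand generations, other things being equal. The dispute is not over this arithmetic but over what it is being applied to.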

No Intelligence Allowed

The other papers in the series repeat the error. Their authors conjure up Darwin’s genie to create the appearance of scientific explanation for human endeavors.

In “Sunsetting as an adaptive strategy,” Roberta Romano and Simon Levin liken corporate decisions to discontinue products to apoptosis (programmed cell death). “Apoptosis, death, and extinction are part of a spectrum of responses but are essential features of the evolutionary play,” they explain gleefully as they discuss boardroom banter.

In “The landscape of innovation in bacteria, battleships, and beyond,” Burnham and Travisano compare Lenski’s Long-Term Evolution Experiment (LTEE) with naval warfare. “The message from naval warfare and the LTEE is that competition fosters innovation,” they say with liberal applications of “selection” from the genie. No admirals allowed.

No Authors Allowed

In “How quantifying the shape of stories predicts their success,” Toubia, Berger, and Eliashberg justify that Darwinian bad habit, just-so storytelling. “Why are some narratives (e.g., movies) or other texts (e.g., academic papers) more successful than others?” they begin. Once again, it’s due to a “selection mechanism” acting silently behind the scenes. They fail to see what this does to their own hypothesis.

In “Social finance as cultural evolution, transmission bias, and market dynamics,” Akçay and Hirshleifer continue the game with Darwin’s genie. “In this paradigm, social transmission biases determine the evolution of financial traits in the investor population,” they say. “It considers an enriched set of cultural traits, both selection on traits and mutation pressure, and market equilibrium at different frequencies.”

In “Moonshots, investment booms, and selection bias in the transmission of cultural traits,” Hirshleifer joins Plotkin to apply natural selection to risk-taking in business. For once, they introduce cognitive reasoning into the mix:

We view adoption or rejection of the risky project as a cultural trait transmitted between firms. We employ the Price Equation to decompose this trait’s evolution into a component due to natural selection and a component due to mutation. Surprisingly, despite the central role of selection bias in the evolution of project choice in the model, the predominant source of cultural change in our context is not natural selection, but, rather, mutation pressure. The importance of mutation during transmission differs sharply from cultural evolutionary models with biased imitation, in which there is only natural selection. This feature of our analysis highlights the role of cognitive reasoning in the cultural evolution of risk-taking behaviors.

Cognitive reasoning cannot overcome the power of Darwin’s genie, however. “The Price Equation decomposes evolutionary change into selection and nonselection effects,” they explain. “The nonselection component is often called mutation pressure — the degree to which traits shift through the inheritance process instead of fitness-biased replication.” Thus, cognitive reasoning degenerates into a form of mutation pressure. Does that happen in the process of writing scientific papers, too?
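
For reference, the Price equation they invoke is a standard identity; in its usual form (stated here by us, not quoted from the paper) the change in the mean value of a trait z across one generation decomposes as

$$
\Delta \bar{z} \;=\; \underbrace{\frac{\operatorname{Cov}(w_i, z_i)}{\bar{w}}}_{\text{selection}} \;+\; \underbrace{\frac{\operatorname{E}\!\left(w_i\,\Delta z_i\right)}{\bar{w}}}_{\text{transmission / mutation pressure}},
$$

where $w_i$ is the fitness of individual (or firm) $i$, $z_i$ its trait value, and $\Delta z_i$ the change in the trait during transmission. The second term is the “nonselection” component the authors identify with mutation pressure.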

In “Evolved attitudes to risk and the demand for equity,” Robson and Orr apply the terms natural selection, fitness, and survival to financial planning. Risk-taking and choice by real people with minds and values become, on this view, the same kind of thing as the foraging strategies of cattle.

Design Advocates Beware

A theory this plastic makes any debate about Darwinism all but impossible to win. Tackle the genie here, and he will reappear over there. He can always outsmart the debater by shape-shifting into another form. Natural selection is a meaningless concept if the professors over in the Economics building are like evolving bacteria in a long-term evolution experiment. 

These papers give an appearance of erudition through an illusion of mathematical rigor (e.g., Robson and Orr speak of “Convex–Concave Ψ in Biology and Economics”) but what does natural selection really do to scientific explanation? If all human choice and action reduce to selection pressures acting on mindless objects, the intellectual world implodes. Even the writing of scientific papers about “evolutionary models of financial markets” becomes nothing more than a survival strategy.

The Last Laugh 

In his essay The Abolition of Man, C. S. Lewis warned that scientism is dehumanizing to science itself. The Darwinists are today’s Conditioners teaching the populace about the true nature of things. They view themselves as victors in the conquest of Nature, “explaining away” and “seeing through” human values, which are now “mere natural phenomena” like natural selection. 

This is not a victory, Lewis says, but a defeat. It is not conquering medieval magic, but embracing it. Lewis does not suppose that the Conditioners are bad men; “They are, rather, not men (in the old sense) at all. They are, if you like, men who have sacrificed their own share in traditional humanity in order to devote themselves to the task of deciding what ‘Humanity’ shall henceforth mean.” 

The last laugh is for Darwin’s demon. He tricked them. He manifested himself as a genie of explanation. He promised to bring them enlightenment, the ability to see through the outward appearance of things to their true natures. He promised to explain away human values in natural terms.

But you cannot go on ‘explaining away’ forever: you will find that you have explained explanation itself away. You cannot go on ‘seeing through’ things for ever. The whole point of seeing through something is to see something through it. It is good that the window should be transparent, because the street or garden beyond it is opaque. How if you saw through the garden too? It is no use trying to ‘see through’ first principles. If you see through everything, then everything is transparent. But a wholly transparent world is an invisible world. To ‘see through’ all things is the same as not to see.

Sunday, 4 July 2021

On origins and loaded dice.

 

Bernoulli, Keynes, and the Big Bang

Robert J. Marks II

Jacob Bernoulli made a now obvious observation about probability over three-and-a-half centuries ago: If nothing is known about the outcome of a random event, all outcomes can be assumed to be equally probable. Bernoulli’s Principle of Insufficient Reason (PrOIR) is commonly used. Throw a fair die. There are six outcomes, one for each face of the cube. The chance of getting five pips showing on the roll of a die is therefore one sixth. If a million lottery tickets are sold and you buy one ticket, the chances of winning are one in a million. This reasoning is intuitively obvious. 

If the Die Is Loaded

The assumption about the die is wrong if the die is loaded. But you don’t know that. You know nothing. So Bernoulli’s PrOIR provides the best model based on the known. If the lottery is fixed and you’re not in on the fix, your chances of winning will be less than one in a million. Maybe zero. But you don’t know the game is fixed. You know and assume nothing. Under the circumstances, equal probability is the best assumption you can make.
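
A small simulation (ours, not Marks’s) makes the point. If the die is secretly loaded, the uniform assumption is factually wrong, yet with no information about the loading it remains the only defensible model; any other assignment would smuggle in knowledge the observer does not have.

```python
import random

# A secretly loaded die: face 6 comes up three times as often as any other face.
weights = [1, 1, 1, 1, 1, 3]
rolls = random.choices(range(1, 7), weights=weights, k=100_000)

uniform_estimate = 1 / 6                      # what an observer who "knows nothing" assigns
empirical_freq_of_5 = rolls.count(5) / len(rolls)

print(f"PrOIR estimate for a five: {uniform_estimate:.3f}")
print(f"Actual long-run frequency: {empirical_freq_of_5:.3f}")   # about 0.125 for this loading
```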

In analysis of fine-tuning, No Free Lunch Theorems, and conservation of information, Bernoulli’s PrOIR is foundational. In thermodynamics, uniform distributions correspond to maximum entropy. In the absence of air currents or thermal gradients, the temperature is the same in the middle of the room as it is in the corners.  
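
That correspondence can be stated compactly (a standard result, not specific to Marks’s article): for a distribution over n outcomes,

$$
H(p) = -\sum_{i=1}^{n} p_i \log p_i \;\le\; \log n,
$$

with equality exactly when every $p_i = 1/n$. The uniform assignment of Bernoulli’s PrOIR is thus the maximum-entropy, least-informative choice.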

Those who disagree with Bernoulli’s PrOIR consistently misapply the principle. They don’t appreciate the definition of “knowing nothing.” The concept of “knowing nothing” can be tricky. The sentences “knowing nothing means knowing something” and “knowing nothing means knowing nothing” are both curious puns.

Strange Ideas in Economics

The most visible opposition to Bernoulli’s PrOIR comes from the economist John Maynard Keynes, who is most famous for some strange ideas in Keynesian economics. Keynes’ problems with Bernoulli’s assumption are discussed in his book A Treatise on Probability. Two of his objections, Bertrand’s Paradox and the distribution of reciprocals, are soundly debunked in Introduction to Evolutionary Informatics by Ewert, Dembski and me.

A third argument made by Keynes stems from the sort of data economists deal with. Here is the example: You are presented with a man who is either from Great Britain or France. You know nothing about the selection process. Bernoulli then says the chance that the man is French is one half. Consider a second situation where locations are finer grained. You are told the visitor is either from Scotland, Wales, or France. Is the chance the man is French now one third? Since both Scotland and Wales are part of Great Britain, what does this say about the first answer, where the chance of being French is one half? Is this a case where Bernoulli’s PrOIR breaks down?

No. In reaching this contradiction, Keynes knew something. He did not “know nothing,” as required by Bernoulli’s PrOIR.
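
Spelled out (our gloss on Marks’s point, not a quotation), the two assignments answer two different states of knowledge:

$$
P(\text{French}\mid\{\text{Great Britain},\ \text{France}\}) = \tfrac{1}{2},
\qquad
P(\text{French}\mid\{\text{Scotland},\ \text{Wales},\ \text{France}\}) = \tfrac{1}{3}.
$$

Treating Scotland and Wales as separate, equally likely alternatives to France already uses knowledge of how Great Britain decomposes, so the second assignment is not made from a position of “knowing nothing,” and no genuine contradiction with Bernoulli’s PrOIR arises.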

Read the rest at Mind Matters News, published by Discovery Institute’s Walter Bradley Center for Natural and Artificial Intelligence.

Saturday, 3 July 2021

Time to start calling technology technology?

 

Recasting Darwin Stories into Engineering Models

Evolution News 

Not all change is “evolution” in the Darwinian sense. Darwin theorized that every change was the result of unguided variations somehow “selected” by the environment for reproductive success and survival. But what if organisms were engineered to survive in changing environments? What if a designer had the foresight to install mechanisms in the genetic code that would switch on under stressful circumstances? Stickleback fish offer an opportunity to test those alternatives.

The three-spine stickleback has been Michael Bell’s evolutionary pet since he retired from Stony Brook University. News from U.C. Berkeley tells how he became intrigued by these 2.5” fish that swim up Alaskan streams to spawn. They are his version of Darwin’s finches, “evolving” in short enough timeframes to shed light on the mechanisms of adaptation. They have lately been among evolutionists’ favorite icons demonstrating the truth of Darwinian evolution.

Michael Bell, currently a research associate in the University of California Museum of Paleontology at UC Berkeley, stumbled across one such natural experiment in 1990 in Alaska, and ever since has been studying the physical changes these fish undergo as they evolve and the genetic basis for these changes. He has even created his own experiments, seeding three Alaskan lakes with oceanic sticklebacks in 2009, 2011 and 2019 in order to track their evolution from oceanic fish to freshwater lake fish. This process appears to occur within decades — very unlike the slow evolution that Charles Darwin imagined — providing scientists a unique opportunity to actually observe vertebrate adaptation in nature. [Emphasis added.]

Writers at Evolution News have commented on stickleback “evolution” for years, arguing that the changes are microevolutionary at best, simply oscillating back and forth with no net fitness gains. The CELS event last month, though, provided an opportunity to look at the empirical data from an engineering perspective. Were these marine fish equipped with mechanisms to adapt when trapped in freshwater lakes, finding themselves surrounded by different ecological conditions? 

Puzzling Observations for Darwinists 

Before analyzing the scientific paper, note that the news release mentions some observations that Darwinian biologists should find puzzling. For one, the “evolution” was very rapid: within a decade or less, the offspring of the trapped fish had adjusted to their new surroundings. For another, similar genetic changes were found in populations that had “evolved” independently. Additionally, the code for adaptation seems to be embedded in the fish before they adapt.

The title of the paper in Science Advances, by Garrett A. Roberts Kingman et al., also looks curiously out of sync with traditional Darwinism: “Predicting future from past: The genomic basis of recurrent and rapid stickleback evolution.” Isn’t Darwinian evolution unguided and therefore unpredictable? Eighteen authors, besides Michael Bell, hailing from 11 institutions in 8 states and one from Germany, participated in this heroic attempt to document evolution and to elevate stickleback fish to the iconic stature of Darwin’s finches. Those birds, in fact, figure prominently in the paper. The team believes that their findings will help explain the adaptive success of Darwin’s finches and other species that show rapid adaptation to a changed environment.

Similar forms often evolve repeatedly in nature, raising long-standing questions about the underlying mechanisms. Here, we use repeated evolution in stickleback to identify a large set of genomic loci that change recurrently during colonization of freshwater habitats by marine fish. The same loci used repeatedly in extant populations also show rapid allele frequency changes when new freshwater populations are experimentally established from marine ancestors. Marked genotypic and phenotypic changes arise within 5 years, facilitated by standing genetic variation and linkage between adaptive regions. Both the speed and location of changes can be predicted using empirical observations of recurrence in natural populations or fundamental genomic features like allelic age, recombination rates, density of divergent loci, and overlap with mapped traits. A composite model trained on these stickleback features can also predict the location of key evolutionary loci in Darwin’s finches, suggesting that similar features are important for evolution across diverse taxa.

Standing Genetic Variation 

A key element of the new model is standing genetic variation (SGV), mentioned a dozen times in the paper. As opposed to de novo mutations, which arise randomly over time in traditional neo-Darwinism, standing genetic variation is already present within a population. Moreover, these “ancient adaptive alleles” can be linked to other alleles in what they call EcoPeaks that confer adaptive success to the organism. Is this beginning to sound more like internal programming indicative of foresight? Perhaps that is why there is no operative mention of Darwinian evolution, neo-Darwinism, or random variation/mutation in the paper. It’s not that the authors disbelieve or discredit old neo-Darwinism. They just find a short-term process that is observable and predictable:

Although the predictability of evolution may appear to be in conflict with the unpredictability of historical contingency, understanding the past can yield important insights into future evolution. For example, vertebrate populations frequently harbor large reservoirs of standing genetic variation (SGV) that give independent populations access to similar raw genetic material to respond to environmental challenges, as observed in diverse species including songbirds, cichlid fishes, and the threespine stickleback (Gasterosteus aculeatus). SGV is often apparent in divergent species or populations where it is pretested by natural selection and then distributed by hybridization to related populations. Thus filtered and capable of leaping up fitness landscapes, SGV can also drive rapid evolution, helping address a very real practical challenge to testing evolutionary predictions: time.
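
Whatever one makes of the interpretation, the speed claim itself is easy to illustrate. Here is a toy Wright-Fisher-style sketch (our own, with arbitrary parameters, not the paper’s model): one population starts with the adaptive allele already segregating as standing variation at 5 percent, while the other must wait for it to arise by de novo mutation before selection can act.

```python
import random

def generations_to_spread(pop_size=1000, s=0.1, start_freq=0.0, mut_rate=1e-5, max_gen=100_000):
    """Generations until a beneficial allele (advantage s) reaches 95% frequency.
    start_freq > 0 models standing genetic variation; start_freq = 0 models waiting
    for a de novo mutation (per-individual, per-generation rate mut_rate)."""
    p = start_freq
    for gen in range(1, max_gen + 1):
        if p == 0.0 and random.random() < 2 * pop_size * mut_rate:
            p = 1.0 / (2 * pop_size)            # a new mutant copy appears
        if p > 0.0:
            p = p * (1 + s) / (p * (1 + s) + (1 - p))                                    # selection
            p = sum(random.random() < p for _ in range(2 * pop_size)) / (2 * pop_size)   # drift
        if p >= 0.95:
            return gen
    return max_gen

random.seed(1)
print("standing variation (start at 5%):", generations_to_spread(start_freq=0.05), "generations")
print("waiting for a new mutation:      ", generations_to_spread(start_freq=0.0), "generations")
```

On typical runs the standing-variation case sweeps within roughly a hundred generations, while the de novo case takes several hundred to a few thousand, which is the “time” advantage the quoted passage is pointing to.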

Aha! This Is Rich

They basically say, “We can’t watch natural selection work in real time, but we can observe mutations that were pre-selected to leap up fitness peaks. Whether in species of fish or birds, individuals can just borrow the pre-adapted alleles by hybridization and get through hard times. See? Evolution is predictable after all!” This is how dogmatic Darwinists can have their cake and eat it, too. Mutations are still random, but they occurred in the invisible past. What we have now are pools of pre-selected genes able to help organisms evolve quickly and predictably. Evolution is still a fact!

Stickleback fish provide an outstanding system for further study of the genomic basis of recurrent evolution. At the end of the last Ice Age, threespine stickleback, including anadromous populations that migrate from the ocean to freshwater environments to breed, colonized and adapted to countless newly exposed freshwater environments created in the wake of retreating glaciers around the northern hemisphere. This massively parallel adaptive radiation was facilitated by natural selection acting on extensive ancient SGV. Under the “transporter” hypothesis, these variants are maintained at low frequencies in the marine populations by low levels of gene flow from freshwater populations. Reuse of ancient standing variants has enabled identification of genomewide sets of loci that are repeatedly differentiated among long-established stickleback populations. In addition, SGV enables new freshwater stickleback populations to evolve markedly within decades, including conspicuous phenotypic changes in armor plates and body shape.

What if those adaptive alleles instead were engineered? A designing intelligence would have the foresight to provide organisms with a toolkit for adapting to changed environments. If so, one would expect organisms to already possess the tools (standing genetic variation) or a means to get them (hybridization). One would expect populations to adapt quickly and independently, not gradually. Consequently, the fossil record would be characterized by systematic gaps. Which model fits the evidence?

Pretested Adaptive Information

Evolutionists were complaining about gaps in the fossil record long before Stephen Jay Gould spoke of them as the “trade secret of paleontology.” The gaps were explained away by punctuated equilibria and other rescue devices, arguing that evolution occurred too fast to leave fossils but too slow to observe. Well, these 19 authors are now saying that adaptation can be observed, but what happens is not natural selection of random mutations. It’s genetic sharing of pretested adaptive information. That is why Darwin’s finches quickly adapt to droughts and availability of food sources. That is why stickleback fish can gain and lose armor, depending on the predation ecology. The authors insist that their model improves old evolutionary theory:

The importance of SGV for evolution is becoming increasingly apparent, especially in species with large genome sizes, including humans. At first glance, the dependence of threespine stickleback on SGV for freshwater adaptation may appear to be a peculiarity in terms of repeatability and speed and their particular natural history. However, by more comprehensively understanding the dynamics of this highly optimized process, we have extracted general features of genome architecture and evolution that successfully translate to species on distant branches of the tree of life, thus demonstrating the tremendous power of the stickleback system to identify unifying principles that underlie evolutionary change.

But if this is a “highly optimized process” around the tree of life (or, better, network of life), how is it Darwinian? The paper says precious little about fitness, survival, and speciation — terms that used to be centerpieces of evolutionary theory. The idea of progressive evolution is also merely assumed, not demonstrated:

This suggests that individual regions may grow over time, with alleles originally based on an initial beneficial mutation accumulating additional linked favorable mutations, snowballing over time to form a finely tuned haplotype with multiple adaptive changes. This is consistent with work in other species identifying examples of evolution through multiple linked mutations that together modify function of a gene (50–52) and implies that progressive allelic improvement may be common.

Their three examples in the references, however, only refer to regulatory effects on existing genes — not the origin of species that Darwin wished to explain. Their new model actually sounds designed: organisms can borrow existing know-how supplied to them in a vast library of SGV.

No Need for Excuses

Today’s engineering-conversant biologists have no need of the old excuses for rescuing neo-Darwinism’s gradualism, which contradicts the fossil evidence. Adaptive alleles can be viewed not as a haphazard pool of pre-filtered random mistakes that just happen to work, but as sets of tools for surviving in a dynamic world. This new paper, which does not provide any evidence for randomness or gradualism, proposes a distributed network strategy that looks like good design. Just as each car does not need to carry every tool if it can be obtained from a warehouse, each organism does not need to carry all possible adaptive alleles if it can obtain what it needs from the population’s library. That’s a design strategy that engineering-aware biologists may wish to develop, using this paper (sans its neo-Darwinist assumptions) as evidence. 

Thursday, 1 July 2021

More light, less heat on exoplanets.

 


Study: Planets Capable of Sustaining Photosynthesis Are Extremely Rare

Casey Luskin

Headlines currently buzzing around the Internet are saying things like “Earth-like worlds capable of sustaining life may be less common than we thought” (CNET) or “There Is Only One Other Planet In Our Galaxy That Could Be Earth-Like, Say Scientists” (Forbes). While some might find it encouraging that there’s one other Earth-like planet in our galaxy, when you consider that there are 100 billion stars in the Milky Way galaxy alone, this certainly makes it sound like habitable planets are pretty special. 

The claims are based upon a new study in Monthly Notices of the Royal Astronomical Society, “Efficiency of the oxygenic photosynthesis on Earth-like planets in the habitable zone.” Photosynthesis, of course, is the basis of the biosphere for life on Earth, as the paper explains: 

“Photosynthesis is the dominant process, as it allows us to produce about 99 per cent of the entire biomass of the Earth biosphere. OP is also essential for providing abundant O2 levels which appear to be necessary for the high-energy demands of multicellular life anywhere in the Universe.”

A planet that can sustain photosynthesis thus has the ability to sustain a wide variety of other life forms. The study aims to estimate the ability of a planet to sustain oxygenic photosynthesis (OP) given three parameters: (1) the photon flux (i.e., the amount of light), (2) the “exergy,” which is a measure of the amount of work that can be done given the radiation input, and (3) something that can only be put in the words of the authors: “the exergetic efficiency of the radiation in the wavelength range useful for the oxygenic photosynthesis as a function of the host star effective temperature and planet-star separation.”
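
As a rough illustration of the first parameter only (our back-of-the-envelope sketch, not the authors’ method), the photosynthetically active photon flux reaching a planet can be estimated from the Planck spectrum of its star and the inverse-square law. The constants are standard; the simplification ignores the planet’s atmosphere and any band-dependent absorption.

```python
import numpy as np

# Rough sketch: photosynthetically active (400-700 nm) photon flux at a planet,
# treating the star as a blackbody and ignoring the planet's atmosphere.
h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

def par_photon_flux(T_star, R_star, d, lam_min=400e-9, lam_max=700e-9, n=2000):
    """Photon flux (photons m^-2 s^-1) in the 400-700 nm band at distance d from the star."""
    lam = np.linspace(lam_min, lam_max, n)
    B = (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T_star))   # Planck spectral radiance
    photons_surface = np.pi * B / (h * c / lam)   # photons leaving the stellar surface, per wavelength
    return photons_surface.sum() * (lam[1] - lam[0]) * (R_star / d) ** 2   # integrate, dilute by distance

# Sun (5772 K, R = 6.957e8 m) seen from 1 au: on the order of 1e21 photons m^-2 s^-1
print(f"{par_photon_flux(T_star=5772, R_star=6.957e8, d=1.496e11):.2e}")
```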

Earth Has Highest Exergetic Efficiency

Let’s cut to the chase: The paper finds that among a database of planets both inside and outside our solar system, Earth is far and away the planet that is best-suited for life, with only one other planet with a radiation input that could possibly sustain oxygenic photosynthesis:

Earth is … the rocky planet with the largest PAR photon flux and with the highest exergetic efficiency. However, we also find that Kepler-442b receives a PAR photon flux slightly larger than the one necessary to sustain a large biosphere, similar to the Earth biosphere. So, it is likely that a Kepler-442b biosphere would not be light-limited.

Now of course the amount and type of radiation reaching a planet are crucial for its habitability. But there is a whole suite of additional features that are needed for advanced life to exist. This includes the presence of water (as well as the proper distance from the host star for the water to be in liquid form), the availability of necessary elements such as hydrogen, carbon, oxygen, and nitrogen, and the proper balance of other compounds such as carbon dioxide. To give a more complete list, planetary habitability requirements seem to include:

  • Rocky planet with active plate tectonics to recycle elements needed for life
  • Presence of sufficient water in the crust
  • Large moon with right rotation period and distance
  • Right planetary mass
  • Presence of magnetic field
  • Location within circumstellar habitable zone which allows liquid water to exist
  • Low-eccentricity orbit to allow for stable climate
  • Presence of large Jupiter-mass planetary neighbors in large circular orbits 
  • Location outside spiral arm of galaxy and far enough from center of galaxy – the Galactic Habitable Zone
  • Near co-rotation circle of galaxy, in circular orbit around galactic center
  • Stable radiation output of host star
  • Atmosphere which can allow visible light to penetrate to surface yet block out harmful radiation

We know that Earth meets all of these requirements, but does Kepler-442b? At this point we simply don’t know. 

A Planetary System Fit for Life

In his 2018 book Children of Light, Michael Denton elaborates on the last item in the list above — special properties of Earth’s atmosphere which allow radiation needed for “light-eating” organisms to reach the surface yet block out forms of radiation which are destructive to organic molecules. He explains that the electromagnetic radiation emitted by our sun is especially suited to the needs of life, and the atmosphere of Earth allows the precise wavelengths of radiation that are needed for photosynthesis:

[T]he electromagnetic radiation emitted by the Sun (and that of most other stars) is almost entirely light and heat (or infrared), which have precisely the characteristics needed for life, especially advanced life, to thrive on the Earth’s surface. Light is required for photosynthesis and heat is required to raise the Earth’s temperature to well above freezing and preserve liquid water on Earth.

It is only because of the precise absorption characteristics of Earth’s atmospheric gases that most of the light radiation emitted by the Sun reaches the Earth’s surface where it drives the chemical process of photosynthesis upon which we “light eaters” ultimately depend. And the same atmospheric gases which let the light through for photosynthesis absorb a portion of the infrared (IR) radiation, which warms the Earth and preserves water as a liquid on the Earth’s surface. Adding to the miracle, both the atmospheric gases and liquid water, the matrix of carbon-based life, not only let through the right light but strongly absorb all the dangerous types of radiant energy on either side of the visual and infrared regions of the electromagnetic spectrum, a vital property without which no advanced life forms would grace the surface of the Earth. 

CHILDREN OF LIGHT, PP. 15-16

Not only is Earth’s atmosphere precisely suited for the forms of EM radiation needed for life, but Denton explains that there’s a coincidence of chemistry wherein the very wavelengths of light that pass through our atmosphere can activate organic molecules yet not destroy them: 

Within this Goldilocks region, the light is not so energetic as to cause chemical disruption of organic matter, but it is energetic enough to gently activate organic molecules for chemical reaction. In other words “just right.” No other EM radiation will do! As Wald points out, it is not that life adapted to the right light but that the right light is the only light that provides the correct energy levels for photochemistry

“There cannot be a planet on which photosynthesis or vision occurs in the far infrared or far ultraviolet, because these radiations are not appropriate to perform these functions. It is not the range of available radiation that sets the photobiological domain, but rather the availability of the proper range of wavelengths that decides whether living organisms can develop and light can act upon them in useful ways.”

CHILDREN OF LIGHT, P. 25, EMPHASIS IN ORIGINAL

Earth and our sun together form a finely balanced system, one tuned to allow photosynthesis to occur. We don’t know that this overall system exists on Kepler-442b. 

Underestimating the Complexity of Photosynthesis

Have you ever heard of those “ghost malls” in China? They were fully built and ready for business — including one that was arguably the largest mall in the world. Everything was just right for business, but they were missing one thing: vendors and customers. In a similar way, having the right kind of sun and planetary atmosphere is not the only requirement for photosynthesis. You could have a planet that is perfectly habitable for life, yet if life never arises it will remain empty. Organisms capable of photosynthesis must themselves somehow arise. But how? Reflecting a common form of evolutionary thinking, the new paper discussed above seems to suggest that once the right conditions are present, photosynthesis can evolve quite easily. In one passage it describes the familiar basic chemical equation for photosynthesis and then lauds its “overall simplicity”:

6CO2 + 6H2O + light → C6H12O6 + 6O2 (1)

We conjecture that the chemical reaction (1) should be quite common in the cosmos because of the generally large amounts of radiation received by exoplanets from their host stars, the availability of the input ingredients, and its overall simplicity supported by the fact that OP evolved very early on Earth.

As the paper suggests, photosynthesis is the conversion of light into chemical energy. This is a process that happens in the leaves of plants and requires five highly specialized protein complexes (photosystems I & II, cytochrome bf complex, NADPH reductase, and ATP synthase) for the light reactions and 11 enzymes for the dark reactions. Briefly, in the light reactions, a photon is captured by an antenna pigment, and the energy of excited electrons is then transferred to chlorophyll molecules. A high-energy electron from chlorophyll is passed through an electron transport chain which causes the pumping of protons across a membrane. Those protons are used to power the ATP synthase molecular machine which generates ATP, the energy molecule of the cell. And of course ATP synthase alone is a multicomponent, irreducibly complex molecular machine.


The dark reactions of photosynthesis are where carbon is fixed: CO2 is taken in and converted into a carbohydrate that the organism can further use. This process, the Calvin cycle, runs through a series of 11 enzyme-controlled steps, a tightly controlled sequence that generates energy molecules, structural molecules, or both, depending on what the cell needs.
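
A rough tally using standard textbook stoichiometry (our arithmetic, not figures from the paper) shows how much coordinated machinery stands behind reaction (1): fixing the six CO2 molecules in one glucose costs on the order of 18 ATP and 12 NADPH from the light reactions, which in turn implies dozens of photons captured and passed through both photosystems.

```python
# Idealized textbook stoichiometry for one glucose (C6H12O6); real quantum yields are lower.
co2_per_glucose = 6
atp_per_co2, nadph_per_co2 = 3, 2        # Calvin-cycle cost per CO2 fixed
photons_per_nadph = 4                    # 2 electrons, each excited once at PSII and once at PSI

atp_needed   = co2_per_glucose * atp_per_co2       # 18
nadph_needed = co2_per_glucose * nadph_per_co2     # 12
photons_min  = nadph_needed * photons_per_nadph    # 48 (extra photons are needed to cover the ATP shortfall)

print(f"ATP: {atp_needed}, NADPH: {nadph_needed}, photons (minimum): {photons_min}")
```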

Michael Denton provides his own nice sketch of the complexity of photosynthesis:

The primary event on which the whole process of photosynthesis depends is the capture or absorption of photons of light by the photosynthetic pigments (chiefly the green pigment chlorophyll) in the thylakoid membranes (which surround the so-called thylakoid discs in the chloroplast). When the chlorophyll molecules situated in these membranes capture photons, the energy imparted activates electrons in the chlorophyll, raising them to higher energy levels. (Each photon absorbed raises one electron to a higher energy level.) 

This allows the electrons to escape from the chlorophyll, leaving the chlorophyll molecules positively charged or oxidized. (The loss of electrons is oxidation.) The positively charged chlorophylls draw electrons from water molecules (H2O) in the oxygen-evolving complex (OEC), oxidizing them and releasing at the same time free oxygen (O2) molecules, as well as protons (H+) and electrons (e-).

Water [H2O] → Oxygen [O2] + protons [H+] + electrons [e-]

The energetic electrons escaping from the chlorophyll find their way to electron transport chains, where they flow “down” in discrete steps, releasing energy at each step, which is used to do work, pumping protons (H+) across a membrane (the thylakoid membrane) into the thylakoid lumen (a membrane-enclosed compartment in the chloroplast). These then flow back through the same membrane, providing energy to drive the synthesis of ATP (the cell’s chemical energy currency) by the enzyme ATP synthase. 

[…]

Overall, photosynthesis can be seen to occur in two stages. In the first stage, light-dependent reactions capture the energy of light and use it to make the energy-storage molecule ATP and the reducing agent NADPH. These light-dependent reactions occur in the thylakoid membranes. During the second stage (the Calvin cycle), the light-independent reactions use these products to reduce carbon dioxide. 

CHILDREN OF LIGHT, PP. 75-77

Of course each of these steps requires finely tuned enzymes, cofactors, and other biomolecules which facilitate the requisite chemical reactions. It may even represent an irreducibly complex system. 

So how did the paper determine that photosynthesis has an “overall simplicity,” despite the complexity just described? Only due, again, to evolutionary thinking: photosynthesis appears early in life’s history, and because they presume that unguided evolution is the only mechanism by which complex biological systems can arise, they therefore conclude that photosynthesis must be simple and easy to evolve. But as we have seen it’s not simple at all. Having the right conditions for photosynthesis to take place doesn’t in any way guarantee that photosynthesis will evolve. Compared to evolving the complexity of photosynthesis, obtaining the rare special conditions where a planet receives the EM radiation needed for photosynthesis seems like a much simpler task — even though it’s apparently very rare in the universe!