
Thursday, 6 July 2023

The physics of design?

 Physics, Information Loss, and Intelligent Design


In an earlier article, I showed that information ratchets do not exist in nature. The most that any mechanistic system can do is to reproduce the information already available within the system. Printing presses reproduce the typeset information placed in the mechanism by human operators. ChatGPT simply accesses and rearranges information originated by humans and uploaded to the Internet. No new information is produced in either case.

In a recent article, I introduced the physical concept of the generalized second law of thermodynamics, as a governing principle consistent with the Law of Conservation of Information, which William Dembski formulated with the claim that natural causes cannot increase complex specified information in a closed system over time. Here, I’ll seek to provide an explanation of the physics behind the generalized second law — a rationale for why natural processes destroy information.

A Starting Point

First, let’s consider something that may be more easily visualized than information flow. Imagine a system where heat flows from a hot region to a cold region under the constraint of the traditional second law of thermodynamics. The Clausius statement of that law tells us that nature works in such a way that heat never flows the other way around. When I add cold cream to my hot coffee, heat flows from the coffee into the cream, until the mixture comes to an equilibrium temperature. The one-way flow of heat is irreversible by natural causes, and the reason is rooted in the physics of how nature works.

We say an object is “hot” when the average kinetic energy of its component atoms is high. My hot coffee has molecules with higher kinetic energy, on average, than the molecules of the cold cream. When they are mixed in the cup, collisions between molecules occur. First-year physics students will be familiar with a problem asking for the final velocities of two colliding objects, in terms of their initial velocities. For head-on elastic collisions, using the laws of conservation of energy and momentum, the result is always that the slower object ends up moving faster and the faster object ends up moving slower. Cream added to hot coffee unavoidably gives a mixture in which the coffee has lost heat to the cream.
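To make the collision claim concrete, here is a minimal Python sketch (my illustration, not from the original article) of the standard first-year result, derived from conservation of momentum and kinetic energy; the masses and velocities are arbitrary toy values:

# Final velocities for a 1-D head-on elastic collision, from
# conservation of momentum and kinetic energy.
def elastic_collision(m1, v1, m2, v2):
    v1_final = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2_final = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1_final, v2_final

# A fast "coffee" molecule meets a slow "cream" molecule (equal masses):
v_fast, v_slow = elastic_collision(1.0, 5.0, 1.0, 1.0)
print(v_fast, v_slow)  # 1.0 5.0 -- the faster slows, the slower speeds up

Whatever values are chosen, the faster object always comes out slower and the slower object faster, which is the one-way character of heat flow in miniature.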

What about the less physical concept of information? How can we physically explain the relentless loss of information by natural processes? Information seems to be a nonphysical concept, but in our universe, it is stored in specific arrangements of physical states of matter. An intelligent mind can recognize specific arrangements of matter (such as molecules of ink that form letters on a page) that convey a meaningful message. In a different context, biochemists can recognize particular sequences of nucleotide bases in a genome that code for a functional protein. 

Linking Information and Observer

All the information that can be known by an observer about a system of any kind is contained within the quantum mechanical wavefunction of the system. My apologies for bringing up quantum mechanics, but its relevance here is that it serves as the link between the information of a system (anything from a single atom to a complex biomolecule to a macroscopic object) and an observer. Unless the wavefunction of the system is completely isolated from any environmental influence, it will suffer decoherence (loss of information) with the passage of time. In one sense, the wavefunction spreads out into the environment, meaning that the observer will have greater and greater uncertainty as to the state of the system as time goes on. The physical interaction of atoms or photons uncompromisingly causes this effect, with its resulting loss of information.

Some might argue that “luck” could result in an opposite outcome, with interactions causing an increase in information (in biochemistry, this would correlate with increased functional complexity). Why couldn’t this happen? Simply because there are always more ways to go wrong than to go right, when considering whether interactions will result in chaos or increased complex specified information. 
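As a back-of-the-envelope illustration of "more ways to go wrong than right" (my own sketch, with made-up numbers), if each interaction has many possible outcomes and only one counts as "right," the chance of a run of right outcomes collapses geometrically:

# Illustrative only: assume each step has `ways` equally likely outcomes,
# exactly one of which is "right". The chance of `steps` consecutive
# right outcomes is (1/ways)**steps.
def chance_of_lucky_run(ways, steps):
    return (1.0 / ways) ** steps

print(chance_of_lucky_run(10, 1))    # 0.1    -- one lucky draw is plausible
print(chance_of_lucky_run(10, 100))  # 1e-100 -- a long lucky run is not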

An increase in information requires not just one right choice (or lucky draw), but a long sequence of correct choices. Luck might happen once, but any gambler knows that if “lucky outcomes” keep happening against the odds, then the game is rigged. A “rigged game” in nature corresponds to a law of physics — in this case, a law causing information to increase over time by natural causes. Such a law cannot really exist, however, since we already have a law of nature that says the opposite. As I mentioned in a recent article, “Theistic Cosmology and Theistic Evolution — Understanding the Difference”: 

In our study of science, we have found that the laws of nature do not contradict one another. We don’t have laws of nature that only apply piecemeal.

Imagination and Freedom

Only by the action of non-physical intelligence can the natural process of decoherence and information loss be overcome. Information is meaningless apart from a rational mind, meaning the creation of new information requires more than knowledge. Increased information requires imagination and the freedom to creatively design complex outcomes that convey meaning or exhibit function. (See “Intelligence Is Unnatural, and Why That Matters.”)

The non-physical aspect of our intelligent minds can succeed in producing information because an intelligent mind can imagine a meaningful outcome and act to separate the components of a complex system from their natural mixed state into specific arrangements that actualize that outcome. This takes work, meaning it requires energy, but not energy alone, since without the guidance of a non-physical mind, energy cannot succeed in increasing information in a closed system. Intelligent design remains the only explanation consistent with the laws of physics for the increasing information content of living systems throughout life’s history on Earth.

Peace on earth: the real final frontier?

 William Shatner and Our Privileged Planet


William Shatner wrote about the experience of space flight, back in December in The Guardian:

Last year, at the age of 90, I had a life-changing experience. I went to space, after decades of playing a science-fiction character who was exploring the universe and building connections with many diverse life forms and cultures. I thought I would experience a similar feeling: a feeling of deep connection with the immensity around us, a deep call for endless exploration. A call to indeed boldly go where no one had gone before.

I was absolutely wrong. As I explained in my latest book, what I felt was totally different. I knew that many before me had experienced a greater sense of care while contemplating our planet from above, because they were struck by the apparent fragility of this suspended blue marble. I felt that too. But the strongest feeling, dominating everything else by far, was the deepest grief that I had ever experienced.

While I was looking away from Earth, and turned towards the rest of the universe, I didn’t feel connection; I didn’t feel attraction. What I understood, in the clearest possible way, was that we were living on a tiny oasis of life, surrounded by an immensity of death. I didn’t see infinite possibilities of worlds to explore, of adventures to have, or living creatures to connect with. I saw the deepest darkness I could have ever imagined, contrasting starkly with the welcoming warmth of our nurturing home planet.

I worry about the world my grandchildren will be living in when they are my age

This was an immensely powerful awakening for me. It filled me with sadness. I realised that we had spent decades, if not centuries, being obsessed with looking away, with looking outside. I played my part in popularising the idea that space was the final frontier. But I had to get to space to understand that Earth is, and will remain, our only home. And that we have been ravaging it, relentlessly, making it uninhabitable.

Shatner’s surprising revelation may be interpreted by some as an environmentalist creed, but I rather see it as a poetical formulation of the fine-tuning of Earth for life. It shows that atheists like Bill Nye are wrong when they say we are just an insignificant speck of dust in an average galaxy. We clearly are special and significant and certainly not an accident.

On a related subject, I was asked recently, “Why would a God create such an enormous universe and only give life to one tiny speck of dust?” Well, imagine you would like to teach two important lessons to your creatures: 1) you are very special; 2) but don’t become a megalomaniac because God is infinitely greater than you. The universe would get both points across pretty well.

Privatize everything?

 

Titans in the arena.

 

Darwin cross examined?

 

The Americas: a brief history.

 

On propaganda and propagandists?

 

Return of the soldier of fortune?

 

The future of aviation?

 

Another clash of Titans.

 

The ins and outs of black holes?

 

SETI and the elephant in the room.

 The Search for ET Artifacts Misses the Elephant.


Now that NASA’s freakishly powerful James Webb Space Telescope is fully operational, there is renewed chatter about the possibility of discovering extra-terrestrial life. Our galaxy alone is filled with hundreds of billions of stars, and the thinking among many space geeks is that some of these are bound to support living planets, and some of those are bound to have fostered intelligent creatures who have developed technology capable of sending radio signals and even spaceships to distant star systems, including our own.

An Elusive Mediocrity

Then there are the skeptics. In the summer of 1950, four top physicists were discussing the possibility that the universe is teeming with advanced extra-terrestrials. Later, when they sat down to lunch, one of them blurted, “But where is everybody?” In other words, if the universe is chock full of extra-terrestrial space cowboys, why aren’t we overrun with them? Instead, at the time there was not one verified piece of evidence of any extra-terrestrials, the various dodgy UFO reports in the popular media of the day notwithstanding.

The man who posed the question was Italian-American physicist Enrico Fermi, and his puzzler has come to be known as “Fermi’s Paradox.” The paradox persists because there still isn’t a single verified piece of evidence for extraterrestrials, and this despite hundreds of millions of dollars spent searching for them.

Of course, the situation is only puzzling if we assume nature burps up living planets and intelligent life as effortlessly as Michigan highways breed potholes in spring. Fermi’s Paradox dissolves if one assumes that a living planet like ours is extraordinarily rare, perhaps even unique. The problem is that such an explanation runs afoul of a principle cherished by many cosmologists, known as the mediocrity principle. According to this idea, not only are we not the center of the universe, we aren’t special in any way, and to think otherwise is egotistical, superstitious, benighted medieval nonsense.

Many opponents of Christianity cherish the mediocrity principle because it undercuts the Judeo-Christian belief that humans are special, the crown of God’s creation. Yet even some anti-religious materialists have abandoned the mediocrity principle, for one or both of two reasons.

First, there is mounting evidence that planets like Earth may be highly unusual, requiring as they do a long list of fortuitously fine-tuned parameters to support life. Second, even with a habitable planet to work with, nature faces the daunting challenge of conjuring a planet’s first life from non-life. Before you can invoke a Darwinian process of random mutation and natural selection, you first need a self-reproducing biological entity, and it’s increasingly clear that even the simplest such cell is extraordinarily sophisticated. How does a lifeless mix of chemicals in a primordial ocean toss together something like that? If it is even possible, it is a freak occurrence, like winning a multi-million-dollar lottery forty years in a row without anyone gaming the system.

The ET Enlightenment Myth

So, then, maybe we are the only intelligent creatures in the universe, a freak one-off in the history of the cosmos. Some conclude this and go even further to insist that we are utterly alone, without even a God in heaven to fill the void of cosmic loneliness. 

Yes, pretty dark, and I would guess, a pretty unpopular position, perhaps because humans are strongly attracted to the idea that some sort of wise savior figure is out there — if not God, then ancestral spirits or pagan gods or, in a contemporary incarnation, superhumanly wise and powerful ETs. The films 2001: A Space Odyssey (1968) and Contact (1997) are just two of the many science fiction stories that trade on this theme.

In Unbelievable: 7 Myths about the History and Future of Science and Religion, historian of science Michael Keas dedicates a chapter to what he calls the “ET enlightenment myth.” He says the cultural loss of faith in a creator God has left a void, and “the ET enlightenment myth now fills that void for many people.”

The Search for Alien Artifacts

Between those animated by this ET salvation myth and those who are simply curious about what is out there among all the billions of stars in the universe, there is a lot of interest in seeing if we have the means and the ingenuity to uncover evidence of extra-terrestrials. To that end, Space.com recently published a piece by Paul Sutter, “If Aliens Have Visited the Solar System, Here’s How to Find Clues They Left.” Sutter shifts the focus from the longstanding Search for Extraterrestrial Intelligence (SETI) to a companion strategy: 

So far, all searches for extraterrestrial life have come up empty. But there is another avenue that is relatively unexplored: the search for extraterrestrial artifacts (SETA). The idea behind this approach is that if aliens become advanced enough, they might want to explore the galaxy.… In the roughly 4.5-billion-year history of the solar system, these aliens would have had plenty of time to swing by our neighborhood and maybe leave a mark.

Sutter then distills a recent paper in which astronomers describe various types of alien remnants we might be able to discover, including “spacecraft, probes and even just trash,” either in space or on the surface of a planet or moon.

How would we tell a piece of ET tech from a natural space object? In many cases it might be a no-brainer, but not necessarily. For instance, it could be a fragment of advanced technology so foreign to ours that we wouldn’t immediately recognize it as technology. And we might have to decide from only a grainy telescopic photograph of the object. 

Nevertheless, Sutter is confident there are many ways we might detect ET artifacts or their aftermath. For instance, any spacecraft able to travel from a distant star system would possess an extraordinarily powerful propulsion system, potentially rendering its exhaust trail visible to the James Webb Space Telescope or the Chandra X-ray Observatory. Alternatively, “If aliens opened up a strip mine on Mercury, for example, we would still be able to see it today.” “Or… we may be able to find geochemical anomalies — the result of tinkering with chemical processes on a world (or just outright pollution).”

Hunting for ET artifacts may sound fringe, but it’s being pursued by mainstream astronomers.

SETI, SETA, and Design Detection

Set aside the question of whether such pursuits are money well spent. Consider instead a significant feature of both SETA and SETI. SETI got a major PR boost from the movie Contact, based on a novel by astronomer Carl Sagan. SETI employs radio telescopes to search the heavens for signals from extra-terrestrial civilizations. 

In Contact, a scientist detects a curious signal from the star system Vega, one that repeats a sequence of prime numbers. While the scenario is fictional, it conveys how SETI could readily determine if a signal was indeed from an alien intelligence rather than purely natural, such as the regular radio wave pulses emitted by a neutron star. A signal that embedded a long series of prime numbers isn’t something natural processes can generate. It’s an instance of complex specified information. If the signal embedded the same number over and over, it wouldn’t be complex — e.g., 3, 3, 3, 3, 3, etc. If it embedded a random series of numbers, it wouldn’t be specified. That is, it wouldn’t match an obvious purpose or preexisting pattern. The series of prime numbers embedded in the signal in Contact, however, does. It’s both specified and complex. In our experience, the creation of specified complexity — also known as complex specified information — is strictly the purview of intelligent agents, who, unlike natural processes, routinely generate such information in the form of novels, poems, text messages, software programs, and other artifacts. 
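As a toy sketch of the three cases just described (my own illustration, not anything from the film or from actual SETI practice), one can test a received number sequence against the independently given pattern of the primes:

# Toy design-detection sketch: a signal is "specified" here if it matches
# the independently given pattern of the prime numbers in order.
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def matches_prime_pattern(signal):
    primes, n = [], 2
    while len(primes) < len(signal):
        if is_prime(n):
            primes.append(n)
        n += 1
    return list(signal) == primes

print(matches_prime_pattern([3, 3, 3, 3, 3]))       # False: repetitive, not complex
print(matches_prime_pattern([7, 4, 12, 9, 2]))      # False: complex but unspecified
print(matches_prime_pattern([2, 3, 5, 7, 11, 13]))  # True: complex and specified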

There is much to explore in the logical steps involved in reasoning to intelligent design as the best explanation in such cases, but the gist of it runs like this: both reason and our uniform experience strongly suggest that natural processes do not and cannot produce specified complexity. The only type of cause we have ever witnessed doing so is creative intelligence. So creative intelligence is the best explanation when we find examples of specified complexity.

Strategies for differentiating extra-terrestrial artifacts from natural objects involve similar reasoning. Recall that Sutter says we might discover an ET artifact by looking for “geochemical anomalies — the result of tinkering with chemical processes on a world.” How would we know if a geochemical anomaly was the result of ET activity? Presumably by considering various explanations for the anomalies, both natural and artificial. If the anomalies are beyond the reach of natural processes to produce and possess the signature of intelligent design — that is, specified complexity — the researchers could reasonably infer that the geochemical anomalies were the product of intelligent design.

The Elephant in the Room 

Both SETA and SETI share something in common with the theory of intelligent design (ID). They are searching for evidence of the past creative activity of extra-terrestrial intelligence. The principal difference is that ID researchers do so without question-begging constraints on their search. If nature itself possesses the signature of intelligent activity, so be it. Follow the evidence. Contemporary microscopic technology has uncovered molecular biological machines of astonishing sophistication, far beyond our most advanced technologies. Even the simplest cell capable of self-reproduction is a nanotech factory that puts our most advanced factories to shame, and it requires reams of precisely sequenced biological information, much of it found in the four-character alphabet of DNA.

Physicist and engineer Brian Miller points out that mainstream systems biology operates on the working assumption that biological systems are optimally engineered systems. Many systems biologists, he says, pay lip service to evolutionary theory, but they don’t approach the biological systems they study as contraptions thrown together by mindless natural forces. They approach them as masterpieces of engineering, an approach that is proving extraordinarily fruitful. 

Miller offers an analogy. Imagine that what appears to be a highly advanced alien spacecraft is discovered abandoned in the desert. It’s quite different from our spaceships, right down to the materials used in it. Half the scientists who arrive to study it refuse to believe it’s an alien spacecraft and insist instead that it originated through purely natural processes — rain, wind, erosion, perhaps an odd volcanic eruption. The other half decides that, no, it really is an alien spacecraft, and they set about trying to understand how the various parts are meant to contribute to its overall function. Which group, Miller asks, do you think will make better progress toward understanding the alien craft? The group, of course, who recognize that it’s the work of intelligent and purposive design.

The same goes for biology. Those practicing systems biology are making much more progress than those committed to the old, reductionist Darwinian approach. 

Intelligent design biologists are in the camp of the systems biologists, but where they stand out from rank-and-file systems biologists is in adopting the design rubric frankly and wholeheartedly. That is, they are convinced that treating biological systems as works of high-tech engineering is proving fruitful precisely because the biological systems really are the work of high-tech engineering, in this case, an extra-terrestrial intelligence far beyond any depicted in Contact or 2001: A Space Odyssey.


Wednesday, 5 July 2023

Darwinists: did we say a tree? What we meant was...

 Evolutionists Walk it Back Again: Human Evolution is More a Muddy Delta Than a Branching Tree


First it was a tree, then it was a bush, then it was a network, and now it is a muddy delta. The evolutionary model of how the species are supposed to be related has failed over and over. And John Hawks’ latest version of this moving target reveals yet again that the theory of evolution is not explanatory, and that the evidence contradicts the theory. Hawks explains that the latest thinking on how the primates evolved “is no evolutionary tree. Our evolutionary history is like a braided stream.”

Evolution is a blind guide—it is always wrong. It is always pointing in the wrong direction, and evolutionists are always having to walk it back and do their damage control.

When will they learn?

The case for conditionalism

 

On design and "evolution"

 Peer-Reviewed Paper by Discovery Institute Staff Evaluates Synthesis of “Design and Evolution” 


We are pleased to announce the publication of a peer-reviewed open-access article by CSC staff members Stephen Dilley, Brian Miller, Emily Reeves, and me. Published in the journal Religions, our article is titled “On the Relationship Between Design and Evolution,” and it examines and reviews a scholarly book, The Compatibility of Evolution and Design (Palgrave Macmillan, 2021), which is arguably the best current treatment of the relationship between evolution and intelligent design from an evolutionary point of view. The book is written by E. V. Rope Kojonen, a theologian at the University of Helsinki, who offers a potent argument that mainstream evolutionary biology is fully compatible with a robust biological design argument. On his view, the wings of the hummingbird, for example, display evidence of design while also being the product of natural selection, random mutation, and other processes — all without the need for direct guidance, divine intervention, or intelligent supervision per se.

We regard Kojonen’s model as nuanced, erudite, and fair-minded. It is a model of fine scholarship and deserves serious attention. Even so, we argue that Kojonen’s conception of design is flawed, as is his attempt to harmonize design with evolution. We support our contentions with both scientific and philosophical arguments. Scientifically, we provide perhaps the most comprehensive defense of Douglas Axe’s research written to date as well as an updated analysis of the bacterial flagellum. Philosophically, we argue that Kojonen’s model undercuts itself. It gives an account of “design detection” that actually conflicts with Kojonen’s own design argument.

A Question about Compatibility

The abstract of the article is as follows:

 A longstanding question in science and religion is whether standard evolutionary models are compatible with the claim that the world was designed. In The Compatibility of Evolution and Design, theologian E. V. Rope Kojonen constructs a powerful argument that not only are evolution and design compatible, but that evolutionary processes (and biological data) strongly point to design. Yet Kojonen’s model faces several difficulties, each of which raise hurdles for his understanding of how evolution and design can be harmonized. First, his argument for design (and its compatibility with evolution) relies upon a particular view of nature in which fitness landscapes are “fine-tuned” to allow proteins to evolve from one form to another by mutation and selection. But biological data run contrary to this claim, which poses a problem for Kojonen’s design argument (and, as such, his attempt to harmonize design with evolution). Second, Kojonen appeals to the bacterial flagellum to strengthen his case for design, yet the type of design in the flagellum is incompatible with mainstream evolutionary theory, which (again) damages his reconciliation of design with evolution. Third, Kojonen regards convergent evolution as notable positive evidence in favor of his model (including his version of design), yet convergent evolution actually harms the justification of common ancestry, which Kojonen also accepts. This, too, mars his reconciliation of design and evolution. Finally, Kojonen’s model damages the epistemology that undergirds his own design argument as well as the design intuitions of everyday “theists on the street”, whom he seeks to defend. Thus, despite the remarkable depth, nuance, and erudition of Kojonen’s account, it does not offer a convincing reconciliation of “design” and “evolution”.

A Flawed Account of Design

We’ll have more to say here about this model in the coming weeks, but for now, it’s worth taking a look at some of the main points from our paper’s conclusion:

In this article, we argued that Kojonen’s account of design is flawed. It requires fine-tuned preconditions (and smooth fitness landscapes) so that evolution can successfully search and build viable biological forms. Yet empirical evidence shows that no such preconditions or fitness landscapes exist. At precisely the place we would expect to find evidence of Kojonen’s type of “design”, we find no such thing. Accordingly, his view of design is at odds with the evidence itself. As such, it is poorly situated to add explanatory value to evolution.

We also contended that Kojonen’s conjunction of “design” and “evolution” is internally fragmented. Recall that Kojonen believes that the complexity of the bacterial flagellum adds to his case for joining “design” to “evolution”. Yet Behe’s irreducible complexity argument shows that the type of design manifest in the bacterial flagellum runs contrary to mainstream evolution. Thus, the very system that provides strong evidence of design also undercuts evolution. In effect, this drives a wedge between the two. Kojonen’s conjunction of “design and evolution” is at war with itself.

We also highlighted the internal tension in Kojonen’s attempt to join “design” and “evolution” with respect to convergent evolution. Kojonen draws on convergence as a key argument for the “laws of form”, which are an important element of fine-tuned preconditions and, thus, his case for design. Yet convergent evolution conflicts with Kojonen’s use of co-option and approach to protein evolution. It also conflicts with the general justification of common ancestry. Thus, this element of Kojonen’s case for design chafes against his own reasoning as well as mainstream evolutionary thought. Internal discord surfaces once again.

In each of these criticisms, we have not targeted evolutionary theory itself. Although we believe that the scientific evidence we have covered counters mainstream evolution, we have set this concern aside in this article. Instead, our criticisms are aimed at Kojonen’s conception of design. We have contended that he does not offer sufficient empirical support for it — and so it adds little explanatory merit to “evolution” — and that some of the evidence he does offer actually conflicts with his commitment to evolution, producing incoherence within his model. (We should note, however, that because of the way Kojonen frames the matter, our criticisms of his view of design do have negative implications for the feasibility of evolutionary theory as he understands it. But this is an implication of our argument based on his own framing. It is not the focus of our argument per se. We will return to this point momentarily.)

Finally, we raised epistemological concerns aimed at the fundamental basis of Kojonen’s understanding of design detection. If our concerns are correct, then they cut deeply against Kojonen’s design argument as well as his defense of the theist on the street. In a nutshell, our worry is that a person who takes Kojonen’s model seriously — or who lived in such a universe — would either have defeaters for her biology-based design beliefs or might not have the cognitive dispositions and beliefs that (in our experience) are foundational to the formation of such beliefs in the first place. Kojonen’s reliance on evolution (and non-agent causes) undermines his basis for design detection, in short.

Stepping back, it is important to reiterate, once again, the many strengths of Kojonen’s treatment. The extensive review we have given here is a credit to a book of remarkable sophistication, precision, and erudition. Only a venerable fortress is worthy of a long siege. The Compatibility of Evolution and Design is the best of its class.

Devastating Implications for Evolution

But there’s one more point worth highlighting about Kojonen’s model. He effectively concedes that evolution won’t work to produce biological complexity unless there is some special “fine-tuning” of the “preconditions” for evolution (which themselves arise from designed laws of nature). We might agree with this framing, but we have shown that this fine-tuning does not seem to exist. Therefore, not only is his case for marrying design and evolution flawed but — if there are no such preconditions — evolution itself is impotent. Here’s how we frame this in the final paragraph:

Even so, we bring this article to a close on a poignant note: Kojonen’s model may have devastating implications for mainstream evolutionary theory. Recall that the heart of his proposal is that evolution needs design (in the form of fine-tuned preconditions). Evolution on its own is insufficient to produce flora and fauna. But if we are correct that Kojonen’s conception and justification of design are flawed, then it follows — by his own lights — that evolution is impotent to explain biological complexity. Kojonen’s own account of the efficacy of evolution depends upon the success of his case for design. But if the latter stumbles, then so does the former. In a startling way, Kojonen has set the table for the rejection of evolution. If he has failed to make his case for design, then he has left readers with strong reasons to abandon mainstream evolutionary theory. The full implications of this striking result warrant further exploration.

Despite our critique of Kojonen’s model, we find it stimulating and thoughtful. We invite interested readers of Evolution News to read our article and also to read The Compatibility of Evolution and Design.

Yet more primeval tech vs. Darwinism.

 

On the edge of Darwinism

 

Natural selection as conserved.

 

Photosynthesis vs. Darwinism

 

Paved with good intentions? II

 

It's official: Less is more

 

Too brash for Mikhail Tal?

 

Darwinism does not compute?

 On Evolutionary Computation

 

 Roman V. Yampolskiy

 

Editor’s note: Dr. Yampolskiy is Associate Professor in the department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. In this series, he asks: “What Can and Can’t Darwin’s Algorithm Compute?” See also yesterday’s post, the first in the series, “What Can and Can’t Darwin’s Algorithm Compute?”

Inspired by Darwin’s theory [1] of biological evolution, evolutionary computation attempts to automate the process of optimization and problem solving by simulating differential survival and reproduction of individual solutions. From the early 1950s, multiple well-documented attempts to make Darwin’s algorithm work on a computer have been published under such names as Evolutionary Programming [12], Evolutionary Strategies [13], Genetic Algorithms [14], Genetic Programming [15], Genetic Improvement [16], Gene Expression Programming [17], Differential Evolution [18], Neuroevolution [19], and Artificial Embryogeny [20]. While numerous variants differing in their problem representation and metaheuristics exist [21-24], all can be reduced to just two main approaches — Genetic Algorithm (GA) and Genetic Programming (GP).

GAs are used to evolve optimized solutions to a particular instance of a problem such as Shortest Total Path [25], Maximum Clique [26], Battleship [27], Sudoku [28], Mastermind [23], Light Up [29], Graph Coloring [30], integer factorization [31, 32], or efficient halftone patterns for printers [33], and so are not the primary focus of this paper. GPs’ purpose, from their inception, was to automate programming by evolving an algorithm or a program for solving a particular class of problems, for example an efficient search algorithm [34]. Software design is the type of application most frequently associated with GPs [35], but work in automated programming is also sometimes referred to as “real programming,” “object-oriented GP,” “algorithmic programming,” “program synthesis,” “traditional programming,” “Turing Equivalent (TE) programming” or “Turing-complete GP” [36-38].
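For readers unfamiliar with the paradigm, here is a minimal GA sketch in Python (my illustration, not code from the paper or its references): a population of candidate bit-strings is scored by a fitness function, the better half survives, and survivors reproduce with mutation.

import random

# Minimal genetic algorithm: evolve a bit-string toward all 1s.
# Fitness = number of 1s; the better half survives each generation,
# and each survivor produces one mutated child.
def evolve(length=20, pop_size=30, generations=100, mutation_rate=0.05):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)   # rank by fitness
        survivors = pop[:pop_size // 2]   # differential survival
        children = [[1 - bit if random.random() < mutation_rate else bit
                     for bit in parent] for parent in survivors]
        pop = survivors + children        # reproduction with variation
    return max(pop, key=sum)

best = evolve()
print(sum(best), best)  # fitness near 20 (all 1s) after 100 generations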

Tremendous Growth

The sub-field of computation inspired by evolution in general, and the Genetic Programming paradigm established by John Koza in the 1990s in particular, are thriving and growing exponentially. This is evidenced both by the number of practitioners and by the number of scientific publications. Petke et al. observe “…enormous expansion of number of publications with the Genetic Programming Bibliography passing 10,000 entries … By 2016 there were nineteen GP books including several intended for students …” [16]. Such tremendous growth has been fueled, since the early days, by belief in the capabilities of evolutionary algorithms, and our ability to overcome obstacles of limited computational power or data, as illustrated by the following comments:

“We will (before long) be able to run genetic algorithms on computers that are sufficiently fast to recreate on a human timescale the same amount of cumulative optimization power that the relevant processes of natural selection instantiated throughout our evolutionary past …” [39]

“As computational devices improve in speed, larger problem spaces can be searched.” [40]

“Evolution is a slow learner, but the steady increase in computing power, and the fact that the algorithm is inherently suited to parallelization, mean that more and more generations can be executed within practically acceptable timescales.” [41]

“We believe that in about fifty years’ time it will be possible to program computers by means of evolution. Not merely possible but indeed prevalent.” [42]

“The relentless iteration of Moore’s law promises increased availability of computational resources in future years. If available computer capacity continues to double approximately every 18 months over the next decade or so, a computation requiring 80 h will require only about 1% as much computer time (i.e., about 48 min) a decade from now. That same computation will require only about 0.01% as much computer time (i.e., about 48 seconds) in two decades. Thus, looking forward, we believe that genetic programming can be expected to be increasingly used to automatically generate ever-more complex human-competitive results.” [43]

“The production of human-competitive results as well as the increased intricacy of the results are broadly correlated to increased availability of computing power tracked by Moore’s law. The production of human-competitive results using genetic programming has been greatly facilitated by the fact that genetic algorithms and other methods of evolutionary computation can be readily and efficiently parallelized. … Additionally, the production of human-competitive results using genetic programming has facilitated to an even greater degree by the increased availability of computing power, over a period of time, as tracked by Moore’s law. Indeed, over the past two decades, the number and level of intricacy of the human-competitive results has progressively grown. … [T]here is, nonetheless, data indicating that the production of human-competitive results using genetic programming is broadly correlated with the increased availability of computer power, from year to year, as tracked by Moore’s Law.” [43]

“[P]owerful test data generation techniques, an abundance of source code publicly available, and importance of nonfunctional properties have combined to create a technical and scientific environment ripe for the exploitation of genetic improvement.” [40]

Tomorrow, “State-of-the-Art in Evolutionary Computation.”

References:

Back, T., Evolutionary algorithms in theory and practice: evolution strategies, evolutionary programming, genetic algorithms. 1996: Oxford University Press.

Mayr, E., Behavior Programs and Evolutionary Strategies: Natural selection sometimes favors a genetically “closed” behavior program, sometimes an “open” one. American Scientist, 1974. 62(6): p. 650-659.

Davis, L., Handbook of genetic algorithms. 1991: Van Nostrand Reinhold.

Koza, J.R., Genetic programming as a means for programming computers by natural selection. Statistics and computing, 1994. 4(2): p. 87-112.

Petke, J., et al., Genetic improvement of software: a comprehensive survey. IEEE Transactions on Evolutionary Computation, 2017.

Ferreira, C., Gene expression programming: mathematical modeling by an artificial intelligence. Vol. 21. 2006: Springer.

Storn, R. and K. Price, Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. Journal of global optimization, 1997. 11(4): p. 341-359.

Such, F.P., et al., Deep Neuroevolution: Genetic Algorithms Are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning. arXiv preprint arXiv:1712.06567, 2017.

Stanley, K.O. and R. Miikkulainen, A taxonomy for artificial embryogeny. Artificial Life, 2003. 9(2): p. 93-130.

Yampolskiy, R.V., L. Ashby, and L. Hassan, Wisdom of Artificial Crowds—A Metaheuristic Algorithm for Optimization. Journal of Intelligent Learning Systems and Applications, 2012. 4(2): p. 98-107.

Yampolskiy, R.V. and A. El-Barkouky, Wisdom of artificial crowds algorithm for solving NP-hard problems. International Journal of Bio-inspired computation, 2011. 3(6): p. 358-369.

Khalifa, A.B. and R.V. Yampolskiy, GA with Wisdom of Artificial Crowds for Solving Mastermind Satisfiability Problem. Int. J. Intell. Games & Simulation, 2011. 6(2): p. 12-17.

Lowrance, C.J., O. Abdelwahab, and R.V. Yampolskiy. Evolution of a Metaheuristic for Aggregating Wisdom from Artificial Crowds. in Portuguese Conference on Artificial Intelligence. 2015. Springer.

Hundley, M.V. and R.V. Yampolskiy, Shortest Total Path Length Spanning Tree via Wisdom of Artificial Crowds Algorithm, in The 28th Modern Artificial Intelligence and Cognitive Science Conference (MAICS2017). April 28-29, 2017: Fort Wayne, IN, USA.

Ouch, R., K. Reese, and R.V. Yampolskiy. Hybrid Genetic Algorithm for the Maximum Clique Problem Combining Sharing and Migration. in MAICS. 2013.

Port, A.C. and R.V. Yampolskiy. Using a GA and Wisdom of Artificial Crowds to solve solitaire battleship puzzles. in Computer Games (CGAMES), 2012 17th International Conference on. 2012. IEEE.

Hughes, R. and R.V. Yampolskiy, Solving Sudoku Puzzles with Wisdom of Artificial Crowds. Int. J. Intell. Games & Simulation, 2012. 7(1): p. 24-29.

Ashby, L.H. and R.V. Yampolskiy. Genetic algorithm and Wisdom of Artificial Crowds algorithm applied to Light up. in Computer Games (CGAMES), 2011 16th International Conference on. 2011. IEEE.

Hindi, M. and R.V. Yampolskiy. Genetic Algorithm Applied to the Graph Coloring Problem. in MAICS. 2012.

Yampolskiy, R.V., Application of bio-inspired algorithm to the problem of integer factorisation. International Journal of Bio-Inspired Computation, 2010. 2(2): p. 115-123.

Mishra, M., S. Pal, and R. Yampolskiy, Nature-Inspired Computing Techniques for Integer Factorization. Evolutionary Computation: Techniques and Applications, 2016: p. 401.

Yampolskiy, R., et al. Printer model integrating genetic algorithm for improvement of halftone patterns. in Western New York Image Processing Workshop (WNYIPW). 2004. Citeseer.

Yampolskiy, R.V., Efficiency Theory: a Unifying Theory for Information, Computation and Intelligence. Journal of Discrete Mathematical Sciences and Cryptography, 2013. 16(4-5): p. 259-277.

Rylander, B., T. Soule, and J. Foster. Computational complexity, genetic programming, and implications. in European Conference on Genetic Programming. 2001. Springer.

White, D.R., et al., Better GP benchmarks: community survey results and proposals. Genetic Programming and Evolvable Machines, 2013. 14(1): p. 3-29.

Woodward, J.R. and R. Bai. Why evolution is not a good paradigm for program induction: a critique of genetic programming. in Proceedings of the first ACM/SIGEVO Summit on Genetic and Evolutionary Computation. 2009. ACM.

Helmuth, T. and L. Spector. General program synthesis benchmark suite. in Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation. 2015. ACM.

Shulman, C. and N. Bostrom, How hard is artificial intelligence? Evolutionary arguments and selection effects. Journal of Consciousness Studies, 2012. 19(7-8): p. 103-130.

Becker, K. and J. Gottschlich, AI Programmer: Autonomously Creating Software Programs Using Genetic Algorithms. arXiv preprint arXiv:1709.05703, 2017.

Eiben, A.E. and J. Smith, From evolutionary computation to the evolution of things. Nature, 2015. 521(7553): p. 476.

Orlov, M. and M. Sipper, FINCH: A system for evolving Java (bytecode), in Genetic Programming Theory and Practice VIII. 2011, Springer. p. 1-16.

Koza, J.R., Human-competitive results produced by genetic programming. Genetic Programming and Evolvable Machines, 2010. 11(3-4): p. 251-284.

 

 

Tuesday, 4 July 2023

The decay of western civilisation continues apace?

 

Darwinian evolution’s love affair with crabs and the crablike?

 

Past time to engineer a climate solution?

 

The house of Abraham is a house divided?

 

The drone wars?

 

Thomas Jefferson on the book of creation.

 Thomas Jefferson’s Embrace of Intelligent Design


Editor’s note: Dr. Meyer’s most recent book is Return of the God Hypothesis: Three Scientific Discoveries That Reveal the Mind Behind the Universe.

On Independence Day, it is appropriate to review the sources of our rights as citizens. There is one source that is more basic than any other, yet it receives less attention than it deserves. I refer to the idea that there is an intelligent creator who can be known by reason from nature, a key tenet underlying the Declaration of Independence — as well as, curiously, the modern theory of intelligent design.

The birth of our republic was announced in the Declaration through the pen of Thomas Jefferson. He and the other Founders based their vision on a belief in an intrinsic human dignity, bestowed by virtue of our having been made according to the design and in the image of a purposeful creator.

“We Hold These Truths”

As Jefferson wrote in the Declaration, “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights.” If we had received our rights only from the government, then the government could justifiably take them away.

Jefferson himself thought that there was scientific evidence for design in nature. In 1823, he insisted as much in a letter to John Adams:

I hold (without appeal to revelation) that when we take a view of the Universe, in its parts general or particular, it is impossible for the human mind not to perceive and feel a conviction of design, consummate skill, and indefinite power in every atom of its composition.

Contemplating everything from the heavenly bodies down to the creaturely bodies of men and animals, he argued:

It is impossible, I say, for the human mind not to believe that there is, in all this, design, cause and effect, up to an ultimate cause, a fabricator of all things from matter and motion.

With such thoughts in mind, he wrote the Declaration, asserting the inalienable rights of human beings derived from “the Laws of Nature and of Nature’s God.”

Still Scientifically Credible?

Is Jefferson’s belief still credible in light of current science? The decades following Darwin’s publication of Origin of Species saw the rise of “social” Darwinism and eugenics, which suggested that the Jeffersonian principle of intrinsic dignity had been overturned.

Taken to heart, Darwin’s view of man does undermine the vision of the Founders. As evolutionary biologist George Gaylord Simpson explained, Darwinism denies evidence of design and shows instead that man is the product of a “purposeless process that did not have him in mind.” Fortunately, discoveries in modern biology have challenged this perspective and vindicated Jefferson’s thinking.

Since 1953, when Watson and Crick elucidated the structure of the DNA molecule, biologists have increasingly come to recognize the importance of information to living cells. The structure of DNA allows it to store information in the form of a four-character digital code, similar to a computer code. As Bill Gates has noted, “DNA is like a computer program, but far, far more advanced than any software we’ve ever created.”
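As a simple numerical illustration of the “four-character digital code” idea (my sketch, using a made-up toy sequence, not an example from the article): since each position in a DNA strand holds one of four bases, it can carry log2(4) = 2 bits.

import math

# Each DNA base is one of four symbols (A, C, G, T); treating the bases
# as equiprobable, each position carries log2(4) = 2 bits of information.
bits_per_base = math.log2(4)
sequence = "ATGGCGTACGATTGA"  # hypothetical toy sequence
print(bits_per_base)                  # 2.0
print(len(sequence) * bits_per_base)  # 30.0 bits in this 15-base string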

No theory of undirected chemical evolution has explained the origin of the digital information in DNA needed to build the first living cell on earth. Yet we know from repeated experience — the basis of all scientific reasoning — that information invariably arises from minds rather than from material processes.

Software programs come from programmers. Information — whether inscribed in hieroglyphics, written in a book, or encoded in radio signals — always comes from a designing intelligence. So the discovery of digital code in DNA points decisively back to an intelligent cause as the ultimate source of the information in living cells.

The growing evidence of design in life has stunning and gratifying implications for our understanding of America’s political history — and for our country’s future. On the anniversary of the Declaration of Independence, the evidence for “Nature’s God,” and thus for the reality of our rights, is stronger than ever.


Jonathan Wells vs. Darwinism's undead.

 

Iconoclasm?

 

Past time to rethink hell? Pros and Cons II

 

Past time to rethink hell? Pros and Cons.

 

David Berlinski drives a stake through the heart of NeoDarwinism?

 

The God and Father of Jesus is the most high God? Pros and Cons. III

 

Monday, 3 July 2023

Winter is coming?

 

Yet more on reality's anti-Darwinian bias

 

An E.C.T.-shaped hole?

 

The walls have ears?

 

Paved with good intentions?

 

Phillip Johnson vs. Charles Darwin.

 

The God and Father of Jesus is the most high God? Pros and Cons II.

 

The God and Father of Jesus Christ is the most high God? Pros and Cons

 

Pre-Darwinian design.

 The Design of the Seminal Fluid and Sperm Capacitation


In a previous article, I discussed the ways in which sperm cells exhibit irreducible complexity. Here, I will discuss the importance of the seminal fluid and how it contributes to the irreducibly complex core of components needed for successful reproduction. I will then consider the process of sperm capacitation, the mechanism that prepares the sperm cells for successful fusion with the egg.

The Seminal Fluid 

As I mentioned in my previous article, between two hundred and five hundred million sperm, surrounded by seminal fluid, are released with each ejaculation. Such huge numbers are necessary in order to have a significant chance of fertilizing the egg, since many hazards confront the sperm cells as they swim through the uterus and uterine tubes. Following ejaculation, millions of the released sperm cells will either flow out of the vagina, or else die in its acidic environment. Sperm cells also need to pass through the cervix and opening into the uterus, which requires passage through the cervical mucus. Though the mucus is thinned to a waterier consistency during the fertile window, making it more hospitable to sperm, millions of sperm cells will nonetheless die attempting to make it through the mucus. Furthermore, the female reproductive tract has immune defenses that protect against pathogens. These defenses can also target and destroy foreign cells like sperm. Antibodies may recognize sperm as foreign invaders and lead to their inactivation or elimination. There are also tiny cilia in the fallopian tube that propel the egg towards the uterus. Some of the remaining sperm will become trapped in the cilia and die. Only a small handful of the original sperm cells will make it as far as the egg. Thus, it is necessary that hundreds of millions of sperm cells are released in order to have a reasonable chance of the egg cell being fertilized.
Seminal fluid also provides essential nutrients to support the survival and motility of the sperm. These include fructose — which serves as a source of energy for the sperm, fueling the mitochondrial production of ATP — as well as other sugars, amino acids, and enzymes. If the seminal fluid did not contain fructose to power the mitochondria, this would have drastic implications for sperm cell motility and viability.

The seminal fluid is also alkaline. This is important because the vagina has an acidic pH, produced by the normal flora (bacterial populations) of the vagina. This environment would be unfavorable to sperm cells. But the alkalinity of the seminal fluid helps to neutralize the vagina’s acidic pH, assisting the survival of the sperm.

Following ejaculation, the seminal fluid initially coagulates to form a gel-like consistency. This coagulation helps to keep the semen in the vagina and cervix, preventing it from immediately leaking out and thereby greatly increasing the odds of a successful fertilization. This occurs upon exposure to the air or the alkaline environment of the female reproductive tract, activating clotting factors present in the seminal fluid, including tissue transglutaminase. The transglutaminase converts semenogelin (a major protein in seminal fluid secreted by the seminal vesicles) into a sticky protein called fibrin. Fibrin forms a network-like structure that entraps sperm and other components of the semen. 

If the semen remained in this state, the sperm would be permanently immobile and unable to fertilize the egg. Over time, however, the coagulated semen liquefies due to enzymes present in the fluid that slowly break down the fibrin network, allowing the sperm to move more freely. Anamthathmakula and Winuthayanon note that “The liquefaction process is crucial for the sperm to gain their motility and successful transport to the fertilization site in Fallopian tubes (or oviducts in animals). Hyperviscous semen or failure in liquefaction is one of the causes of male infertility.” [1] In fact, targeting these serine proteases has been suggested as a strategy for novel non-hormonal contraceptives. [2]

From an evolutionary perspective, it is difficult to envision a scenario where semen coagulation evolved without simultaneously having a mechanism for liquefaction. This is a prime example of a non-adaptive intermediate that is prohibitive to evolution by natural selection.

Sperm Capacitation

In order for a sperm cell to fertilize an egg, it has to undergo capacitation. This takes place in the female reproductive tract. The process of capacitation involves a series of biochemical and physiological changes that prepare the sperm for successful interaction with the egg and is crucial in order for the sperm cell to acquire the ability to fertilize.

When sperm are initially ejaculated, they possess certain molecules and proteins on their surface that inhibit their ability to fertilize an egg. During capacitation, these surface molecules, such as cholesterol and glycoproteins, are removed or modified, allowing the sperm to become more receptive to the egg. As capacitation progresses, the motility pattern of sperm also changes. They undergo hyperactivation, which is characterized by increased amplitude and asymmetrical beating of the tail. Hyperactivated sperm exhibit vigorous movements, which help them to navigate through the female reproductive tract and reach the egg. Capacitation also involves changes in the composition and fluidity of the sperm cell membrane. These changes allow the sperm to better interact with the egg’s zona pellucida. The acrosome becomes primed for the acrosome reaction, which releases these enzymes to allow penetration of the egg membrane. 

Capacitation is associated with an increase in calcium ion influx into the sperm. Calcium plays a crucial role in various intracellular signaling processes that are necessary for sperm function and fertilization. For a much more detailed treatment of what is known about the mechanisms of sperm capacitation, there are good reviews of this subject, to which I direct readers. [3, 4]

Conclusion

In summary, various features of the head, middle piece, and flagellum, together with the properties of the seminal fluid, are critical to the sperm cell’s function of reaching and fertilizing an egg. If any one of these parts is not present or fails to function properly, the sperm cell is rendered completely impotent, and reproduction cannot occur. The phenomenon of human reproduction points to a cause with foresight — one that can visualize a foreordained outcome and bring together everything needed to realize that end goal. There is no cause in the universe that is known to have such a capacity of foresight other than intelligent design.

Notes

1. Anamthathmakula P, Winuthayanon W. Mechanism of semen liquefaction and its potential for a novel non-hormonal contraception. Biol Reprod. 2020 Aug 4;103(2):411-426.
2. Ibid.
3. Puga Molina LC, Luque GM, Balestrini PA, Marín-Briggiler CI, Romarowski A, Buffone MG. Molecular Basis of Human Sperm Capacitation. Front Cell Dev Biol. 2018 Jul 27;6:72.
4. Stival C, Puga Molina Ldel C, Paudel B, Buffone MG, Visconti PE, Krapf D. Sperm Capacitation and Acrosome Reaction in Mammalian Sperm. Adv Anat Embryol Cell Biol. 2016;220:93-106.

Probing the dark

 

The real post-fossil future?

 

The future postponed?

 

The future of EVs?

 

On the Caiaphas ossuary.

 

Caiaphas: the Watchtower Society's Commentary.

 Caiaphas:

Joseph Caiaphas was the high priest during Jesus’ earthly ministry. (Lu 3:2) He was the son-in-law of High Priest Annas (Joh 18:13; see ANNAS) and was appointed to office by the predecessor of Pontius Pilate, Valerius Gratus, about the year 18 C.E., although some say as late as the year 26 C.E. He held the office until about the year 36 C.E., longer than any of his immediate predecessors, this being due to his skillful diplomacy and cooperation with Roman rule. He and Pilate were reportedly good friends. Caiaphas was a Sadducee.—Ac 5:17.


A ringleader in the plot to do away with Jesus, Caiaphas prophesied, though not of his own originality, that Jesus would shortly die for the nation, and to that end he gave his wholehearted support. (Joh 11:49-53; 18:12-14) At Jesus’ trial before the Sanhedrin, Caiaphas ripped his garments and said: “He has blasphemed!” (Mt 26:65) When Jesus was before Pilate, Caiaphas was undoubtedly there crying: “Impale him! Impale him!” (Joh 19:6, 11); he was there asking for the release of Barabbas instead of Jesus (Mt 27:20, 21; Mr 15:11); he was there shouting: “We have no king but Caesar” (Joh 19:15); he was also there protesting the sign over Jesus’ head: “The King of the Jews” (Joh 19:21).


The death of Jesus did not mark the end of Caiaphas’ role as a chief persecutor of infant Christianity. The apostles were next haled before this religious ruler; they were sternly commanded to stop their preaching, were threatened, and were even flogged, but to no avail. “Every day in the temple and from house to house they continued without letup,” Caiaphas notwithstanding. (Ac 4:5-7; 5:17, 18, 21, 27, 28, 40, 42) The blood of righteous Stephen was soon added to Jesus’ bloodstains on the skirts of Caiaphas, who also armed Saul of Tarsus with letters of introduction so the murderous campaign could be extended to Damascus. (Ac 7:1, 54-60; 9:1, 2) However, not long thereafter Vitellius, a Roman official, removed Caiaphas from office.

File under "Well said" XCV

 "What gets us into trouble is not what we don't know. It's what we know for sure that just ain't so."

Mark Twain

Rome's Praetorian Guard: a brief history.

The Ottomans' struggle for the "empire of God"

Tom Sowell utters uncommon sense.

On empiricism and Darwinism

 The Naked Ape: An Open Letter to BioLogos on the Genetic Evidence


Dennis Venema, professor of biology at Trinity Western University, has written a series of articles that have been noted by evolutionists for their clarity and persuasiveness. So as a collector of evidences and reasons why evolution is a fact, I was interested to see Venema’s articles. What does the professor have to say to help confirm what Samuel Wilberforce rhetorically called “a somewhat startling conclusion”?

One of Venema’s basic points is that the genomes of different species are what we would expect if they evolved. Allied species have similar genomes, and genetic features fall into evolution’s common descent pattern:

If indeed speciation events produced Species A – D from a common ancestral population, we would expect their genomes to exhibit certain features when compared to each other. First and foremost, their overall genome sequence and structure should be highly similar to each other – they should be versions of the same book, with chapters and paragraphs of shared text in the same order. Secondly, the differences between them would be expected to fall into a pattern.

Does the evidence confirm these evolutionary expectations? Venema answers with an emphatic “yes.”

Here Venema is appealing to the empirical evidence. He is comparing the evidence to the theory of evolution, and finding that the evidence confirms evolution’s predictions. This means the theory can be empirically evaluated. And if evolution can be genuinely evaluated empirically, then it is, at least theoretically, possible for evolution to fail. If the evidence can confirm evolution, then it also can disconfirm evolution.
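
To make this concrete, here is a minimal sketch (in Python, using hypothetical toy distances rather than real genomic data) of the standard quantitative sense in which differences “fall into a pattern”: a table of pairwise differences fits some tree exactly if and only if it satisfies the four-point condition.

```python
# A toy check of tree-additivity. The distances are hypothetical values
# chosen to fit the tree ((A,B),(C,D)); they are not real genomic data.
from itertools import combinations

def is_tree_like(d, taxa, tol=1e-9):
    """Four-point condition: for every quartet, the two largest of the
    three pairings of distances must be equal (within tolerance)."""
    for i, j, k, l in combinations(taxa, 4):
        s = sorted([d[i][j] + d[k][l],
                    d[i][k] + d[j][l],
                    d[i][l] + d[j][k]])
        if abs(s[2] - s[1]) > tol:
            return False
    return True

taxa = ["A", "B", "C", "D"]
d = {"A": {"B": 3, "C": 3, "D": 5},
     "B": {"A": 3, "C": 4, "D": 6},
     "C": {"A": 3, "B": 4, "D": 4},
     "D": {"A": 5, "B": 6, "C": 4}}

print(is_tree_like(d, taxa))    # True: the differences fit a tree

d["A"]["C"] = d["C"]["A"] = 1   # a convergence-like anomaly
print(is_tree_like(d, taxa))    # False: no tree fits these differences
```

Any quartet that violates the condition means that no tree, however its branches are arranged, can reproduce the measured differences; this is the sense in which the tree expectation is empirically testable and, in principle, falsifiable.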

This focus on the evidence is important because it means the nonscientific arguments go away and science is allowed to speak. What does it say? Here I will take the opposing view, for it seems that what the science shows is that Venema’s claim, that the genetic evidence confirms evolutionary predictions, is inaccurate.

This is not to say that evolutionary explanations cannot be offered. As philosophers well understand, another sub-hypothesis is always possible. Such hypotheses raise more profound questions of parsimony, likelihood, and so forth. But it seems that such philosophical questions ought to be addressed after there is a consensus on what the empirical evidence has to say. The goal here is to move toward that consensus. Venema, and evolutionists in general, make a straightforward claim about the evidence. We ought to be able to dispassionately evaluate that claim.

Of course I realize that reaching consensus is not as simple as reading an article. There will be differing interpretations by fair-minded critics. And the topic of origins is certainly not always dispassionate. If you argue against evolution you will be disparaged. My response to such attacks has been, and always will be, to forgive.

One final preliminary is simply to point out that it is a challenge just to do justice to this story. A thorough treatment could easily require an entire volume. But a few typical examples will have to suffice. They can provide readers with an approximate understanding of how the evidence bears on Venema’s claim.

What does the evidence say?

For starters, phylogenetic incongruence is rampant in evolutionary studies. Genetic sequence data do not fall into the expected evolutionary pattern. Conflicts exist at all levels of the evolutionary tree and throughout both morphological and molecular traits. This paper reports on incongruent gene trees in bats. That is one example of many.

MicroRNAs are short RNA molecules that regulate gene expression, for example, by binding to messenger RNA molecules which otherwise would code for a protein at a ribosome. Increasingly, microRNAs are understood to be lineage-specific, appearing in a few species, or even in just a single species, and nowhere else. In fact one evolutionist, who has studied thousands of microRNA genes, explained that he has not found “a single example that would support the traditional [evolutionary] tree.” It is, another evolutionist admitted, “a very serious incongruence.”

Trichodesmium or “sea sawdust,” a genus of oceanic bacteria described by Captain Cook in the eighteenth century and so prolific it can be seen from space, has a unique, lineage-specific genome. Less than two-thirds of the genome of this crucial ammonium-producing bacterium codes for proteins. No other such bacterium has such a low value, or, conversely, such a large fraction of noncoding sequence. This lineage-specific genome, as one report explains, “defies common evolutionary dogma.”

It is not unusual for similar species to have significant differences in their genomes. These results have surprised evolutionists, and there does not seem to be any letup as new genomes are deciphered.

The mouse and rat genomes are far more different than expected. Before the rat genome was determined, evolutionists predicted it would be highly similar to the mouse genome. As one paper explained:

Before the launch of the Rat Genome Sequencing Project (RGSP), there was much debate about the overall value of the rat genome sequence and its contribution to the utility of the rat as a model organism. The debate was fuelled by the naive belief that the rat and mouse were so similar morphologically and evolutionarily that the rat sequence would be redundant.

The prediction that the mouse and rat genomes would be highly similar made sense according to evolution. But it was dramatically wrong.

One phylogenetic study attempted to compute the evolutionary tree relating a couple dozen yeast species using 1,070 genes. The tree that uses all 1,070 genes is called the concatenation tree. The researchers then repeated the computation 1,070 times, once for each gene taken individually. Not only did none of the 1,070 single-gene trees match the concatenation tree, they also failed to show even a single match among themselves. In other words, out of the 1,071 trees, there were zero matches. It was “a bit shocking” for evolutionists, as one explained: “We are trying to figure out the phylogenetic relationships of 1.8 million species and can’t even sort out 20 yeast.”
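
For readers who want to see what “matching trees” means operationally, here is a minimal, dependency-free sketch of the Robinson-Foulds comparison that such studies commonly use: two unrooted trees match only when they imply exactly the same set of bipartitions. The five-taxon trees below are hypothetical stand-ins, not the actual yeast data.

```python
# A sketch of Robinson-Foulds tree comparison. Trees are nested tuples;
# the five taxa and both topologies are made up for illustration.

def splits(tree, taxa):
    """Collect the nontrivial bipartitions implied by a nested-tuple tree."""
    found = set()
    def walk(node):
        if isinstance(node, str):           # a leaf
            return frozenset([node])
        clade = frozenset()
        for child in node:
            clade |= walk(child)
        if 1 < len(clade) < len(taxa) - 1:  # ignore trivial splits
            found.add(frozenset([clade, taxa - clade]))
        return clade
    walk(tree)
    return found

def robinson_foulds(t1, t2, taxa):
    """Number of bipartitions found in one tree but not the other."""
    return len(splits(t1, taxa) ^ splits(t2, taxa))

taxa = frozenset("ABCDE")
concat_tree = ((("A", "B"), "C"), ("D", "E"))  # tree from all genes combined
gene_tree   = ((("A", "C"), "B"), ("D", "E"))  # tree from one gene alone

# Zero would mean the two trees match; anything else is incongruence.
print(robinson_foulds(concat_tree, gene_tree, taxa))  # prints 2
```

A study like the one described above simply runs this kind of comparison once per single-gene tree against the concatenation tree; a distance of zero every time is what congruence would look like.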

What is interesting is how this false prediction was accommodated. The evolutionists tried to fix the problem with all kinds of strategies. They removed parts of genes from the analysis, they removed a few genes that might have been outliers, they removed a few of the yeast species, they restricted the analysis to certain genes that agreed on parts of the evolutionary tree, they restricted the analysis to only those genes thought to be slowly evolving, and they tried restricting the gene comparisons to only certain parts of the gene.

These various strategies each have their own rationale. That rationale may be dubious, but at least there is some underlying reasoning. Yet none of these strategies worked. In fact they sometimes exacerbated the incongruence problem. What the evolutionists finally had to do, simply put, was to select the subset of the genes that gave the right evolutionary answer. They described those genes as having “strong phylogenetic signal.”

And how do we know that these genes have strong phylogenetic signal? Because they gave the right answer. This raises the general problem of prefiltering of data. Prefiltering is often thought of merely as cleaning up the data. But prefiltering is more than that, for built into the prefiltering steps is the theory of evolution. Prefiltering massages the data to favor the theory. The data are, as philosophers explain, theory-laden.

But even prefiltering cannot always help the theory. For even cleansed data routinely lead to evolutionary trees that are incongruent (the opposite of consilience). As one study explained, the problem is so confusing that results “can lead to high confidence in incorrect hypotheses.” As one paper explained, data are routinely filtered in order to satisfy stringent criteria so as to eliminate the possibility of incongruence. And although evolutionists thought that more data would solve their problems, the opposite has occurred. With the ever-increasing volumes of data (particularly molecular data), incongruence between trees “has become pervasive.”

What is needed now is less data. Specifically, less contradictory data. As one evolutionist explained, “if you take just the strongly supported genes, then you recover the correct tree.” And what are “strongly supported” genes? Those would be genes that cooperate with the theory. So now, in addition to prefiltering, we have postfiltering.
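
The circularity alleged here is easy to exhibit with a toy simulation. The sketch below uses made-up gene “votes” (it is not any study’s actual pipeline): if genes are retained only when they agree with an assumed reference topology, the surviving genes “confirm” that topology by construction, no matter how discordant the full data set was.

```python
# A toy illustration of filtering-to-fit. All data here are fabricated:
# each "gene" simply votes for one of three possible topologies at random.
import random
random.seed(1)

TOPOLOGIES = ["((A,B),C)", "((A,C),B)", "((B,C),A)"]
reference = TOPOLOGIES[0]   # the topology the analyst expects

genes = [random.choice(TOPOLOGIES) for _ in range(1000)]

print("fraction agreeing with reference:",
      genes.count(reference) / len(genes))   # about 1/3: massive incongruence

# "Filtering": keep only genes with "strong phylogenetic signal",
# operationally defined as agreement with the expected tree.
kept = [g for g in genes if g == reference]

print("consensus of kept genes:", max(set(kept), key=kept.count))
print("fraction of genes discarded:", 1 - len(kept) / len(genes))
```

The filtered consensus is guaranteed to be the reference tree; the only informative number left is the roughly two-thirds of the data that had to be discarded to get it.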

Another issue is the striking similarities found in otherwise distant species. This so-called convergence is rampant in biology, and it takes several forms.

Consider a paper from the Royal Society on “The mystery of extreme non-coding conservation,” a phenomenon found across many genomes. As the paper explains, there is currently “no known mechanism or function that would account for this level of conservation at the observed evolutionary distances.” Here is how the paper summarizes these findings of extreme sequence conservation:

despite 10 years of research, there has been virtually no progress towards answering the question of the origin of these patterns of extreme conservation. A number of hypotheses have been proposed, but most rely on modes of DNA : protein interactions that have never been observed and seem dubious at best. As a consequence, not only do we still lack a plausible mechanism for the conservation of CNEs—we lack even plausible speculations.

And these repeated designs, in otherwise different species, are rampant in biology. They are not rare occurrences that evolution could perhaps explain away as outliers. That the species do not fall into an evolutionary tree pattern is well established by science.

Furthermore, these repeated designs do not merely occur twice, in two distant species. They often occur repeatedly in a variety of otherwise distant species. So now the evolutionist must not only believe that there are many of these repeating design events, but that in most cases, they repeat multiple times, in disparate species.

Evolutionists have labeled this evidence as recurrent evolution. As a recent paper explains:

The recent explosion of genome sequences from all major phylogenetic groups has unveiled an unexpected wealth of cases of recurrent evolution of strikingly similar genomic features in different lineages.

In addition, many instances of a third, more puzzling, phylogenetic pattern have been observed: traits whose distribution is “scattered” across the evolutionary tree, indicating repeated independent evolution of similar genomic features in different lineages.

If the pattern fits the evolutionary tree, then it is explained as common evolutionary history. If not, then it is explained as common evolutionary forces.

With all of this contradictory evidence, even evolutionists have realized in recent years that the traditional evolutionary tree model is failing. As one evolutionist explained, “The tree of life is being politely buried.”

There are many more fascinating examples of biological patterns that are not consistent with the expected evolutionary pattern. These are not anomalies or rare exceptions. Here we have focused on the genetic level, since that was the theme of Venema’s article. It seems that, contrary to the claims of evolutionists such as Venema, the species and their genomes do not fall into a consistent evolutionary pattern. This does not mean evolutionists cannot explain any of this. They have a wide spectrum of mechanisms to draw upon, of varying levels of speculation and likelihood. These explanatory mechanisms greatly increase the theory’s complexity. They raise questions of realism, and of whether the theory is following the data or the data are following the theory. But such questions are for another day.

The point here is that the evolutionists’ claim that the genomic data broadly and consistently fall into the evolutionary pattern does not seem to reflect the empirical data. This is the first step in moving the discourse forward. We need to reach consensus on what the evidence reveals.

Next time I will continue with an examination of the next evidences Venema presents.