
Sunday 17 September 2017

Occam's razor too dull for Darwinism?

With Two New Fossils, Evolutionists Rewrite Narratives to Accommodate Conflicting Evidence
Günter Bechly 


Two new fossils, described in August and September 2017, have again forced evolutionists to rewrite their fanciful narratives of how major transitions in the history of life occurred. In this case the new fossils threw into disarray, respectively, the accounts of the origin of tetrapod land vertebrates and of the origin of bird feathers and flight.

The first fossil, described by Lefèvre et al. (2017), is a feathered dinosaur named Serikornis sungei (nicknamed “Silky”), which lived about 160 million years ago during the Late Jurassic. Found in China’s Liaoning province, it is a beautifully preserved complete animal with visible dino-fuzz covering its body. It was about the size of a pheasant, and its morphology suggests that it was unable to fly and “spent its life scampering around on the forest floor” (Pickrell 2017). The most striking feature is that even though its arms and legs bear long feathers, so that the fossil seems to qualify as a member of the four-winged group of “dino-birds” such as Microraptor, Anchiornis, and Xiaotingia, the arms are much too short for wings. The feathers also lack the second-order branches (barbules) of true pennaceous flight feathers.

There are two interesting issues with this remarkable feathered dinosaur (and no, it does not seem to be a forgery like the “missing link” Archaeoraptor, Rowe et al. 2001):

1. The distribution and type of feathers on its body are not consistent with the currently preferred scenario for the evolution of bird feathers and flight. That scenario assumes that long pennaceous feathers on arms and legs originated with arboreal four-winged gliders such as Microraptor (Pickrell 2017).
2. The new phylogenetic tree in the original publication by Lefèvre et al. again reshuffles the feathered dinosaurs and early birds into a new branching pattern, disagreeing with previous trees that, in turn, all disagree with each other. Constructing phylogenetic trees looks more and more like an arbitrary enterprise, evolutionary biology’s equivalent of other pseudoscientific methods such as psychoanalysis or the Rorschach test.
The second fossil discovery, by Zhu et al. (2017), is a new species of lobe-finned fish named Hongyu chowi from the Late Devonian. Discovered at the Shixiagou quarry in northern China, it was about 1.5 metres long and lived 370 to 360 million years ago. One of its describers happens to be the famous Swedish paleontologist Per Ahlberg of Uppsala University, who also made worldwide headlines this month (e.g., “Ancient footprints in Greece trample on the theory of human evolution,” in The Times of London) with the description of 5.7-million-year-old human footprints from Crete (see Bechly 2017).

Barras (2017) announces in New Scientist that this “weird fish fossil changes the story of how we moved onto land.” From the article:


[W]hen the researchers tried to fit H. chowi into the existing evolutionary tree, it didn’t fit easily.

That’s because in some respects, H. chowi looks like an ancient predatory fish called rhizodonts. These are thought to have branched off from lobe-finned fish long before the group gave rise to four-legged land animals.

But Ahlberg says H. chowi has aspects that look surprisingly like those seen in early four-legged animals and their nearest fishy relatives — an extinct group called the elpistostegids. These include the shoulder girdle and the support region for its gill covers.

This implies one of two things, the researchers say. The first possibility is that H. chowi is some sort of rhizodont that independently evolved the shoulders and gill cover supports of a four-legged animal.

Alternatively, the rhizodonts may be more closely related to the four-legged animals and the elpistostegids than we thought. But this would also imply a certain amount of independent evolution of similar features, because the rhizodonts would then sit between two groups that have many features in common – features the two groups would have had to evolve independently. …

The find confirms an earlier suspicion that there was independent or “parallel” evolution between the rhizodonts, the elpistostegids and the first four-legged animals, says Neil Shubin at the University of Chicago.

Thus, this fossil raises two important problems for evolutionary biology:

1. The character distribution is incongruent and implies independent, parallel origins of the same tetrapod-like or rhizodont-like characters (convergence). The alternative explanations of independent origin (homoplasy) versus common origin (homology) of a character trait are not decided on anatomical (dis)similarities alone but mainly on (in)congruence with other data. The same data that are considered evidence of convergence can become evidence for common ancestry when you switch positions in the tree, and vice versa. What most evolutionary biologists have exorcised from their minds is that such incongruences (homoplasies) are not per se evidence for evolution, as some evolutionists boldly proclaim (Wells 2017), but instead prima facie conflicting evidence against it (Hunter 2017). Convergence, which Lee Spetner has called “even more improbable than evolution itself” (Klinghoffer 2017), and other incongruent similarities have to be explained away with ad hoc hypotheses. In past decades, convergence morphed from an inconvenient exception to the rule into a ubiquitous phenomenon, found virtually everywhere in living nature. In his book Life’s Solution, paleontologist Conway Morris (2003) felt compelled to declare it a kind of necessary natural law. It thus cannot really be considered a success story for the Darwinian paradigm.
2. Rhizodontids, the group to which this fossil fish belongs, are believed to have branched off early from the lobe-fin–tetrapod lineage, more than 415 million years ago. However, the oldest fossils are dated to only 377 million years ago, implying a so-called “ghost lineage” of 38 million years during which the group should have existed but left no fossil record at all. Such “ghost lineages” are one of the many instances of discontinuity in the fossil record and require ad hoc assumptions in order to be accommodated by evolutionary storytelling.
These two new fossils represent further evidence conflicting with previously accepted evolutionary narratives. But thank God evolutionary theory can easily adapt to such inconvenient evidence, simply by rewriting the story. That way, the new evidence fits perfectly.

Dubious procedures like these would be unthinkable in other natural sciences, such as physics. They call into question whether evolutionary biology really qualifies as a hard science at all. Arguably it is not a testable theory, or even a well-defined one, but merely a loose collection of narratives that are forged to fit the evidence — any evidence whatsoever.

Literature:

Barras C 2017. Weird fish fossil changes the story of how we moved onto land. New Scientist September 4, 2017.
Bechly G 2017. Fossil Footprints from Crete Deepen Controversy on Human Origins. Evolution News September 6, 2017.
Carassava A 2017. Ancient footprints in Greece trample on the theory of human evolution. The Times September 4, 2017.
Conway Morris S 2003. Life’s Solution: Inevitable Humans in a Lonely Universe. Cambridge University Press.
Hunter C 2017. The Real Problem With Convergence. Evolution News May 25, 2017.
Klinghoffer D 2017. “Convergent Evolution Is Even More Improbable than Evolution Itself.” Evolution News September 5, 2017.
Lefèvre U, Cau A, Cincotta A, Hu D, Chinsamy A, Escuillié F, Godefroit P 2017. A new Jurassic theropod from China documents a transitional step in the macrostructure of feathers. The Science of Nature 104:74.
Pickrell J 2017. New Feathered Dinosaur Had Four Wings but Couldn’t Fly. National Geographic August 28, 2017.
Rowe T et al. 2001. Forensic palaeontology: The Archaeoraptor forgery. Nature 410:539–540.
Wells J 2017. Zombie Science: Jonathan Wells on Convergence Versus Common Ancestry. Evolution News June 28, 2017.
Zhu M, Ahlberg PE, Zhao W-J, Jia L-T 2017. A Devonian tetrapod-like fish reveals substantial parallelism in stem tetrapod evolution. Nature Ecology & Evolution.

Talk about fighting for your life!

Wesley Smith Visits Jahi McMath
David Klinghoffer | @d_klinghoffer  

Jahi McMath is a neurologically disabled young woman who, like Schrödinger’s cat, is, or has been, both dead and alive. Our colleague Wesley Smith visited the patient and offers a powerful account for First Things.

In California, Jahi McMath is legally dead. In New Jersey, she is legally alive. Now, the deceased — or profoundly disabled — teenager is the subject of litigation that could make history.
Wesley witnessed what appeared to be a response on Jahi’s part to a request that she move her thumb and index finger — “I nearly jumped out of my shoes,” he reports. I think I would, too.

He has changed his mind about her status.

If Jahi is not — or, perhaps better, no longer — brain dead, this may be an unprecedented event, as there are no known cases of a properly diagnosed brain-dead patient experiencing restored neurological function. And I am stunned that the medical and bioethics communities generally show such a pronounced lack of curiosity about Jahi’s situation. True, there have been rare cases of the bodies of brain-dead people not deteriorating over time. But surely the other factors described by [Dr. Alan] Shewmon and the videos should pique their interest.

Perhaps it is just a case of “experts” not wanting to know — because if Jahi isn’t dead, it would have epochal legal, social, medical, and scientific ramifications. But so what? Jahi deserves justice. If alive, she is a full and equal member of the moral community.

I hope that several prominent neurologists without a stake in the situation will step forward and volunteer to examine Jahi — and not just for a day or two but over an extended period of time, to test her brain and body functions thoroughly and determine whether she does indeed respond to requests. Then, if she lacks even one criterion for brain death, Jahi’s California death certificate should be revoked — let the chips fall where they may.

No longer brain dead? This is quite remarkable and testifies among other things to the tenacious commitment of Jahi’s mother, Nailah, whom Wesley also interviewed, to “choose life” on behalf of her daughter. Read the rest here.

Yet more on the undeniability of the case for design.

The Unmistakable Imprint of Purpose — Response to a Theistic Evolutionist
Douglas Axe | @DougAxe 


I’ve been discussing my book Undeniable with Hans Vodder, who favors the evolutionary explanation of life. In our fifth exchange, Hans referred to what has been called a “natural nuclear reactor.” Whatever it was, it seems to have existed eons ago in the rock formations of what is now the Oklo region of Gabon. Hans thinks this so-called reactor may have exhibited the kind of functional coherence I point to as a hallmark of invention, making it a noteworthy counterexample to my argument.

I responded by suggesting that reactor is an overblown term for what was really nothing more than a reaction. Back in its day, that Oklo reaction required only: 1) a moderately large uranium deposit, and 2) a source of water to percolate through it. This, I said, doesn’t qualify as high-level functional coherence.

I brought up the comparison to an adjustable wrench — a very modest example of functional coherence. How is it, I asked, that people expected natural nuclear reactions to be found but no one expects a natural adjustable wrench to be found?

Here is Hans’s reply:

Your two-condition assessment of the requirements for the reactor seems a little minimalistic. The Kuroda study (cited in the Scientific American article) mentions four general conditions, and Maynard Smith and Szathmary describe further particulars in The Major Transitions in Evolution. The latter authors mention things like the 45-degree tilt of the sandstone and underlying granite layers, which allowed the reactions to occur in a self-sustaining manner; the increased solubility of oxidized uranium, which helped it accumulate in the delta in the first place; and so on. So whether something is considered functionally coherent might depend largely on how relevant conditions are assessed.

But suppose we go with the two-condition assessment. There still doesn’t seem to be anything in the bare definition of functional coherence — “the hierarchical arrangement of parts needed for anything to produce a high-level function — each part contributing in a coordinated way to the whole” (144) — that would exclude Oklo. Consider this adaptation of figure 9.3 from your book:




A critic might reasonably argue that Oklo qualifies as functionally coherent. To make the case more “extensive,” all said critic would have to do is add a few more components or break down the current ones into further detail.

Now, I suspect this doesn’t really capture the property of “functional coherence” that Undeniable is after. But here’s the rub: it does seem, so far as I can tell, to satisfy the definitional criteria laid out in the book. If that’s right, then one of two conclusions seems to follow. Either:

a) Oklo is a genuine case of functional coherence, with the result that nature can produce at least some functional coherence (pace the “Summary of the Argument” on p. 160 of Undeniable), or

b) Oklo is only a superficial case of functional coherence, with the result that further criteria are needed to distinguish Oklo-type cases from genuine ones.

Sure. Going strictly from my bare definition, one could easily call things functionally coherent that, as you say, don’t really capture the sense of the term as used in Undeniable.

So, I agree with your b option, except I would say context is needed instead of criteria, and I think the book supplies that context. In other words, my definition of functional coherence should make sense to readers who have followed the discussion up to the point where that term is introduced (more on that in a moment).

With respect to Oklo, keep in mind that Maynard Smith and Szathmary, being evolutionary biologists, wanted to “see some parallels with the origin of life” (p. 20 of their book). This raises the possibility that they saw what wasn’t really there.

Here’s a more objective account of the requirements for a uranium-235 fission chain reaction from the Department of Physics and Astronomy at Georgia State University:

If at least one neutron from U-235 fission strikes another nucleus and causes it to fission, then the chain reaction will continue. If the reaction sustains itself, it is said to be “critical,” and the mass of U-235 required to produce the critical condition is said to be a “critical mass.” A critical chain reaction can be achieved at low concentrations of U-235 if the neutrons from fission are moderated to lower their speed, since the probability for fission with slow neutrons is greater.
Strictly speaking, then, all that’s needed for a sustained nuclear chain reaction is a critical mass of uranium-235. Uranium is number 92 on the periodic table of elements, so it literally belongs on the bottom layer of the hierarchy in that figure (you can’t get more elementary than an element). Despite all the fuss over Oklo, then, it really didn’t rise above the behavior of that one element.
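The criticality condition in the quoted passage is easy to see in a toy expected-value model. This is an illustrative sketch only, not a model of Oklo; k_eff (the average number of further fissions that each fission neutron triggers) is the standard effective multiplication factor, and the starting numbers are arbitrary:

```python
def neutron_population(k_eff, n0=1000.0, generations=20):
    """Expected neutron count per generation in a toy chain-reaction model:
    each generation, every neutron triggers k_eff further fissions on
    average, so the population is multiplied by k_eff at each step."""
    pops = [n0]
    for _ in range(generations):
        pops.append(pops[-1] * k_eff)
    return pops

# k_eff < 1: subcritical, the chain dies out.
# k_eff = 1: critical, self-sustaining.
# k_eff > 1: supercritical, the chain grows.
for k in (0.9, 1.0, 1.1):
    final = neutron_population(k)[-1]
    print(f"k_eff = {k}: {final:.0f} neutrons after 20 generations")
```

Moderation is just what pushes k_eff up: slowing neutrons raises the per-neutron fission probability, so a moderated deposit can reach k_eff = 1 even at low U-235 concentration.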

Interestingly, Maynard Smith and Szathmary acknowledged the obvious fact that “the Oklo reactor does not look like a man-made reactor.” My question is — what makes this so obvious?

The answer is hinted at by the way these authors inadvertently portray the Oklo reaction as though it had been intended. For example, they say “some moderator material was needed to slow down neutrons.”

Needed?

The raw fact is merely that water happened to be present as a moderator, causing the neutrons to be slowed. To say that water was needed is to imply not just that something important was at stake but that this was somehow recognized at the time — as though the Oklo reaction had been arranged for a purpose.

Neither Maynard Smith nor Szathmary believed that for a moment, but somehow they couldn’t keep themselves from implying it. Their next sentence reads: “Perhaps most surprising of all, the reaction had to be self-regulating.” Really? Who says it had to be self-regulating? And what definition of “regulate” are we using here? All the usual definitions invoke purpose.

Returning to raw facts, the evidence suggests that the rate of fission at Oklo may have oscillated, peaking when water was present and dropping when it was absent. But again, so what? Fission chain reactions are always limited. They end when critical mass is exhausted. So the suggestion that Oklo was “regulated” is just plain odd. Regulated to what end?

By the time I introduce readers of Undeniable to functional coherence, they know the book is all about purpose. In that context, they know purpose is at the heart of this term. Our design intuition tells us that “tasks that we would need knowledge to accomplish can be accomplished only by someone who has that knowledge.” Despite their exaggeration, Maynard Smith and Szathmary knew full well that the Oklo reaction wasn’t an accomplishment. Indeed, we can’t see Oklo as a task that required things to be cleverly arranged because it shows absolutely no sign of having been that.

With sufficient cleverness and determination, the elements from the periodic table can be arranged to accomplish tasks that aren’t even hinted at by the properties of the elements themselves — challenges whose solutions have to be dreamed up by fertile imaginations. We instantly spot the fruits of those imaginations by spotting their characteristic functional coherence. Dragonflies. Smartphones. Nuclear power plants. Even adjustable wrenches.

I’m sure this reasoning can be formalized, Hans, but do keep in mind that Undeniable isn’t meant to be that. Undeniable shows in a commonsensical way how it is that no one confuses things like radioactive rocks for inventions and, conversely, how no one confuses things like adjustable wrenches for accidents.

Nothing you’ve said so far challenges that main thesis, Hans. So if we’re in agreement there, I’m wondering why you think life should be exempt from reasoning that works everywhere else.

Saturday 16 September 2017

Intelligence is not magical.

Intelligent Design and Methodological Naturalism — No Necessary Contradiction
Evolution News @DiscoveryCSC

Another correspondent draws our attention to a comment from atheist and “poetic naturalist” Sean Carroll, in his recent book The Big Picture: On the Origins of Life, Meaning, and the Universe Itself.

Science should be interested in determining the truth, whatever that truth may be – natural, supernatural, or otherwise. The stance known as methodological naturalism, while deployed with the best of intentions by supporters of science, amounts to assuming part of the answer ahead of time. If finding truth is our goal, that is just about the biggest mistake we can make.

Such a statement may or may not be surprising to you, considering the source. Well, what about it? Methodological naturalism (MN) in relationship to intelligent design has been a source of some discussion and confusion over the years.

A Reasonable Definition
A reasonable definition of MN is: “The belief that, whether or not the supernatural exists, we must pretend that it doesn’t when practicing science.” This idea was neatly expressed in a letter to the editor published in Nature:

Even if all the data point to an intelligent designer, such an hypothesis is excluded from science because it is not naturalistic.

(Scott C. Todd, “A view from Kansas on that evolution debate,” Nature, Vol. 401:423 (Sept. 30, 1999))

For a list of other sources similarly claiming MN is a requirement of science, please see “Primer: Naturalism in Science.”

Now obviously, many critics of the theory of intelligent design (ID) maintain that ID isn’t science because MN, they claim, is a “rule” to which all science must conform. For a moment, leave aside whether MN makes a good “rule” for doing science. Let’s approach the question of whether ID is science pragmatically, and, for the sake of argument, let us assume MN. Even if we accept the rationale behind MN, ID is not excluded from being scientific.

ID Doesn’t Violate the Letter of MN
MN states that science cannot appeal to the supernatural. But ID does not appeal to the supernatural, and thus does not require non-natural causes.

ID begins with observations of the types of information and complexity produced by intelligent agents. Intelligent agents are natural causes that we can understand by studying the world around us. This makes intelligent agency a proper subject of scientific study. When ID finds high levels of complex and specified information, or CSI, in nature, the most it can infer is that intelligence was at work. Because ID respects the limits of scientific inquiry, it does not make claims beyond the data by trying to identify the designer.

Stephen Meyer explains:

Though the designing agent responsible for life may well have been an omnipotent deity, the theory of intelligent design does not claim to be able to determine that. Because the inference to design depends upon our uniform experience of cause and effect in this world, the theory cannot determine whether or not the designing intelligence putatively responsible for life has powers beyond those on display in our experience. Nor can the theory of intelligent design determine whether the intelligent agent responsible for the information in life acted from the natural or the “supernatural” realm. Instead, the theory of intelligent design merely claims to detect the action of some intelligent cause (with power, at least, equivalent to those we know from experience) and affirms this because we know from experience that only conscious, intelligent agents produce large amounts of specified information.

(Stephen C. Meyer, Signature in the Cell, pp. 428-429 (HarperOne, 2009))

Many other ID proponents have pointed out that ID only appeals to intelligent causes, not supernatural ones. Michael Behe writes:

[A]s regards the identity of the designer, modern ID theory happily echoes Isaac Newton’s phrase hypothesis non fingo [“I frame no hypothesis”].

(Michael Behe, “The Modern Intelligent Design Hypothesis,” Philosophia Christi, 2 (3): 165 (2001))

William Dembski and Jonathan Wells explain:

Supernatural explanations invoke miracles and therefore are not properly part of science. Explanations that call on intelligent causes require no miracles but cannot be reduced to materialistic explanations.

(William Dembski and Jonathan Wells, The Design of Life: Discovering Signs of Intelligence in Biological Systems, pp. 13-14 (FTE, 2008))

Likewise, an early ID textbook affirms MN, stating:

[I]ntelligence…can be recognized by uniform sensory experience, and the supernatural…cannot.

(Percival Davis and Dean H. Kenyon, Of Pandas and People, p. 126 (FTE, 1993))

Now some might argue that ID violates MN by leaving open the possibility of a supernatural designer. It is true that ID leaves open such a possibility. But ID does not claim to scientifically detect a supernatural creator. Again, the most ID claims to detect is intelligent causation. Many (though not all) ID proponents may believe the designer is God, but they do not claim this is a scientific conclusion of ID. In this respect, ID is no different from Darwinian evolution, which claims that if there is a supernatural creator, that would be beyond science’s power to detect.

ID Doesn’t Offend the Spirit of MN
Proponents of MN often justify this “rule” by arguing that it ensures that science uses only testable, predictable, and reliable explanations. However, ID generates testable hypotheses based upon our knowledge of how the world works, and design can be reliably inferred through the scientific method. In this way, intelligent design does not violate any mandate of predictability, testability, or reliability laid down for science by MN.

For details on how ID makes testable predictions, please see the following:


We’ve remarked on this before, but clarification is always in order.

Saturday 9 September 2017

Is it time to hit the reset button re: origin-of-life science?

Origin-of-Life Research: Start Over


More just so stories to explain away human exceptionalism.


From The Economist story, “Of bairns and brains”:
Babies are born helpless, which might explain why humans are so clever


HUMAN intelligence is a biological mystery. Evolution is usually a stingy process, giving animals just what they need to thrive in their niche and no more. But humans stand out. Not only are they much cleverer than their closest living relatives, the chimpanzees, they are also much cleverer than seems strictly necessary. The ability to do geometry, or to prove Pythagoras’s theorem, has turned out to be rather handy over the past few thousand years. But it is hard to imagine that a brain capable of such feats was required to survive on the prehistoric plains of east Africa, especially given the steep price at which it was bought. Humans’ outsized, power-hungry brains suck up around a quarter of their body’s oxygen supplies.
Sexy brains
There are many theories to explain this mystery. Perhaps intelligence is a result of sexual selection. Like a peacock’s tail, in other words, it is an ornament that, by virtue of being expensive to own, proves its bearers’ fitness. It was simply humanity’s good fortune that those big sexy brains turned out to be useful for lots of other things, from thinking up agriculture to building internal-combustion engines. Another idea is that human cleverness arose out of the mental demands of living in groups whose members are sometimes allies and sometimes rivals.
Now, though, researchers from the University of Rochester, in New York, have come up with another idea. In Proceedings of the National Academy of Sciences, Steven Piantadosi and Celeste Kidd suggest that humans may have become so clever thanks to another evolutionarily odd characteristic: namely, that their babies are so helpless.
Compared with other animals, says Dr Kidd, some of whose young can stand up and move around within minutes of being born, human infants take a year to learn even to walk, and need constant supervision for many years afterwards. That helplessness is thought to be one consequence of intelligence—or, at least, of brain size. In order to keep their heads small enough to make live birth possible, human children must be born at an earlier stage of development than other animals. But Dr Piantadosi and Dr Kidd, both of whom study child development, wondered if it might be a cause of intelligence as well as a consequence.
Their idea is that helpless babies require intelligent parents to look after them. But to get big-brained parents you must start with big-headed—and therefore helpless—babies. The result is a feedback loop, in which the pressure for clever parents requires ever-more incompetent infants, requiring ever-brighter parents to ensure they survive childhood.
It is an elegant idea. The self-reinforcing nature of the process would explain why intelligence is so strikingly overdeveloped in humans compared even with chimpanzees. It also offers an answer to another evolutionary puzzle, namely why high intelligence developed first in primates, a newish branch of the mammals, a group that is itself relatively young. Animals that lay eggs rather than experiencing pregnancy do not face the trade-off between head size at birth and infant competence that drives the entire process.
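The loop's self-reinforcing character can be caricatured in a few lines of code. To be clear, this is a hypothetical toy for illustration only, not the researchers' actual model (the article does not describe its details); the coupling constant and starting values are arbitrary:

```python
def runaway_loop(generations=30, brain=1.0, helpless=1.0, coupling=0.05):
    """Toy caricature of the proposed feedback: helpless infants select for
    brainier parents, and bigger brains force earlier (more helpless) births.
    Each generation, each trait nudges the other upward."""
    history = [(brain, helpless)]
    for _ in range(generations):
        brain *= 1 + coupling * helpless / (1 + helpless)
        helpless *= 1 + coupling * brain / (1 + brain)
        history.append((brain, helpless))
    return history

h = runaway_loop()
# Growth accelerates: each trait's growth rate rises as the other grows.
print(f"relative brain size after 30 generations: {h[-1][0]:.2f}")
```

Egg-laying animals, as the article notes, never enter such a loop, since without live birth there is no trade-off between head size and infant competence to drive it.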
To test their theory, Dr Piantadosi and Dr Kidd turned first to a computer model of evolution. This confirmed that the idea worked, at least in principle. They then went looking for evidence to support the theory in the real world. To do that they gathered data from 23 different species of primate, from chimps and gorillas to the Madagascan mouse lemur, a diminutive primate less than 30cm long.
The scientists compared the age at which an animal weaned its young (a convenient proxy for how competent those young were) with their scores on a standardised test of primate intelligence. Sure enough, they found a strong correlation: across all the animals tested, weaning age predicted about 78% of the eventual score in intelligence. That correlation held even after controlling for a slew of other factors, including the average body weight of babies compared with adults or brain size as a percentage of total body mass.
The researchers point to other snippets of data that seem to support their conclusions: a study of Serbian women published in 2008, for instance, found that babies born to mothers with higher IQs had a better chance of surviving than those born to low-IQ women, which bolsters the idea that looking after human babies is indeed cognitively taxing. But although their theory is intriguing, Dr Piantadosi and Dr Kidd admit that none of this adds up to definitive proof.

That, unfortunately, can be the fate of many who study human evolution. Any such feedback loop would be a slow process (at least as reckoned by the humans themselves), most of which would have taken place in the distant past. There are gaps in the theory, too. Even if such a process could drastically boost intelligence, something would need to get it going in the first place. It may be that some other factor—perhaps sexual selection, or the demands of a complex environment, or some mixture of the two—was required to jump-start the process. Dr Piantadosi and Dr Kidd’s idea seems a plausible addition to the list of explanations. But unless human intelligence turns out to be up to the task of building a time machine, it is unlikely that anyone will ever know for sure. 

Re: Darwinism: how many trials, how many errors?

What is the maximum number of trials evolution could have performed?
 Kirk Durston

There are countless people who use the following rationale to justify why there was no need for an intelligent creator behind life – evolution has had a near-infinite number of trials in which to create the full diversity of life, including its molecular machines, molecular computers, and digitally encoded genomes. Here, we will take an opportunity to examine these points more closely.

In other scientific disciplines, the first step one must take before figuring out a solution is to establish the boundary conditions within which the problem must be solved. Since we should require the same standard of scientific rigour from evolutionary biology, let us calculate an extreme upper limit for the total number of evolutionary trials one could expect over the history of life.

An estimate for the total number of bacteria on earth is 3.17 x 10^30 (1, 2). In comparison, all other life occurs in relatively insignificant numbers, too many orders of magnitude smaller to matter. Nonetheless, to be generous, let us add 0.03 x 10^30 other life forms in order to get 3.2 x 10^30 life forms on the planet (starting from the moment the earth cooled enough to permit this).

The larger the genome, the more opportunities there are for mutations to occur. Let us assume a generous average genome size of 100,000 possible protein coding genes. When I say ‘possible’, I include ‘junk’ DNA as fertile ground for new genes.

Since a mutation can change the sequence of a gene, it is possible for evolution to try different gene sequences in sequence space in order to ‘discover’ a novel, functional protein family.

Let us assume there is a fast mutation rate of 10^-3 mutations per possible gene per replication. Given 10^5 possible genes per organism, each lineage should be able to ‘try out’ 100 new possible gene sequences per generation. To make our evolutionary search more efficient, we will also assume that no sequence was ever tried twice over the entire history of life.

Finally, let us use a replication rate that is fast for nature (one generation every 30 minutes) over a 4-billion-year period, for a total of 7 x 10^13 generations. These very generous parameters allow us to calculate an upper limit for the total number of evolutionary trials over four billion years.

Total number of possible genes sampled per single lineage over 4 billion years = 7 x 10^15

Extreme upper limit for the total number of possible gene sequences sampled for all of life over 4 billion years = 2.2 x 10^46 trials.

I have been extremely generous: this figure is nearly three orders of magnitude above a peer-reviewed estimate of 4 x 10^43 trials for 'an extreme upper limit' (3). Since Dryden's 10^43 is the more conservative, peer-reviewed figure, we will use his estimate instead of mine.
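The arithmetic behind this upper limit is easy to check. Here is a minimal Python sketch using the parameter values assumed above (they are generous assumptions from the text, not measurements):

```python
# Recomputing the extreme upper limit from the stated assumptions.
organisms = 3.2e30                 # concurrent life forms, bacteria-dominated
genes_per_organism = 1e5           # 'possible' protein-coding genes, junk DNA included
mutation_rate = 1e-3               # mutations per possible gene per replication
gens_per_year = 365.25 * 24 * 2    # one replication every 30 minutes

generations = 4e9 * gens_per_year                                      # ~7e13
trials_per_lineage = generations * genes_per_organism * mutation_rate  # ~7e15
total_trials = trials_per_lineage * organisms                          # ~2.2e46

print(f"per lineage: {trials_per_lineage:.1e}")
print(f"all of life: {total_trials:.1e}")
```

Note that the result scales linearly in every parameter, so even relaxing each assumption by another order of magnitude moves the bound only a few powers of ten.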

Stable, functional 3D protein structures are determined by physics, not biology; therefore, we can regard each protein family as a target in sequence space that evolution must find. With 10^43 trials, one would think there would be no problem. Unfortunately, virtually no sequences produce stable, functional 3D structures. For example, RS7 is a universal protein required by all life forms, yet only 1 in 10^100 sequences will produce a functional RS7 protein domain.

Obviously, for evolution to find any functional RS7 sequence, 10^43 trials is woefully inadequate, falling short by 57 orders of magnitude. As I have shown elsewhere, RS7 requires 332 bits to encode, well within the range of what an intelligent mind can produce. What options, then, should we examine?
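The 57-orders-of-magnitude shortfall follows directly from the bit figure: 332 bits corresponds to a target of roughly 1 in 2^332, which is about 1 in 10^100 sequences. A quick check in Python:

```python
import math

bits_rs7 = 332                           # functional information cited for RS7
log10_target = bits_rs7 * math.log10(2)  # ~99.9, i.e. roughly 1 in 10^100
log10_trials = 43                        # Dryden et al.'s extreme upper limit
shortfall = round(log10_target) - log10_trials

print(round(log10_target), shortfall)    # prints: 100 57
```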

1) Novel protein family sequences were discovered through random genetic drift.

2) Novel protein families were discovered via an evolutionary search guided by natural selection.

3) Novel protein family sequences were encoded by an intelligent mind.

As I have already established, option 3) can be scientifically tested and verified, so it serves as a viable explanation. We shall look at options 1) and 2) more carefully in future posts.

References:

(1) K. Lougheed, 'There are fewer microbes out there than you think', Nature (2012).

(2) J. Kallmeyer et al., 'Global distribution of microbial abundance and biomass in subseafloor sediment', Proc. Natl. Acad. Sci. USA (2012) 109, no. 40.

(3) D.T.F. Dryden et al., 'How much of protein sequence space has been explored by life on Earth?', Journal of the Royal Society Interface (2008) 5, 953-956.

Darwinism against the house.

Probability Mistakes Darwinists Make


Several years ago I delivered a lecture at the University of Maine, showing how advances in science increasingly point to an intelligent mind behind biological life. During the question period, a professor in the audience conceded that the probability of evolution "discovering" an average globular protein is vanishingly small. Nonetheless, he insisted that we are surrounded by endless examples of highly improbable events: the exact combination of names and birthdates of the hundred or so people in the audience, for example, was also amazingly improbable. In the ensuing conversation, it became obvious that there was something about probabilities he had not considered.


It takes only a few minutes of searching YouTube to confirm that numerous Darwinists make the same mistake. In one example, a fellow randomly fills a grid of 10 columns and 10 rows with 100 symbols. He then states that the probability of getting that exact combination is 1 chance in 10^157 -- yet he just accomplished this astonishing feat. In another clip, a man shuffles a deck of cards, spreads them out on a table, then repeats this two more times. He states that the probability of getting that exact triple combination of cards is roughly 1 chance in 10^204 -- yet he just did it. Both scenarios are supposed to show that there is nothing special about the low probability of evolution "discovering" the sequence for a novel protein family with a stable 3D structure. Ironically, these examples demonstrate a profound ignorance of the problem.
In clearing up the misconceptions that Darwinists promote, the first step is to clarify what scientists mean when they speak of the infinitesimal probability of evolution "discovering" a sequence for a novel protein. That probability is embedded within an equation published by Hazen et al. (1):
I(Ex) = -log2 [M(Ex)/N]

where

I(Ex) = the information required to code for a functional sequence within a protein family,
M(Ex) = the total number of sequences that are functional, and
N = the total number of possible sequences, functional and non-functional.

Hazen's equation has two unknowns for protein families: I(Ex) and M(Ex). However, I have published a method (2) to solve for a minimum value of I(Ex) using actual data from the Pfam protein families database (3), and have made this software publicly available. We can then solve for M(Ex).
Now, back to the question of what type of probability scientists are interested in. The answer is M(Ex)/N. This ratio gives the probability of finding a functional sequence from a pool of N possibilities in a single trial. To clarify, we are not interested in the probability of getting one specific sequence; any functional sequence will do. Armed with this information, let us see what M(Ex)/N is for the YouTube examples given above.
In the first video, the total number of possibilities is N = 10^157, but what is M(Ex)? In this case, any sequence of symbols would have served as an example. Therefore, M(Ex) = N. The probability M(Ex)/N of obtaining a sequence that serves the purpose is therefore 1. Using Hazen's equation, the functional information required to randomly place the 100 symbols in the grid is 0 bits.
In the second example, the narrator shuffles 52 cards three successive times, then claims the total number of possibilities is N = 10^204. The real question is: what is M(Ex)? How many other sequences of shuffled cards would have served this function? Not surprisingly, any sequence would have sufficed -- again, M(Ex) = N. The probability M(Ex)/N of obtaining three series of card sequences that serve this purpose is exactly 1.
For my lecture at the University of Maine, any combination of people would have been fine, so again M(Ex) = N and M(Ex)/N = 1.
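All three examples reduce to the same computation. Hazen's measure can be expressed as a one-line function (my own illustration, not code from the paper); whenever every outcome counts as a success, M(Ex) = N and the functional information collapses to zero bits:

```python
import math

def functional_information(m_functional: float, n_total: float) -> float:
    """I(Ex) = -log2(M(Ex)/N), after Hazen et al."""
    return -math.log2(m_functional / n_total)

# Symbol grid, shuffled cards, lecture audience: any outcome serves the
# purpose, so M(Ex) = N and no functional information is required.
print(functional_information(1e157, 1e157) == 0)   # prints: True
```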
Now let us do the same thing for a protein, using data from the Pfam database.
I downloaded 16,267 sequences from Pfam for the AA permease protein family. After stripping out duplicates, 11,056 unique sequences remained. Running the resulting multiple sequence alignment through the software mentioned earlier showed that a minimum of 466 bits of functional information is required to code for AA permease. Using Hazen's equation to solve for M(Ex), we find that M(Ex)/N < 10^-140, where N = 20^433. The extreme upper limit for the total number of functional AA permease sequences is M(Ex) = 10^97. The actual value of M(Ex) is certain to be many orders of magnitude smaller, due to site interdependencies, as explained in my paper (2).
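The bits-to-probability conversion in these numbers can be reproduced from the 466-bit figure alone (a sketch, not the published software):

```python
import math

bits = 466                              # functional information for AA permease
log10_ratio = -bits * math.log10(2)     # log10 of M(Ex)/N, ~ -140.3

# Expected functional hits if all of Dryden et al.'s 10^43 trials were
# spent searching for this one family:
log10_expected_hits = 43 + log10_ratio  # ~ -97: effectively zero

print(f"M(Ex)/N ~ 10^{log10_ratio:.0f}")   # prints: M(Ex)/N ~ 10^-140
```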
So what do we see? In a single trial, the probability of obtaining a functional sequence by randomly sequencing codons is effectively 0. Conversely, the probability of producing a non-functional protein is very close to 1. We can therefore predict that evolution will readily produce de novo genes that fail to yield functional, stable 3D structures. Clearly, the Darwinists on YouTube ignore this problem in protein science. If you estimate the extreme upper limit for the total number of mutation events in the entire history of life, using 10^30 life forms, a fast mutation rate, a large genome size, and a fast replication rate, it is less than 10^43. Not surprisingly, that is pathetically underpowered for locating proteins of which only 1 in 10^140 sequences is functional. And it gets far worse, for evolution must "find" thousands of them.
Nonetheless, the scientific literature reveals an unshakable belief that evolution can do the wildest, most improbable things tens of thousands of times over. Consequently, I believe Darwinism has become a religion, specifically a modern form of pantheism, in which nature performs thousands of miracles -- none of which can be reproduced in a lab. On the other hand, if we apply the scientific method for detecting intelligent design discussed here, we see that 466 bits of functional information is a strong marker of an intelligent origin. This test for intelligent design reveals that the most rational position is that the genomes of life contain digital information from an intelligent source.
In a future post, I plan to examine the Darwinists' assumption that if the sequence is assembled step by step, it is much more probable.
References:
(1) R.M. Hazen et al., "Functional information and the emergence of biocomplexity," Proc. Natl. Acad. Sci. USA (2007) 104, suppl. 1.
(2) K.K. Durston et al., "Measuring the functional sequence complexity of proteins," Theor. Biol. Med. Model. (2007) 4:47.
(3) R.D. Finn et al., "The Pfam protein families database: towards a more sustainable future," Nucleic Acids Research (2016) Database Issue 44: D279-D285.