
Sunday, 30 April 2017

Examining a challenge to Life's no free lunch law

The GUC Bug
Winston Ewert December 4, 2015 10:55 AM

In a series of posts, of which this is the third, I am examining criticisms from Joe Felsenstein (University of Washington geneticist) and Tom English (Oklahoma City computer scientist) in response to two arguments for intelligent design: specified complexity and conservation of information. See here for Parts 1 and 2 in the series.

In my previous post, I reviewed the arguments by William Dembski and Robert Marks in their paper "Life's Conservation Law." I showed that the paper is not based on any simplistic claim that all active information must derive from an intelligent source. However, it does argue that all known computer and mathematical models of Darwinian evolution are teleological. Dembski and Marks argued:

In these models, careful tailoring of fitness functions that assist in locating targets is always present and clearly teleological.

If one could demonstrate such a model that lacked teleology, then their claims would be falsified.

In a post at Panda's Thumb, "Fitness surfaces and searches: Dembski, Ewert, and Marks's search for design," Felsenstein and English spend some time discussing a simple greedy search algorithm, which they name the GUC (Greedy Uphill Climber) Bug. English, in his post "The Law of Conservation of Information is defunct," brings it up again. It is a fairly standard hill-climbing algorithm. It begins with a DNA sequence one thousand bases long. In each "generation," it evaluates the three thousand DNA sequences that are one nucleotide substitution away from the current sequence. The best of these is adopted as the new current sequence, and the process repeats.
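
To make the setup concrete, here is a minimal Python sketch of a greedy uphill climber of this kind. The names (`guc_climb`, `random_fitness`) and the hash-based white-noise fitness function are my own illustration, not code from English or Felsenstein, and the sequence length is shortened so the toy runs quickly.

```python
import random

BASES = "ACGT"

def random_sequence(length, rng):
    """A uniformly random DNA sequence of the given length."""
    return "".join(rng.choice(BASES) for _ in range(length))

def neighbors(seq):
    """Yield every sequence one nucleotide substitution away (3 * len(seq) of them)."""
    for i, current in enumerate(seq):
        for base in BASES:
            if base != current:
                yield seq[:i] + base + seq[i + 1:]

def guc_climb(fitness, length, seed=0):
    """Greedy uphill climb: each generation, adopt the fittest one-substitution
    neighbor; stop at a local peak where no neighbor is more fit."""
    rng = random.Random(seed)
    current = random_sequence(length, rng)
    while True:
        best = max(neighbors(current), key=fitness)
        if fitness(best) <= fitness(current):
            return current  # local peak reached
        current = best

# A "white-noise" fitness landscape: each sequence gets an effectively independent
# uniform value in [0, 1), keyed by a hash of the whole sequence (consistent
# within a single run; Python salts string hashes per process).
def random_fitness(seq):
    return random.Random(hash(seq)).random()

# The description above uses length 1000 (3000 neighbors per generation);
# 100 keeps this illustration fast.
peak = guc_climb(random_fitness, length=100)
# With uniform fitness values, a sequence's fitness is roughly the fraction
# of the whole space it outranks.
print(f"local peak outranks roughly {random_fitness(peak):.4%} of all sequences")
```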

English tested the GUC Bug on a random fitness function. It cannot possibly be argued that a random fitness function was carefully tailored to assist in locating a target. Thus, the success of this bug rests clearly on nonteleological grounds. They describe its performance:

Running the bug until it reached a local peak of the fitness surface, where no immediate neighbor is more fit, [Tom English] found that these peaks were typically higher than 99.98% of all points. So even on one of the worst possible fitness surfaces, a GUC Bug does far better than choosing a DNA sequence at random.

However, we have not claimed that a search algorithm like GUC can't do better than choosing a DNA sequence at random. In fact, Dembski and Marks showed that it could and provided a limit on the active information available through such a scheme. In "Conservation of Information in Search: Measuring the Cost of Success," they wrote:

Multiple queries clearly contain more information than a single query. Active information is therefore introduced from repeated queries.

Demonstrating an algorithm using multiple random queries that outperforms a single random query is not at all surprising. It is precisely what Dembski and Marks indicated would happen. The idea that doing better than choosing a DNA sequence at random would prove our case incorrect derives from the mistaken claim that we think all active information must derive from an intelligent source.
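
For a sense of scale, here is a back-of-the-envelope computation of how much active information repeated blind queries can contribute, using the Dembski-Marks definition of active information as log2(q/p), where p is the success probability of a single blind query and q the success probability of the search being credited. The specific numbers are mine, chosen only for illustration; the ceiling of roughly log2(Q) bits for Q independent blind queries follows from q = 1 - (1 - p)^Q ≤ Qp.

```python
from math import log2

# Illustrative numbers, not taken from any of the papers discussed here.
p = 1e-9        # probability that a single blind query hits the target
Q = 50 * 3000   # queries spent by the search (e.g. 50 generations of 3000 evaluations)

# Probability that at least one of Q independent blind queries succeeds.
q = 1 - (1 - p) ** Q

# Active information in the Dembski-Marks sense: log2(q / p).
active_info = log2(q / p)

print(f"q                  = {q:.3e}")
print(f"active information = {active_info:.2f} bits")
print(f"log2(Q)            = {log2(Q):.2f} bits  (ceiling for blind repeated queries)")
print(f"endogenous info    = {-log2(p):.2f} bits (what this toy target requires)")
```

On these assumed numbers, 150,000 blind queries supply only about 17 bits of active information against a target that requires roughly 30 bits, which is the sense in which repeated queries introduce some active information without getting you very far.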

However, does this GUC Bug constitute a nonteleological model of Darwinian evolution, which Dembski and Marks claimed (in "Life's Conservation Law") does not exist? No. It is not a model of Darwinian evolution because it cannot do what is required of Darwinian evolution. Darwinian evolution has to account for finding rare protein folds and complex functional systems. The GUC Bug, operating on a random fitness landscape, does not even come close.

The GUC Bug finds a sequence better than 99.98% of all other sequences, which may sound impressive. But consider, as English and Felsenstein do, this algorithm running for fifty generations. That corresponds to evaluating 150,000 different sequences (fifty generations of three thousand evaluations each). If we simply took 150,000 random genotypes, we'd expect to find one better than about 99.999% of all the other genotypes. The GUC Bug does worse than random queries, because it gets stuck in a local optimum rather quickly. I hardly need to rehearse the insufficiency of even large numbers of random queries to solve biological problems. This model will have an even harder time solving them.
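
A quick check of that arithmetic, in my own framing: the expected quantile of the best of N independent uniform draws is N/(N+1), since N random points split the unit interval into N+1 pieces of expected size 1/(N+1).

```python
# Expected quantile of the best of N independent uniform random draws is N/(N+1).
N = 50 * 3000   # fifty generations of three thousand evaluations each
print(f"best of {N} random sequences: ~{N / (N + 1):.5%} of the space outranked")

# By contrast, how many purely random queries already suffice, in expectation,
# to beat the 99.98% level where the GUC Bug typically stalls?
level = 0.9998
print(f"~{level / (1 - level):.0f} random queries reach the {level:.2%} level")
```

So roughly five thousand purely random samples already match, in expectation, the level the GUC Bug needs 150,000 evaluations to reach.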

However, Felsenstein and English note that a more realistic model of evolution wouldn't have a random fitness landscape. Felsenstein, in particular, argues that "the ordinary laws of physics, with their weakness of long-range interactions, lead to fitness surfaces much smoother than white-noise fitness surfaces." I agree that weak long-range interactions should produce a fitness landscape somewhat smoother than random chance, and such a landscape would thus be a source of some active information.
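
As a toy illustration of that concession (my own construction, not Felsenstein's model), the same kind of greedy climber can be run on two landscapes: a white-noise landscape, and a deliberately smooth additive one where each site contributes to fitness independently. On the additive landscape every beneficial substitution pays off regardless of the rest of the sequence, so the climber walks all the way to the global optimum; on the white-noise landscape it stalls at a nearby local peak.

```python
import random

BASES = "ACGT"
L = 60                     # short sequences so the comparison runs quickly
rng = random.Random(1)

def neighbors(seq):
    for i, cur in enumerate(seq):
        for base in BASES:
            if base != cur:
                yield seq[:i] + base + seq[i + 1:]

def climb(fitness, start):
    """Greedy uphill climb to a local peak (same scheme as the GUC Bug)."""
    current = start
    while True:
        best = max(neighbors(current), key=fitness)
        if fitness(best) <= fitness(current):
            return current
        current = best

start = "".join(rng.choice(BASES) for _ in range(L))

# White-noise landscape: every sequence gets an independent uniform value.
def rugged(seq):
    return random.Random(hash(seq)).random()

# Smooth additive landscape: each site contributes independently to fitness,
# the extreme case of weak interactions between sites.
site_weights = [{b: rng.random() for b in BASES} for _ in range(L)]
def smooth(seq):
    return sum(site_weights[i][b] for i, b in enumerate(seq))

rugged_peak = climb(rugged, start)
smooth_peak = climb(smooth, start)
best_possible = sum(max(w.values()) for w in site_weights)

print(f"white-noise landscape: stalled at the ~{rugged(rugged_peak):.4%} quantile")
print(f"additive landscape:    reached {smooth(smooth_peak):.2f} of a best possible {best_possible:.2f}")
```

The additive landscape here is the zero-interaction extreme; the sketch only shows the direction of the effect, not its size, which is the point at issue below.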

Where we disagree is that I do not think this will be a sufficient source of active information to account for biology. I do not have a proof of this. But neither does Felsenstein have a demonstration that it will produce sufficient active information. What I do have is the observation of existing models of evolution. The smoothness present in those models does not derive from some notion of weak long-range physics, but rather from teleology, as explored in my various papers on them.

The GUC Bug falls within the expectations of active information. It extracts active information through repeated queries. Running on a random fitness landscape, it fails to be a model of evolution, because it performs even worse than random search would have. If run on a smooth landscape, it may be a model of Darwinian evolution. However, in order for it to be a nonteleological model of evolution, that fitness landscape would have to be derived in a nonteleological fashion. It remains to be demonstrated that it is possible to construct such a fitness landscape. Thus far, models of evolution have consistently devised the fitness landscape in a teleological fashion.
