
Sunday 13 November 2016

Exploring the edge of Darwin's world.

Best of Behe: A Quick Reprise of The Edge of Evolution
Michael Behe

Editor's note: In celebration of the 20th anniversary of biochemist Michael Behe's pathbreaking book Darwin's Black Box and the release of the new documentary Revolutionary: Michael Behe and the Mystery of Molecular Machines, we are highlighting some of Behe's "greatest hits." The following was published here on August 20, 2014. Remember to get your copy of Revolutionary now! See the trailer here.

On his blog, Sandwalk, University of Toronto biochemistry professor Laurence Moran expressed uncertainty concerning the basic argument of my book The Edge of Evolution: The Search for the Limits of Darwinism. So for anyone who wants a quick reprise of the book's reasoning, below is a list of annotated bullet points plus some commentary summarizing it.

If the development of some particular adaptive biochemical feature requires more than one specific mutation to an organism's genome, and if the intermediate mutations are deleterious (and to a lesser extent even if they are neutral), then the probability of the multiple mutations randomly arising in a population and co-existing in a single individual so as to confer the adaptation will be many orders of magnitude less than for cases in which a single mutation is required.

The decreased probability means either that a much larger population size of organisms would be required on average to produce the multiple mutations in the same amount of time as needed for a single mutation, or that, for the same population size, a multiple-mutation feature would be expected to require many more generations to appear than a single-mutation one.
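The arithmetic behind these two points can be sketched in a few lines of Python. Both numbers below are illustrative placeholders, not measured values:

```python
# Illustrative sketch of the single- vs. multiple-mutation arithmetic.
mu = 1e-8   # per-site, per-generation point mutation rate (assumed)
N = 1e9     # population size (assumed)

p_single = mu        # one specific site must change
p_double = mu * mu   # two specific sites must change together
                     # (deleterious intermediates, so no stepwise help)

# Expected number of new carriers arising per generation in the population
per_gen_single = N * p_single   # about 10 per generation
per_gen_double = N * p_double   # about 1e-7, i.e., roughly ten million
                                # generations per expected occurrence
```

The gap between the two expectations, not the particular placeholder values, is the point: requiring a second simultaneous mutation multiplies the waiting time by roughly the reciprocal of the mutation rate.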

As a matter of simple population genetics theory, the two points above should be uncontroversial. Now let's look at some empirical data.

In The Edge of Evolution I cited the development of chloroquine resistance in the malaria parasite Plasmodium falciparum as a very likely real-life example of this phenomenon. A recent paper by Summers et al. confirms that two specific mutations are required to confer upon the protein PfCRT the ability to pump chloroquine, which is necessary but may not be sufficient for resistance in the wild.

The best estimate of the per-parasite occurrence of de novo resistance is Nicholas White's value of 1 in 10^20. This number is surely made up of several components, including: 1) the probability of the two required mutations identified by Summers et al. coexisting in a single pfcrt gene; 2) the value of the selection coefficient (which can be thought of as the likelihood that the de novo mutant will successfully recrudesce in a person treated by chloroquine and be transmitted to another person); and 3) the probability of any possible further PfCRT mutation needed to confer chloroquine resistance in the wild coexisting in the same gene with the other mutations.

The known point mutation rate of P. falciparum, combined with the apparent deleterious effect of the required mutations occurring singly, suggests that component 1 from the previous bullet point will account for the lion's share of White's estimate, probably at least a factor of 1 in 10^15 to 1 in 10^16 of it. The other factors would then account for 1 in 10^4 to 1 in 10^5. These values are somewhat flexible, accommodating the uncertainty in our knowledge of the exact values in the wild. In other words, a decrease in our best estimate of the value of one factor can be conceptually offset relatively easily, without affecting the argument, by supposing another factor is larger, so as to arrive at 1 in 10^20.
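The decomposition can be checked in a couple of lines of Python. I take one round-number value from each range quoted in the text:

```python
import math

# Round-number components from the text (one end of each quoted range)
p_two_mutations = 1e-16   # component 1: the two required pfcrt mutations together
p_other_factors = 1e-4    # components 2 and 3 combined (selection coefficient, etc.)

p_total = p_two_mutations * p_other_factors
label = f"1 in 10^{-math.log10(p_total):.0f}"
print(label)  # 1 in 10^20
```

Picking the other ends of the ranges (10^-15 and 10^-5) multiplies out to the same total, which is the flexibility the paragraph describes.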

These last three points, although based on inferences from empirical data rather than just pure theory, should also be pretty uncontroversial. Now let's pass on to the dicier stuff.

Any particular adaptive biochemical feature requiring the same mutational complexity as that needed for chloroquine resistance in malaria is forbiddingly unlikely to have arisen by Darwinian processes and fixed in the population of any class of large animals (such as, say, mammals), because of the much lower population sizes and longer generation times compared to that of malaria. (By "the same mutational complexity" I mean requiring 2-3 point mutations where at least one step consists of intermediates that are deleterious, plus a modest selection coefficient of, say, 1 in 10^3 to 1 in 10^4. Those factors will get you in the neighborhood of 1 in 10^20.)

Any adaptive biological feature requiring a mutational pathway of twice that complexity (that is, 4-6 mutations with the intermediate steps being deleterious) is unlikely to have arisen by Darwinian processes during the history of life on Earth.

In the book I then go on to make a general argument that Darwinian processes could not have constructed the molecular foundation of life, but let's leave that aside for now. Let's just concentrate on the last two bullet points here.

Considered in the calmer context of the development of resistance to particular antibiotics (such as, say, a combination of chloroquine plus a second drug that is as difficult to evolve resistance to and works by an independent mechanism) -- rather than in the highly charged context of intelligent design -- even these two statements should seem reasonable to critics of ID. After all, many medical professionals searching for treatments for malaria are trying to do exactly that -- to combine two very improbable mutational steps into an insuperable mutational pathway. If there were a second drug with the efficacy of chloroquine which had always been administered in combination with it (but worked by a different mechanism), resistance to the combination would be expected to arise with a frequency in the neighborhood of 1 in 10^40 -- a medical triumph.
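In code, the multiplication for two independent resistance mechanisms looks like this (the second drug is hypothetical, as in the text):

```python
# White's per-parasite estimate for chloroquine, and a hypothetical second
# drug of equal difficulty acting by an independent mechanism.
p_drug_a = 1e-20
p_drug_b = 1e-20

# Independent mechanisms: the probabilities multiply
p_combined = p_drug_a * p_drug_b
print(p_combined)
```

This is the same rule behind multi-drug therapy generally: independence is what makes the combined probability the product rather than the sum.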

Where a critic might demur is on the question of how many ways exist to solve an evolutionary problem of that mutational complexity. I think that's due to a confusion about the need for particular mutations versus nonspecific mutations. While comparing the math of chloroquine resistance to mutations that have occurred in the primate line leading to humans, Professor Moran wrote, "Does he really mean that there can't be any examples of two mutations occurring in the same gene since humans and chimps diverged?" No, of course not. That overlooks the requirement for the great specificity needed to build biochemical systems. For example, to achieve chloroquine resistance malaria must at least acquire the mutations K76T plus either N75E or N326D in PfCRT -- two very particular amino acid positions in a very particular gene -- not just any two amino acids in any gene. That of course makes a huge difference to the probability.

Moran also writes, "He seems to think that whenever we see such mutations they must have been the only possible way to evolve some new function or feature." Well, no, not the "only possible" way, but, yes, one of a very limited number of possibilities. (I wrote about this in my last article, too.)

In fact the number is limited enough that we can conclude with confidence that it won't affect my argument summarized above. For example, suppose there were ten, or a hundred, different ways to address a particular biochemical challenge. That would barely move the dial on a log scale that's pointing at 1 in 10^20.
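A quick Python check of the log-scale point:

```python
import math

base = 1e-20   # estimated probability of one solution
# log10 of the probability if there were 1, 10, or 100 independent ways
shift = {n: math.log10(n * base) for n in (1, 10, 100)}
# A hundredfold increase in the number of routes moves -20 only to -18
```

On a scale of twenty orders of magnitude, multiplying the number of routes by ten or a hundred changes the exponent by only one or two.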

What's more, Nicholas White's factor of 1 in 10^20 already has built into it all the ways to evolve chloroquine resistance in P. falciparum. In the many malarial cells exposed to chloroquine there have surely occurred all possible single mutations and probably all possible double mutations -- in every malarial gene -- yet only a few mutational combinations in pfcrt are effective. In other words, mutation and selection have already searched all possible solutions of the entire genome whose probability is greater than 1 in 10^20, including mutations to other genes. The observational evidence demonstrates that only a handful are effective. There is no justification for arbitrarily inflating probabilistic resources by citing imaginary alternative evolutionary routes.

To summarize, my argument concerns the evolutionary construction of biochemical features of specificity similar to malarial chloroquine resistance. The little-appreciated point I wanted to emphasize is that the likelihood of success decreases enormously if even a single mutational step of a pathway is disfavored. With more such steps, its improbability becomes prohibitive.


Saturday 12 November 2016

On the echo chamber called settled science II

Crowd Effect: Evolution Stays Afloat by Dint of Sociology
David Klinghoffer


It's the crowd effect: Certainties like Darwinian evolution stay afloat by dint of sociology as much as science or anything else. On a new episode of ID the Future with Andrew McDiarmid, protein chemist Douglas Axe talks about his new book Undeniable: How Biology Confirms Our Intuition That Life Is Designed and the illusion that just because a lot of people say something, that makes it true:

People tend to follow other people. That's the way we are as humans. So there've been lots of ideas through the course of human history, big ideas, that get a big following, that are not true ideas. They turn out not to be correct. They're false. But they'll still have years and years and years, sometimes generations and generations of followers. Whenever you have an idea like that, it generates a huge volume of literature. But the mere fact that there's literature does not prove the idea. It simply proves that lots of people buy into the idea.

A great point to keep in mind and very well articulated. He draws the related distinction between scientific authority and scientific evidence. Support by "authority" for an idea does not mean that the evidence supports it too.

As he notes, Axe reached his own conclusion about intelligent design over the course of 25 years of "hard technical work" in science labs. However, he wanted to reach not just other scientists but everyone. So Dr. Axe, rather than merely simplifying scientific information, sets out an argument that is "non-technical" by its nature.

The book is one that only those who haven't read it can easily dismiss. It is in part a respectful but forceful argument with atheist philosopher Thomas Nagel, author of Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False, which we've reflected on here at length in the past. Axe's biggest conclusion? While "Nagel wants there to be an impersonal force within nature that created us," Dr. Axe rejects this: "The knower who made life is not just some disembodied intelligence. This knower has to be a personal creator."

Well those are fighting words. Knee-jerk crowd followers will roll their eyes without weighing the argument Doug Axe makes. The thoughtful will read the book and consider for themselves.




Rise of the machines? III

Monday 7 November 2016

Butterfly mimicry v. Darwin

Butterfly Mimicry: A "Huge" Problem for Evolutionary Biology
Casey Luskin


Can Darwinian evolution explain the complex coloration patterns found in insects that led to biomimicry? According to an article published late last year in BioScience, Darwinian evolution faces "problems" that are "huge" when trying to account for the origin of biomimicry in butterflies:

The balance of Dazzled and Deceived focuses on the genetics and development of mimetic patterns, as revealed mostly through work with butterflies. The problems here are huge for evolutionary biologists. How does natural selection build a complex organism with all its integrated parts through fixation of random mutations? Butterfly mimicry has been a classic arena in which to tackle this problem precisely because the gambit is so obvious: To be a good mimic of another species requires many pattern elements of bars, lines, colors, and even wing shapes to change at once. Moreover, how can this process produce females that are perfect mimics and males that look nothing of the sort within a single species? These genetic requirements are seemingly at odds with our understanding of gradual evolutionary change and genes of small effect.
(Edmund D. Brodie III, "Butterflies and Battleships," a review of Dazzled and Deceived: Mimicry and Camouflage by Peter Forbes (Yale University Press, 2009), BioScience, Vol. 60(10):850-851 (November 2010).)

Perhaps it's appropriate that Brodie states that "dogma can be dangerous." Or maybe not, given that his explanation for the evolutionary origin of mimicry is nothing but vague:
Forbes takes us through the emergence of E. B. Ford's school of ecological genetics and the basement-made butterfly crosses that eventually began to illuminate the problem of linked-gene complexes ("supergenes"), sex-linked inheritance, and modifier genes. The answers to the mimicry paradox, preliminary as they are still, inform modern evolutionary-developmental studies in all species and have launched the current effort to map a number of butterfly genomes. These genomic excursions promise to uncover the genetic architecture of mimetic patterns in a variety of species and in doing so uncover the fundamental basis of adaptation and speciation.

If our understanding of the genetic basis for these mimetic patterns is still "preliminary," then it would seem we're even further away from understanding how they evolved. Apparently these "problems" that are "huge for evolutionary biologists" are not going away anytime soon.

On competing models of human ancestry.

In BIO-Complexity, a New Model for Human Ancestry
Ann Gauger 

In 2012 at a scientific conference I met a Swedish population geneticist named Ola Hössjer. He and I sat down in the lobby of the hotel where we were staying to discuss what kind of population genetics model might make it possible to test whether humanity could have come from a single first pair of humans. The motivation for doing so was the repeated challenge from other population geneticists claiming that we humans had to come from a population of thousands, not just two. He and I both knew the assumptions that had to go into the models such population geneticists constructed, and wondered if different starting assumptions would yield different results.

Population genetics is a field that uses math to model how genes and mutations are distributed in populations and how that distribution changes over time. It can be used to model our ancestry, to reconstruct our genetic history as a population as a whole, or as smaller subgroupings such as European, Asian, or African, or even tribes within a larger grouping. The standard population genetics models that reconstruct ancestral history work backward by a process of coalescence to a starting point where everything is identical -- everything starts out the same, with one set of chromosomes, and diverges from there by the accumulation of mutations and the processes of recombination, genetic drift, and natural selection.
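For readers who want to see what this kind of modeling involves, here is a minimal, textbook-style Wright-Fisher drift simulation in Python. It is not Hössjer's model -- just an illustration of the generation-by-generation bookkeeping population genetics models perform:

```python
import random

def wright_fisher(N, p0, generations, seed=0):
    """Toy Wright-Fisher model: track one allele's frequency under pure drift.
    N is the number of gene copies; each generation resamples from the pool."""
    rng = random.Random(seed)
    p = p0
    for _ in range(generations):
        # Each of the N gene copies in the next generation is drawn at random
        # from the current allele-frequency pool
        count = sum(1 for _ in range(N) if rng.random() < p)
        p = count / N
        if p in (0.0, 1.0):    # allele lost or fixed: drift is over
            break
    return p

# A brand-new mutation (frequency 1/N) usually drifts to loss
final = wright_fisher(N=500, p0=1/500, generations=2000)
```

Real models layer mutation, recombination, selection, and demography on top of this resampling step, which is why the forward-in-time bookkeeping the article describes becomes computationally heavy so quickly.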

Back in that hotel lobby, Ola and I quickly came up with a list of variables that would need to be accounted for in any model, things that are unknown aspects of the history of our origin, and we talked about the computational problems of any forward-looking model, one that goes from two individuals at the start to something like the present population. To keep track of all the variables and to trace the possible genetic changes quickly becomes computationally too intense to go very far. I personally thought such a model was intractable and beyond anyone's ability to build. Was I wrong!

A little over a year ago Ola presented a model to our now co-author Colin Reeves and me that took all those variables we had discussed in Copenhagen into account. It is the most comprehensive population genetics model I have seen anywhere -- it's a brilliant piece of work. Ola found a way to solve the problem of the explosive nature of forward-directed models I mentioned above. He uses the same coalescent technique of the standard models to reconstruct an ancestral tree from a few thousand individuals in the present time, going backward to a starting point of two. His model then reverses the process by going forward in time, using the tree as a framework to keep track of genetic changes.

The model is general enough that it can be of use to any geneticist to test the effects on genetic diversity of processes such as migration, age structure, mating behavior, and other aspects of population dynamics and demography.

The key assumption that distinguishes our model from the standard ones is that we assume that the first pair started out with heterogeneous chromosomes -- four distinct sets, two sets for each individual. The standard population genetics models work backward assuming everything starts from a single point. We are proposing that things started out different, not the same, with diversity present from the beginning in the genomes of the starting first pair.

We still need to code this model, which is a work in progress being done by Colin Reeves, and we hope others as well, as it is a massive project, and will require time and resources. But when it's completed, we will be able to test the hypothesis that we can recreate modern genetic diversity starting from an original pair with original genetic diversity. Should we be able to demonstrate this, there will be two competing models for human origins, one that says we came from a population of thousands, and ours that says we came from a population of two. We will see which best fits the available data and yields the most insight.


The model has now been published in the journal BIO-Complexity, in two parts, the first being a general introduction to population genetics and the rationale for the model, and the second being the model itself. My hope is that this model will be the catalyst for much research and discussion, on both sides.

The making of an anti-Darwinian bomb thrower.

Behe -- The Makings of a Revolutionary
David Klinghoffer





It's a pleasing coincidence that in the new documentary Revolutionary we look back to the origins of Michael Behe's insights on irreducible complexity, published twenty years ago in Darwin's Black Box, just as we look forward to the results of the potentially historic Royal Society meeting in London, underway at this moment.

No one scheduled to speak there is an advocate of intelligent design, but the scientific critique of Darwinism that Dr. Behe was crucial in launching has that meeting, at least in part, as its fruit.

Would such a conference, raising basic questions about the adequacy of neo-Darwinism, be happening now if it weren't for Behe, and before him Denton and Johnson? There's reason to wonder.

In a new brief video, above, Behe discusses the roots of his thinking, including a Science article that urged professors to warn their students against Phil Johnson's book Darwin on Trial, prompting a stinging missive from Behe that Science published in a subsequent issue.


Stephen Meyer and Paul Nelson recall their responses to meeting and working with Behe back in the early Nineties, as the revolution was just getting under way. Not that long ago, but how things have changed! Get your copy of Revolutionary, on DVD or Blu-ray, today.

Sunday 6 November 2016

File under "Well said" XLI

The end of life is to be like God, and the soul following God will be like Him. Socrates

A clash of Titans XXXVI.

Is big pharma why you can't afford quality care?: Pros and cons

For undocumented aliens, the open hand or the boot?: Pros and cons

Just enough religion to make us hate? II

The state's all seeing eye?

Will Justice finally come to Jehovah's servants in South Korea?

South Korea Appellate Court Finds Conscientious Objectors Not Guilty

On October 18, 2016, the appellate division of Gwangju District Court held that conscientious objectors Hye-min Kim, Lak-hoon Cho, and Hyeong-geun Kim are not guilty of evading military service. These three men, all of whom are Jehovah’s Witnesses, are the first to receive a not-guilty decision on this issue at the appellate court level in South Korea.

Judge Young-sik Kim explained: “The Court believes that they refuse to perform military duty because of their religious faith and conscience. Freedom of religion and conscience are constitutional rights, which are not subject to restriction by punishment.”


If the prosecutor appeals against this ruling, the case will go before the Supreme Court for review. Over 40 cases are already pending before the Supreme Court, involving men who have been declared guilty on this issue. Philip Brumley, General Counsel of Jehovah’s Witnesses, stated: “Even though South Korea’s Supreme Court and Constitutional Court have until now refused to recognize the right to conscientious objection, the appellate court has applied the international standard recognizing the right. This recognition has been confirmed in over 500 rulings by the UN Human Rights Committee.”

Just being good neighbours.

Witnesses Repair 60-Year-Old Dam in Warwick

NEW YORK—Jehovah’s Witnesses finished the construction of their new world headquarters in Warwick, New York, in August 2016. In conjunction with the construction plan, the Witnesses have also completed the rehabilitation of the Blue Lake Dam, with the assistance of SUEZ Water New York Inc.


Soon after purchasing the property in 2009, the Witnesses made plans to upgrade the severely deteriorated Blue Lake Dam. The dam (pictured above, in foreground) is adjacent to the Witnesses’ new world headquarters and impounds Blue Lake (also known as Sterling Forest Lake). A subsequent inspection by the Department of Environmental Conservation (DEC) confirmed that the dam was leaking water and the gate valve was inoperable. With the 195 homes of The Woodlands at Tuxedo subdivision less than a mile from Blue Lake, the DEC classified the dam as high-hazard.



The Witnesses’ new world headquarters (bottom left) is adjacent to Blue Lake Dam (center right).

“The dam was clearly leaking,” explains Jeffrey Hutchinson, park manager of Sterling Forest State Park at the time, “and if it decided to go, there could have been some serious consequences. All of the homes down at the Woodlands subdivision, or a majority of them, could have been wiped out.”

Robert R. Werner, president of The Woodlands at Tuxedo Homeowners Association, states: “It is clear to me that had Jehovah’s Witnesses not stepped up, this project would not have been done until a failure of the structure had occurred. If that had happened, we would have had potential loss of life and property.”


The Woodlands at Tuxedo subdivision (top left) is less than a mile from the world headquarters of Jehovah’s Witnesses (bottom right) and Blue Lake Dam.
“In 2011, the Echo Lake Dam, located less than 30 miles from Blue Lake, failed and wiped out portions of the East Village in Tuxedo, New York,” comments Mr. Hutchinson. Town engineers estimated that 100 million gallons of water rushed down Ramapo River when the Echo Lake Dam gave way. Echo Lake has a surface area of 13 acres (5.2 ha)—less than one eighth the 115-acre (46.5 ha) surface area of Blue Lake.

The Blue Lake Dam, created in 1956, is located on the eastern side of the lake and originally comprised two parts: a main earth dam and a primary concrete spillway. A safety valve, used to lower the water level in the event of an emergency, was also installed at the bottom of the lake.

Richard Devine, chairman of the Witnesses’ Warwick Construction Project Committee, explains: “We are happy that the dam rehabilitation project was successful, and we certainly appreciated the assistance from SUEZ Water. Our construction team shored up the dam, replaced the old, broken valve, added an auxiliary spillway, and raised and fortified the primary concrete spillway. The completion of the project ensures that the dam now meets the codes and industry standards for safety.”

Mr. Hutchinson sums up his overall view of the Witnesses and their work on the project: “You people do a lot of good for the local communities and help out where you can. Your construction quality is first-rate and environmentally conscious.”

Media Contact:


David A. Semonian, Office of Public Information, 1-718-560-5000


On the no free lunch principle re:information.

Conservation of Information Made Simple
William A. Dembski, August 28, 2012

In the 1970s, Doubleday published a series of books with the title "Made Simple." This series covered a variety of academic topics (Statistics Made Simple, Philosophy Made Simple, etc.). The 1980s saw the "For Dummies" series, which expanded the range of topics to include practical matters such as auto repair. The "For Dummies" series has since been replicated, notably by guides for "Complete Idiots." All books in these series attempt, with varying degrees of success, to break down complex subjects, helping students to learn a topic, especially when they've been stymied by more conventional approaches and textbooks. 

In this article, I'm going to follow the example of these books, laying out as simply and clearly as I can what conservation of information is and why it poses a challenge to conventional evolutionary thinking. I'll break this concept down so that it seems natural and straightforward. Right now, it's too easy for critics of intelligent design to say, "Oh, that conservation of information stuff is just mumbo-jumbo. It's part of the ID agenda to make a gullible public think there's some science backing ID when it's really all smoke and mirrors." Conservation of information is not a difficult concept and once it is understood, it becomes clear that evolutionary processes cannot create the information required to power biological evolution.

Conservation of Information: A Brief History

Conservation of information is a term with a short history. Biologist Peter Medawar used it in the 1980s to refer to mathematical and computational systems that are limited to producing logical consequences from a given set of axioms or starting points, and thus can create no novel information (everything in the consequences is already implicit in the starting points). His use of the term is the first that I know of, though the idea he captured with it is much older. Note that he called it the "Law of Conservation of Information" (see his The Limits of Science, 1984).

Computer scientist Tom English, in a 1996 paper, also used the term conservation of information, though synonymously with the then recently proved results by Wolpert and Macready about No Free Lunch (NFL). In English's version of NFL, "the information an optimizer gains about unobserved values is ultimately due to its prior information of value distributions." As with Medawar's form of conservation of information, information for English is not created from scratch but rather redistributed from existing sources.

Conservation of information, as the idea is being developed and gaining currency in the intelligent design community, is principally the work of Bob Marks and myself, along with several of Bob's students at Baylor (see the publications page at www.evoinfo.org). Conservation of information, as we use the term, applies to search. Now search may seem like a fairly restricted topic. Unlike conservation of energy, which applies at all scales and dimensions of the universe, conservation of information, in focusing on search, may seem to have only limited physical significance. But in fact, conservation of information is deeply embedded in the fabric of nature, and the term does not misrepresent its own importance.

Search is a very general phenomenon. The reason we don't typically think of search in broad terms applicable to nature generally is that we tend to think of it narrowly, in terms of finding a particular predefined object. Thus our stock example of search is losing one's keys, with search then being the attempt to recover them. But we can also search for things that are not pre-given in this way. Sixteenth-century explorers were looking for new, uncharted lands. They knew when they found them that their search had been successful, but they didn't know exactly what they were looking for. U2 has a song titled "I Still Haven't Found What I'm Looking For." How will Bono know once he's found what he's looking for? Often we know that we've found it even though it's nothing like what we expected, and sometimes it even violates our expectations.

Another problem with extending search to nature in general is that we tend to think of search as confined to human contexts. Humans search for keys, and humans search for uncharted lands. But, as it turns out, nature is also quite capable of search. Go to Google and search on the term "evolutionary search," and you'll get quite a few hits. Evolution, according to some theoretical biologists, such as Stuart Kauffman, may properly be conceived as a search (see his book Investigations). Kauffman is not an ID guy, so there's no human or human-like intelligence behind evolutionary search as far as he's concerned. Nonetheless, for Kauffman, nature, in powering the evolutionary process, is engaged in a search through biological configuration space, searching for and finding ever-increasing orders of biological complexity and diversity.

An Age of Search

Evolutionary search is not confined to biology but also takes place inside computers. The field of evolutionary computing (which includes genetic algorithms) falls broadly under that area of mathematics known as operations research, whose principal focus is mathematical optimization. Mathematical optimization is about finding solutions to problems where the solutions admit varying and measurable degrees of goodness (optimality). Evolutionary computing fits this mold, seeking items in a search space that achieve a certain level of fitness. These are the optimal solutions. (By the way, the irony of doing a Google "search" on the target phrase "evolutionary search," described in the previous paragraph, did not escape me. Google's entire business is predicated on performing optimal searches, where optimality is gauged in terms of the link structure of the web. We live in an age of search!)
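As a concrete illustration of evolutionary computing, here is a minimal genetic algorithm in Python for the toy "OneMax" problem -- evolve a bit string toward all ones, with fitness equal to the count of ones. The problem and all parameters are illustrative, not drawn from any particular study:

```python
import random

def evolve(length=20, pop_size=30, generations=100, mut_rate=0.02, seed=1):
    """Tiny genetic algorithm: selection, one-point crossover, point mutation."""
    rng = random.Random(seed)
    # Random initial population of bit strings
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)        # fitness = number of ones
        parents = pop[: pop_size // 2]         # truncation selection: keep top half
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)     # one-point crossover
            child = a[:cut] + b[cut:]
            # Flip each bit with small probability (point mutation)
            child = [bit ^ (rng.random() < mut_rate) for bit in child]
            children.append(child)
        pop = children
    return max(sum(ind) for ind in pop)        # best fitness found

best = evolve()
```

The target here (all ones) is fixed in advance by the programmer, which is precisely the feature of search that the conservation-of-information discussion below turns on.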

If the possibilities connected with search now seem greater to you than they have in the past, extending beyond humans to computers and biology in general, they may still seem limited in that physics appears to know nothing of search. But is this true? The physical world is life-permitting -- its structure and laws allow (though they are far from necessitating) the existence of not just cellular life but also intelligent multicellular life. For the physical world to be life-permitting in this way, its laws and fundamental constants need to be configured in very precise ways. Moreover, it seems far from mandatory that those laws and constants had to take the precise form that they do. The universe itself, therefore, can be viewed as the solution to the problem of making life possible. But problem solving itself is a form of search, namely, finding the solution (among a range of candidates) to the problem.

Still, for many scientists, search fits uneasily in the natural sciences. Something unavoidably subjective and teleological seems involved in search. Search always involves a goal or objective, as well as criteria of success and failure (as judged by what or whom?) depending on whether and to what degree the objective has been met. Where does that objective, typically known as a target, come from other than from the minds of human inquirers? Are we, as pattern-seeking and pattern-inventing animals, simply imposing these targets/patterns on nature even though they have no independent, objective status?

This concern has merit, but it need not be overblown. If we don't presuppose a materialist metaphysics that makes mind, intelligence, and agency an emergent property of suitably organized matter, then it is an open question whether search and the teleology inherent in it are mere human constructions on the one hand, or, instead, realities embedded in nature on the other. What if nature is itself the product of mind, and the patterns it exhibits reflect solutions to search problems formulated by such a mind?

Scientific inquiry that's free of prejudice and narrowly held metaphysical assumptions should, it seems, leave open both these possibilities. After all, the patterns we're talking about are not like finding a vague likeness of Santa Claus's beard in a cloud formation. Who, if they look hard enough, won't see Santa's beard? The fine-tuning of nature's laws and constants that permits life to exist at all is not like this. It is a remarkable pattern and may properly be regarded as the solution to a search problem as well as a fundamental feature of nature, or what philosophers would call a natural kind, and not merely a human construct. Whether an intelligence is responsible for the success of this search is a separate question. The standard materialist line in response to such cosmological fine-tuning is to invoke multiple universes and view the success of this search as a selection effect: most searches ended without a life-permitting universe, but we happened to get lucky and live in a universe hospitable to life.

In any case, it's possible to characterize search in a way that leaves the role of teleology and intelligence open without either presupposing them or deciding against them in advance. Mathematically speaking, search always occurs against a backdrop of possibilities (the search space), with the search being for a subset within this backdrop of possibilities (known as the target). Success and failure of search are then characterized in terms of a probability distribution over this backdrop of possibilities, the probability of success increasing to the degree that the probability of locating the target increases.
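The mathematical characterization above can be sketched in a few lines of code. This is an illustrative toy model only; the names (`space`, `target`, `dist`) are my own and are not drawn from the source.

```python
from fractions import Fraction

# A search space: six items, standing in for any backdrop of possibilities.
space = [1, 2, 3, 4, 5, 6]

# The target is a subset of the search space; here, the single item 6.
target = {6}

# A probability distribution over the search space (uniform here).
dist = {x: Fraction(1, 6) for x in space}

# Success of the search is characterized by the probability mass the
# distribution places on the target.
p_success = sum(dist[x] for x in target)
print(p_success)  # 1/6
```

Changing the distribution (say, by acquiring information that favors the target) changes the probability of success without changing the space or the target.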

For example, consider all possible L-amino acid sequences joined by peptide bonds of length 100. This we can take as our reference class or backdrop of possibilities -- our search space. Within this class, consider those sequences that fold and thus might form a functioning protein. This, let us say, is the target. This target is not merely a human construct. Nature itself has identified this target as a precondition for life -- no living thing that we know can exist without proteins. Moreover, this target admits some probabilistic estimates. Beginning with the work of Robert Sauer, cassette mutagenesis and other experiments of this sort performed over the last three decades suggest that the target has probability no more than 1 in 10^60 (assuming a uniform probability distribution over all amino acid sequences in the reference class).

The mathematics characterizing search in this way is straightforward and general. Whether in specific situations a search so characterized also involves unavoidably subjective human elements or reflects objectively given realities embedded in nature can be argued independently of the mathematics. Such an argument speaks to the interpretation of the search, not to the search itself. Such an argument parallels controversies surrounding the interpretation of quantum mechanics: whether quantum mechanics is inherently a mind-based, observer-dependent theory; whether it can be developed independently of observers; whether it is properly construed as reflecting a deterministic, mind-independent, multiverse reality, etc. Quantum mechanics itself is a single, well-defined theory that admits several formulations, all of which are mathematically equivalent. Likewise, search as described here has a single, straightforward theoretical underpinning.

An Easter Egg Hunt, from the Scientific Vantage

One clarification is worth inserting here while we're still setting the stage for conservation of information. For most people, when it comes to search, the important thing is the outcome of the search. Take an Easter egg hunt. The children looking for Easter eggs are concerned with whether they find the eggs. From the scientific vantage, however, the important thing about search is not the particular outcomes but the probability distribution over the full range of possible outcomes in the search space (this parallels communication theory, in which what's of interest is not particular messages sent across a communication channel but the range of possible messages and their probability distribution). The problem with just looking at outcomes is that a search might get lucky and find the target even if the probabilities are against it.

Take an Easter egg hunt in which there's just one egg carefully hidden somewhere in a vast area. This is the target and blind search is highly unlikely to find it precisely because the search space is so vast. But there's still a positive probability of finding the egg even with blind search, and if the egg is discovered, then that's just how it is. It may be, because the egg's discovery is so improbable, that we might question whether the search was truly blind and therefore reject this (null) hypothesis. Maybe it was a guided search in which someone, with knowledge of the egg's whereabouts, told the seeker "warm, warmer, no, colder, warmer, warmer, hot, hotter, you're burning up." Such guidance gives the seeker added information that, if the information is accurate, will help locate the egg with much higher probability than mere blind search -- this added information changes the probability distribution.

But again, the important issue, from a scientific vantage, is not how the search ended but the probability distribution under which the search was conducted. You don't have to be a scientist to appreciate this point. Suppose you've got a serious medical condition that requires treatment. Let's say there are two treatment options. Which option will you go with? Leaving cost and discomfort aside, you'll want the treatment with the better chance of success. This is the more effective treatment. Now, in particular circumstances, it may happen that the less effective treatment leads to a good outcome and the more effective treatment leads to a bad outcome. But that's after the fact. In deciding which treatment to take, you'll be a good scientist and go with the one that has the higher probability of success.

The Easter egg hunt example provides a little preview of conservation of information. Blind search, if the search space is too large and the number of Easter eggs is too small, is highly unlikely to successfully locate the eggs. A guided search, in which the seeker is given feedback about his search by being told when he's closer or farther from the egg, by contrast, promises to dramatically raise the probability of success of the search. The seeker is being given vital information bearing on the success of the search. But where did this information that gauges proximity of seeker to egg come from? Conservation of information claims that this information is itself as difficult to find as locating the egg by blind search, implying that the guided search is no better at finding the eggs than blind search once this information is accounted for.

Conservation of Information in Evolutionary Biology

In the sequel, I will focus mainly on conservation of information as it applies to search in evolutionary biology (and by extension in evolutionary computing), trusting that once the case for conservation of information is made in biology, its scope and applicability for the rest of the natural sciences will be that much more readily accepted and acceptable. As it is, evolutionary biologists possessing the mathematical tools to understand search are typically happy to characterize evolution as a form of search. And even those with minimal knowledge of the relevant mathematics fall into this way of thinking.

Take Brown University's Kenneth Miller, a cell biologist whose grasp of the relevant mathematics I can't speak to. Miller, in attempting to refute ID, regularly describes examples of experiments in which some biological structure is knocked out along with its function, and then, under selection pressure, a replacement structure is evolved that recovers the function. What makes these experiments significant for Miller is that they are readily replicable, which means that the same systems with the same knockouts will undergo the same recovery under the same suitable selection regime. In our characterization of search, we would say the search for structures that recover function in these knockout experiments achieves success with high probability.

Suppose, to be a bit more concrete, we imagine a bacterium capable of producing a particular enzyme that allows it to live off a given food source. Next, we disable that enzyme, not by removing it entirely but by, say, changing a DNA base in the coding region for this protein, thus changing an amino acid in the enzyme and thereby drastically lowering its catalytic activity in processing the food source. Granted, this example is a bit stylized, but it captures the type of experiment Miller regularly cites.

So, taking these modified bacteria, the experimenter now subjects them to a selection regime that starts them off on a food source for which they don't need the enzyme that's been disabled. But, over time, they get more and more of the food source for which the enzyme is required and less and less of other food sources for which they don't need it. Under such a selection regime, the bacterium must either evolve the capability of processing the food for which previously it needed the enzyme, presumably by mutating the damaged DNA that originally coded for the enzyme and thereby recovering the enzyme, or starve and die.

So where's the problem for evolution in all this? Granted, the selection regime here is a case of artificial selection -- the experimenter is carefully controlling the bacterial environment, deciding which bacteria get to live or die. But nature seems quite capable of doing something similar. Nylon, for instance, is a synthetic product invented by humans in 1935, and thus was absent from the environment of bacteria for most of their history. And yet, bacteria have evolved the ability to digest nylon by developing the enzyme nylonase. Yes, these bacteria are gaining new information, but they are gaining it from their environments, environments that, presumably, need not be subject to intelligent guidance. No experimenter, applying artificial selection, for instance, set out to produce nylonase.

To see that there remains a problem for evolution in all this, we need to look more closely at the connection between search and information and how these concepts figure into a precise formulation of conservation of information. Once we have done this, we'll return to the Miller-type examples of evolution to see why evolutionary processes do not, and indeed cannot, create the information needed by biological systems. Most biological configuration spaces are so large and the targets they present are so small that blind search (which ultimately, on materialist principles, reduces to the jostling of life's molecular constituents through forces of attraction and repulsion) is highly unlikely to succeed. As a consequence, some alternative search is required if the target is to stand a reasonable chance of being located. Evolutionary processes driven by natural selection constitute such an alternative search. Yes, they do a much better job than blind search. But at a cost -- an informational cost, a cost these processes have to pay but which they are incapable of earning on their own.

In the information-theory literature, information is usually characterized as the negative logarithm to the base two of a probability (or some logarithmic average of probabilities, often referred to as entropy). This has the effect of transforming probabilities into bits and of allowing them to be added (like money) rather than multiplied (like probabilities). Thus, a probability of one-eighth, which corresponds to tossing three heads in a row with a fair coin, corresponds to three bits, which is the negative logarithm to the base two of one-eighth. Such a logarithmic transformation of probabilities is useful in communication theory, where what gets moved across communication channels is bits rather than probabilities and the drain on bandwidth is determined additively in terms of number of bits. Yet, for the purposes of this "Made Simple" paper, we can characterize information, as it relates to search, solely in terms of probabilities, also cashing out conservation of information purely probabilistically.
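The probabilities-to-bits conversion just described can be checked directly. A minimal sketch, using only the standard library:

```python
import math

# -log2 of a probability converts it into bits.
p = 1 / 8                 # three heads in a row with a fair coin
bits = -math.log2(p)
print(bits)               # 3.0

# Logarithms let information add where probabilities multiply:
p_joint = (1 / 8) * (1 / 4)       # two independent events
bits_joint = -math.log2(p_joint)  # 3 bits + 2 bits
print(bits_joint)                 # 5.0
```

This is why bits behave "like money": the cost of two independent improbable events is the sum of their individual costs in bits.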

Probabilities, treated as information used to facilitate search, can be thought of in financial terms as a cost -- an information cost. Think of it this way. Suppose there's some event you want to have happen. If it's certain to happen (i.e., has probability 1), then you own that event -- it costs you nothing to make it happen. But suppose instead its probability of occurring is less than 1, let's say some probability p. This probability then measures a cost to you of making the event happen. The more improbable the event (i.e., the smaller p), the greater the cost. Sometimes you can't increase the probability of making the event occur all the way to 1, which would make it certain. Instead, you may have to settle for increasing the probability to q, where q is less than 1 but greater than p. That increase, however, must also be paid for. And in fact, we do pay to raise probabilities all the time. For instance, many students pay tuition costs to obtain a degree that will improve their prospects (i.e., probabilities) of landing a good, high-paying job.

A Fair Lottery

To illustrate this point more precisely, imagine that you are playing a lottery. Let's say it's fair, so that the government doesn't skim anything off the top (i.e., everything paid into the lottery gets paid out to the winner) and one ticket is sure to be the winner. Let's say a million lottery tickets have been purchased so far at one dollar apiece, exactly one of which is yours. Each lottery ticket therefore has the same probability of winning, so your lottery ticket has a one in a million chance of coming out on top (which is your present p value), entailing a loss of one dollar if you lose and a gain of nearly a million dollars if you win ($999,999 to be exact). Now let's say you really want to win this lottery -- for whatever reason you earnestly desire to hold the winning ticket in your hand. In that case, you can purchase additional tickets. By purchasing these, you increase your chance of winning the lottery. Let's say you purchase an additional million tickets at one dollar apiece. Doing so has now boosted your probability of winning the lottery from .000001 to .5000005, or to about one-half.

Increasing the probability of winning the lottery has therefore incurred a cost. With a probability of roughly .5 of winning the lottery, you are now much more likely to gain approximately one million dollars. But it also cost you a million dollars to increase your probability of winning. As a result, your expected winnings, computed in standard statistical terms as the probability of losing multiplied by what you would lose subtracted from the probability of winning multiplied by what you would win, equals zero. Moreover, because this is a fair lottery, it equals zero when you only had one ticket purchased and it equals zero when you had an additional million tickets purchased. Thus, in statistical terms, investing more in this lottery has gained you nothing.
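The expected-winnings computation described above can be verified exactly with rational arithmetic. A short sketch (the variable names are my own):

```python
from fractions import Fraction

# Fair lottery: 2,000,000 one-dollar tickets in total, of which you hold
# 1,000,001 (your original ticket plus a million more). The entire pot
# is paid out to the winner.
total_tickets = 2_000_000
yours = 1_000_001
pot = total_tickets  # one dollar per ticket, nothing skimmed

p_win = Fraction(yours, total_tickets)
net_if_win = pot - yours   # winnings minus everything you paid in
net_if_lose = -yours       # you lose what you paid in

# Expected winnings: probability of winning times net gain, minus
# probability of losing times net loss.
expected = p_win * net_if_win + (1 - p_win) * net_if_lose
print(expected)  # 0
```

Rerunning the computation with `yours = 1` also yields zero, confirming that in a fair lottery the expected value is the same no matter how many tickets you buy.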

Conservation of information is like this. Not exactly like this because conservation of information focuses on search whereas the previous example focused on the economics of expected utility. But just as increasing your chances of winning a lottery by buying more tickets offers no real gain (it is not a long-term strategy for increasing the money in your pocket), so conservation of information says that increasing the probability of successful search requires additional informational resources that, once the cost of locating them is factored in, do nothing to make the original search easier.

To see how this works, let's consider a toy problem. Imagine that your search space consists of only six items, labeled 1 through 6. Let's say your target is item 6 and that you're going to search this space by rolling a fair die once. If it lands on 6, your search is successful; otherwise, it's unsuccessful. So your probability of success is 1/6. Now let's say you want to increase the probability of success to 1/2. You therefore find a machine that flips a fair coin and delivers item 6 to you if it lands heads and delivers some other item in the search space if it lands tails. What a great machine, you think. It significantly boosts the probability of obtaining item 6 (from 1/6 to 1/2).

But then a troubling question crosses your mind: Where did this machine that raises your probability of success come from? A machine that tosses a fair coin and that delivers item 6 if the coin lands heads and some other item in the search space if it lands tails is easily reconfigured. It can just as easily deliver item 5 if it lands heads and some other item if it lands tails. Likewise for all the remaining items in the search space: a machine such as the one described can privilege any one of the six items in the search space, delivering it with probability 1/2 at the expense of the others. So how did you get the machine that privileges item 6? Well, you had to search among all those machines that flip coins and with probability 1/2 deliver a given item, selecting the one that delivers item 6 when it lands heads. And what's the probability of finding such a machine?

To keep things simple, let's imagine that our machine delivers item 6 with probability 1/2 and each of items 1 through 5 with equal probability, that is, with probability 1/10. Accordingly, this machine is one of six possible machines configured in essentially the same way. There's another machine that flips a coin, delivers item 1 from the original search space if it lands heads, and delivers any one of items 2 through 6 with probability 1/10 each if the coin lands tails. And so on. Thus, of these six machines, one delivers item 6 with probability 1/2 and the remaining five machines deliver item 6 with probability 1/10. Since there are six machines, only one of which delivers item 6 (our target) with high probability, and since nothing but a label distinguishes one machine from any other in this setup (the machines are, as mathematicians would say, isomorphic), the principle of indifference applies to these machines and prescribes that the probability of getting the machine that delivers item 6 with probability 1/2 is the same as that of getting any other machine, and is therefore 1/6.

But a probability of 1/6 to find a machine that delivers item 6 with probability 1/2 is no better than our original probability of 1/6 of finding the target simply by tossing a die. In fact, once we have this machine, we still have only a 50-50 chance of locating item 6. Finding this machine incurs a probability cost of 1/6, and once this cost is incurred we still have a probability cost of 1/2 of finding item 6. Since probability costs increase as probabilities decrease, we're actually worse off than we were at the start, where we simply had to roll a die that, with probability 1/6, locates item 6.

The probability of finding item 6 using this machine, once we factor in the probabilistic cost of securing the machine, therefore ends up being 1/6 x 1/2 = 1/12. So our attempt to increase the probability of finding item 6 by locating a more effective search for that item has actually backfired, making it in the end even more improbable that we'll find item 6. Conservation of information says that this is always a danger when we try to increase the probability of success of a search -- that the search, instead of becoming easier, remains as difficult as before or may even, as in this example, become more difficult once additional underlying information costs, associated with improving the search and often hidden, as in this case by finding a suitable machine, are factored in.

Why It Is Called "Conservation" of Information

The reason it's called "conservation" of information is that the best we can do is break even, rendering the search no more difficult than before. In that case, information is actually conserved. Yet often, as in this example, we may actually do worse by trying to improve the probability of a successful search. Thus, we may introduce an alternative search that seems to improve on the original search but that, once the costs of obtaining this search are themselves factored in, in fact exacerbates the original search problem.

In referring to ease and difficulty of search, I'm not being mathematically imprecise. Ease and difficulty, characterized mathematically, are always complexity-theoretic notions presupposing an underlying complexity measure. In this case, complexity is cashed out probabilistically, so the complexity measure is a probability measure, with searches becoming easier to the degree that successfully locating targets is more probable, and searches becoming more difficult to the degree that successfully locating targets is more improbable. Accordingly, it also makes sense to talk about the cost of a search, with the cost going up the more difficult the search, and the cost going down the easier the search.

In all these discussions of conservation of information, there's always a more difficult search that gets displaced by an easier search, but once the difficulty of finding the easier search (difficulty being understood probabilistically) is factored in, there's no gain, and in fact the total cost may have gone up. In other words, the actual probability of locating the target with the easier search is no greater, and may actually be less, than the probability of locating the target with the more difficult search once the probability of locating the easier search is factored in. All of this admits a precise mathematical formulation. Inherent in such a formulation is treating search itself as subject to search. If this sounds self-referential, it is. But it also makes good sense.

To see this, consider a treasure hunt. Imagine searching for a treasure chest buried on a large island. We consider two searches, a more difficult one and an easier one. The more difficult search, in this case, is a blind search in which, without any knowledge of where the treasure is buried, you randomly meander about the island, digging here or there for the treasure. The easier search, by contrast, is to have a treasure map in which "x marks the spot" where the treasure is located, and where you simply follow the map to the treasure.

But where did you get that treasure map? Mapmakers have made lots of maps of that island, and for every map that accurately marks the treasure's location, there are many, many others that incorrectly mark its location. Indeed, for any place on the island, there's a map that marks it with an "x." So how do you find your way among all these maps to one that correctly marks the treasure's location? Evidently, the search for the treasure has been displaced to a search for a map that locates the treasure. Each map corresponds to a search, and locating the right map corresponds to a search for a search (abbreviated, in the conservation of information literature, as S4S).

Conservation of information, in this example, says that the probability of locating the treasure by first searching for a treasure map that accurately identifies the treasure's location is no greater, and may be less, than the probability of locating the treasure simply by blind search. This implies that the easier search (i.e., the search with treasure map in hand), once the cost of finding it is factored in, has not made the actual overall search any easier. In general, conservation of information says that when a more difficult search gets displaced by an easier search, the probability of finding the target by first finding the easier search and then using the easier search to find the target is no greater, and often is less, than the probability of finding the target directly with the more difficult search.

In the Spirit of "No Free Lunch"

Anybody familiar with the No Free Lunch (NFL) theorems will immediately see that conservation of information is very much in the same spirit. The upshot of the NFL theorems is that no evolutionary search outperforms blind search once the information inherent in fitness (i.e., the fitness landscape) is factored out. NFL is a great equalizer. It says that all searches are essentially equivalent to blind search when looked at not from the vantage of finding a particular target but when averaged across the different possible targets that might be searched.

If NFL tends toward egalitarianism by arguing that no search is, in itself, better than blind search when the target is left unspecified, conservation of information tends toward elitism by making as its starting point that some searches are indeed better than others (especially blind search) at locating particular targets. Yet, conservation of information quickly adds that the elite status of such searches is not due to any inherent merit of the search (in line with NFL) but to information that the search is employing to boost its performance.

Some searches do better, indeed much better, than blind search, and when they do, it is because they are making use of target-specific information. Conservation of information calculates the information cost of this performance increase and shows how it must be counterbalanced by a loss in search performance elsewhere (specifically, by needing to search for the information that boosts search performance) so that global performance in locating the target is not improved and may in fact diminish.

Conservation of information, in focusing on search for the information needed to boost search performance, suggests a relational ontology between search and objects being searched. In a relational ontology, things are real not as isolated entities but in virtue of their relation to other things. In the relational ontology between search and the objects being searched, each finds its existence in the other. Our natural tendency is to think of objects as real and search for those objects as less real in the sense that search depends on the objects being searched but objects can exist independently of search. Yet objects never come to us in themselves but as patterned reflections of our background knowledge, and thus as a target of search.

Any scene, indeed any input to our senses, reaches our consciousness only by aspects becoming salient, and this happens because certain patterns in our background knowledge are matched to the exclusion of others. In an extension of George Berkeley's "to be is to be perceived," conservation of information suggests that "to be perceived is to be an object of search." By transitivity of reasoning, it would then follow that to be is to be an object of search. And since search is always search for an object, search and the object of search become, in this way of thinking, mutually ontologizing, giving existence to each other. Conservation of information then adds to this by saying that search can itself be an object of search.

Most relational ontologies are formulated in terms of causal accessibility, so that what renders one thing real is its causal accessibility to another thing. But since search is properly understood probabilistically, the form of accessibility relevant to a relational ontology grounded in search is probabilistic. Probabilistic rather than causal accessibility grounds the relational ontology of search. Think of a needle in a haystack, only imagine the needle is the size of an electron and the haystack is the size of the known physical universe. Searches with such a small probability of success via blind or random search are common in biology. Biological configuration spaces of possible genes and proteins, for instance, are immense, and finding a functional gene or protein in such spaces via blind search can be vastly more improbable than finding an arbitrary electron in the known physical universe.

Why the Multiverse Is Incoherent

Given needles this tiny in haystacks this large, blind search is effectively incapable of finding a needle in a haystack. Success, instead, requires a search that vastly increases the probability of finding the needle. But where does such a search come from? And in what sense does the needle exist apart from such a search? Without a search that renders finding the needle probable, the needle might just as well not exist. And indeed, we would in all probability not know that it exists except for a search that renders it probable. This, by the way, is why I regard the multiverse as incoherent: what renders the known physical universe knowable is that it is searchable. The multiverse, by contrast, is unsearchable. In a relational ontology that makes search as real as the objects searched, the multiverse is unreal.

These considerations are highly germane to evolutionary biology, which treats evolutionary search as a given, as something that does not call for explanation beyond the blind forces of nature. But insofar as evolutionary search renders aspects of a biological configuration space probabilistically accessible where previously, under blind search, they were probabilistically inaccessible, conservation of information says that evolutionary search achieves this increase in search performance at an informational cost. Accordingly, the evolutionary search, which improves on blind search, had to be found through a higher-order search (i.e., a search for a search, abbreviated S4S), which, when taken into account, does not make the evolutionary search any more effective at finding the target than the original blind search.

Given this background discussion and motivation, we are now in a position to give a reasonably precise formulation of conservation of information, namely: raising the probability of success of a search does nothing to make attaining the target easier, and may in fact make it more difficult, once the informational costs involved in raising the probability of success are taken into account. Search is costly, and the cost must be paid in terms of information. Searches achieve success not by creating information but by taking advantage of existing information. The information that leads to successful search admits no bargains, only apparent bargains that must be paid in full elsewhere.

For a "Made Simple" paper on conservation of information, this is about as much as I want to say regarding a precise statement of conservation of information. Bob Marks and I have proved several technical conservation of information theorems (see the publications page at www.evoinfo.org). Each of these looks at some particular mathematical model of search and shows how raising the probability of success of a search by a factor of q/p (> 1) incurs an information cost not less than log(q/p), or, equivalently, a probability cost of not more than p/q. If we therefore start with a search having probability of success p and then raise it to q, the actual probability of finding the target is not q but instead is less than or equal to q multiplied by p/q, or, therefore, less than or equal to p, which is just the original search difficulty. Accordingly, raising the probability of success of a search contributes nothing toward finding the target once the information cost of raising the probability is taken into account.

Conservation of information, however, is not just a theorem or family of theorems but also a general principle or law (recall Medawar's "Law of Conservation of Information"). Once enough such theorems have been proved and once their applicability to a wide range of search problems has been repeatedly demonstrated (the Evolutionary Informatics Lab has, for instance, shown how such widely touted evolutionary algorithms as AVIDA, ev, Tierra, and Dawkins's WEASEL all fail to create but instead merely redistribute information), conservation of information comes to be seen not as a narrow, isolated result but as a fundamental principle or law applicable to search in general. This is how we take conservation of information.

Instead of elaborating the underlying theoretical apparatus for conservation of information, which is solid and has appeared in a number of peer-reviewed articles in the engineering and mathematics literature (see the publications page at www.evoinfo.org -- it's worth noting that none of the critiques of this work has appeared in the peer-reviewed scientific or engineering literature; a few have appeared in the philosophy of science literature, such as the journal Biology and Philosophy, but most are Internet diatribes), I want next to illustrate conservation of information as it applies to one of the key examples touted by evolutionists as demonstrating the information-generating powers of evolutionary processes. Once I've done that, I want to consider what light conservation of information casts on evolution generally.

An Economist Is Stranded on an Island

To set the stage, consider an old joke about an economist and several other scientists who are stranded on an island and discover a can of beans. Hungry, they want to open it. Each looks to his area of expertise to open the can. The physicist calculates the trajectory of a projectile that would open the can. The chemist calculates the heat from a fire needed to burst the can. And so on. Each comes up with a concrete way to open the can given the resources on the island. Except the economist. The economist's method of opening the can is the joke's punch line: suppose a can opener. There is, of course, no can opener on the island.

The joke implies that economists are notorious for making assumptions to which they are unentitled. I don't know enough about economists to know whether this is true, but I do know that this is the case for many evolutionary biologists. The humor in the economist's proposed solution of merely positing a can opener, besides its jab at the field of economics, is the bizarre image of a can opener coming to the rescue of starving castaways without any warrant whatsoever for its existence. The economist would simply have the can opener magically materialize. The can opener is, essentially, a deus ex machina.

Interestingly, the field of evolutionary biology is filled with deus ex machinas (yes, I've taken Latin and know that this is not the proper plural of deus ex machina, which is dei ex machinis; but this is a "made simple" paper meant for the unwashed masses, of which I'm a card-carrying member). Only the evolutionary biologist is a bit more devious about employing, or should I say deploying, deus ex machinas than the economist. Imagine our economist counseling someone who's having difficulty repaying a juice loan to organized crime. In line with the advice he gave on the island, our economist friend might give the following counsel: suppose $10,000 in cash.

$10,000 might indeed pay the juice loan, but that supposition seems a bit crude. An evolutionary biologist, to make his advice appear more plausible, would add a layer of complexity to it: suppose a key to a safety deposit box with $10,000 cash inside it. Such a key is just as much a deus ex machina as the $10,000 in cash. But evolutionary biology has long since gained mastery in deploying such devices as well as gaining the right to call their deployment "science."

I wish I were merely being facetious, but there's more truth here than meets the eye. Consider Richard Dawkins's well-known METHINKS IT IS LIKE A WEASEL example (from his 1986 book The Blind Watchmaker), an example endlessly repeated and elaborated by biologists trying to make evolution seem plausible, the most notable recent rendition being by RNA-world researcher Michael Yarus in his 2010 book Life from an RNA World. (Yarus's target phrase, unlike Dawkins's, which is drawn from Shakespeare's Hamlet, is Theodosius Dobzhansky's famous dictum NOTHING IN BIOLOGY MAKES SENSE EXCEPT IN THE LIGHT OF EVOLUTION.)

A historian or literature person, confronted with METHINKS IT IS LIKE A WEASEL, would be within his rights to say, suppose that there was a writer named William Shakespeare who wrote it. And since the person and work of Shakespeare have been controverted (was he really a she? did he exist at all? etc.), this supposition is not without content and merit. Indeed, historians and literature people make such suppositions all the time, and doing so is part of what they get paid for. Are the Homeric poems the result principally of a single poet, Homer, or an elaboration by a tradition of bards? Did Moses write the Pentateuch or is it the composite of several textual traditions, as in the documentary hypothesis? Did Jesus really exist? (Dawkins and his fellow atheists seriously question whether Jesus was an actual figure of history; cf. the film The God Who Wasn't There).

For the target phrase METHINKS IT IS LIKE A WEASEL, Dawkins bypasses the Shakespeare hypothesis -- that would be too obvious and too intelligent-design friendly. Instead of positing Shakespeare, who would be an intelligence or designer responsible for the text in question (designers are a no-go in conventional evolutionary theory), Dawkins asks his readers to suppose an evolutionary algorithm that evolves the target phrase. But such an evolutionary algorithm privileges the target phrase by adapting the fitness landscape so that it assigns greater fitness to phrases that have more corresponding letters in common with the target.

And where did that fitness landscape come from? Such a landscape potentially exists for any phrase whatsoever, and not just for METHINKS IT IS LIKE A WEASEL. Dawkins's evolutionary algorithm could therefore have evolved in any direction, and the only reason it evolved to METHINKS IT IS LIKE A WEASEL is that he carefully selected the fitness landscape to give the desired result. Dawkins therefore got rid of Shakespeare as the author of METHINKS IT IS LIKE A WEASEL, only to reintroduce him as the (co)author of the fitness landscape that facilitates the evolution of METHINKS IT IS LIKE A WEASEL.

The bogusness of this example, with its sleight-of-hand misdirection, has been discussed ad nauseam by me and my colleagues in the ID community. We've spent so much time and ink on this example not because of its intrinsic merit, but because the evolutionary community itself remains so wedded to it and endlessly repeats its underlying fallacy in ever more convoluted guises (AVIDA, Tierra, ev, etc.). For a careful deconstruction of Dawkins's WEASEL, providing a precise simulation under user control, see the "Weasel Ware" project on the Evolutionary Informatics website: www.evoinfo.org/weasel.

How does conservation of information apply to this example? Straightforwardly. Obtaining METHINKS IT IS LIKE A WEASEL by blind search (e.g., by randomly throwing down Scrabble pieces in a line) is extremely improbable. So Dawkins proposes an evolutionary algorithm, his WEASEL program, to obtain this sequence with higher probability. Yes, this algorithm does a much better job, with much higher probability, of locating the target. But at what cost? At an even greater improbability cost than merely locating the target sequence by blind search.
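For readers who want to see the mechanics in running code, here is a minimal WEASEL-style sketch (the population size and mutation rate are my own choices, not Dawkins's published parameters). Note where the target phrase sits: inside the fitness function, which is to say, inside the fitness landscape itself.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(phrase):
    # The fitness landscape rewards letters matching the target --
    # the target itself is written into the landscape.
    return sum(a == b for a, b in zip(phrase, TARGET))

def mutate(phrase, rate=0.05):
    # Each letter has a small chance of being replaced at random.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in phrase)

random.seed(0)
parent = "".join(random.choice(ALPHABET) for _ in TARGET)  # random start
generations = 0
while parent != TARGET and generations < 10_000:
    # Breed 100 mutant offspring and keep the fittest.
    parent = max((mutate(parent) for _ in range(100)), key=fitness)
    generations += 1

print(parent)       # converges on METHINKS IT IS LIKE A WEASEL
print(generations)  # on the order of hundreds, not ~27**28 blind tries
```

The loop succeeds quickly precisely because the fitness function already "knows" the answer; strip TARGET out of fitness and the search has nothing to climb.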

Dawkins completely sidesteps this question of information cost. Forswearing any critical examination of the origin of the information that makes his simulation work, he attempts instead, by rhetorical tricks, simply to induce in his readers a stupefied wonder at the power of evolution: "Gee, isn't it amazing how powerful evolutionary processes are given that they can produce sentences like METHINKS IT IS LIKE A WEASEL, which ordinarily require human intelligence." But Dawkins is doing nothing more than advising our hapless borrower with the juice loan to suppose a key to a safety deposit box with the money needed to pay it off. Whence the key? Likewise, whence the fitness landscape that rendered the evolution of METHINKS IT IS LIKE A WEASEL probable? In terms of conservation of information, the necessary information was not internally created but merely smuggled in -- in this case, by Dawkins himself.

An Email Exchange with Richard Dawkins

Over a decade ago, I corresponded with Dawkins about his WEASEL computer simulation. In an email to me dated May 5, 2000, he responded to my criticism of the teleology hidden in that simulation. Note that he does not respond to the challenge of conservation of information directly, nor had I developed this idea with sufficient clarity at the time to use it in refutation. More on this shortly. Here's what he wrote, exactly as he wrote it:

The point about any phrase being equally eligible to be a target is covered on page 7 [of The Blind Watchmaker]: "Any old jumbled collection of parts is unique and, WITH HINDSIGHT, is as improbable as any other . . ." et seq.
More specifically, the point you make about the Weasel, is admitted, without fuss, on page 50: "Although the monkey/Shakespeare model is useful for explaining the distinction between single-step selection and cumulative selection, it is misleading in important ways. One of these is that, in each generation of selective 'breeding', the mutant 'progeny' phrases were judged according to the criterion of resemblance to a DISTANT IDEAL target ... Life isn't like that."

In real life of course, the criterion for optimisation is not an arbitrarily chosen distant target but SURVIVAL. It's as simple as that. This is non-arbitrary. See bottom of page 8 to top of page 9. And it's also a smooth gradient, not a sudden leap from a flat plain in the phase space. Or rather it must be a smooth gradient in all those cases where evolution has actually happened. Maybe there are theoretical optima which cannot be reached because the climb is too precipitous.

The Weasel model, like any model, was supposed to make one point only, not be a complete replica of the real thing. I invented it purely and simply to counter creationists who had naively assumed that the phase space was totally flat except for one vertical peak (what I later represented as the precipitous cliff of Mount Improbable). The Weasel model is good for refuting this point, but it is misleading if it is taken to be a complete model of Darwinism. That is exactly why I put in the bit on page 50.

Perhaps you should look at the work of Spiegelman and others on evolution of RNA molecules in an RNA replicase environment. They have found that, repeatedly, if you 'seed' such a solution with an RNA molecule, it will converge on a particular size and form of 'optimal' replicator, sometimes called Spiegelman's minivariant. Maynard Smith gives a good brief account of it in his The Problems of Biology (see Spiegelman in the index). Orgel extended the work, showing that different chemical environments select for different RNA molecules.

The theory is so beautiful, so powerful. Why are you people so wilfully blind to its simple elegance? Why do you hanker after "design" when surely you must see that it doesn't explain anything? Now THAT's what I call a regress. You are a fine one to talk about IMPORTING complexity. "Design" is the biggest import one could possibly imagine.

Dawkins's email raises a number of interesting questions that, in the years since, have received extensive discussion among the various parties debating intelligent design. The who-designed-the-designer regress, whether a designing intelligence must itself be complex in the same way that biological systems are complex, the conditions under which evolution is complexity-increasing vs. complexity-decreasing, the evolutionary significance of Spiegelman's minivariants, and how the geometry of the fitness landscape facilitates or undercuts evolution have all been treated at length in the design literature and won't be rehearsed here (for more on these questions, see my books No Free Lunch and The Design Revolution as well as Michael Behe's The Edge of Evolution).
"Just One Word: Plastics"

Where I want to focus is Dawkins's one-word answer to the charge that his WEASEL simulation incorporates an unwarranted teleology -- unwarranted by the Darwinian understanding of evolution for which his Blind Watchmaker is an apologetic. The key line in the above quote is, "In real life of course, the criterion for optimisation is not an arbitrarily chosen distant target but SURVIVAL." Survival is certainly a necessary condition for life to evolve. If you're not surviving, you're dead, and if you're dead, you're not evolving -- period. But to call "survival," writ large, a criterion for optimization is ludicrous. As I read this, I have images of Dustin Hoffman in The Graduate being taken aside at a party by an executive who is about to reveal the secret of success: PLASTICS (you can watch the clip by clicking here). Among the most simplistic one-word answers ever given, Dawkins's ranks right up there.

But perhaps I'm reading Dawkins uncharitably. Presumably, what he really means is differential survival and reproduction as governed by natural selection and random variation. Okay, I'm willing to buy that this is what he means. But even on this more charitable reading, his characterization of evolution is misleading and wrong. Ken Miller elaborates on this more charitable reading in his recent book Only a Theory. There he asks what's needed to drive the increase in biological information over the course of evolution. His answer? "Just three things: selection, replication, and mutation... Where the information 'comes from' is, in fact, from the selective process itself."

It's easy to see that Miller is blowing smoke even without the benefits of modern information theory. All that's required is to understand some straightforward logic, uncovered in Darwin's day, about the nature of scientific explanation in teasing apart possible causes. Indeed, biology's reception of Darwinism might have been far less favorable had scientists paid better attention to Darwin's contemporary John Stuart Mill. In 1843, sixteen years before the publication of Darwin's Origin of Species, Mill published the first edition of his System of Logic (which by the 1880s had gone through eight editions). In that work Mill lays out various methods of induction. The one that interests us here is his method of difference. In his System of Logic, Mill described this method as follows:

If an instance in which the phenomenon under investigation occurs, and an instance in which it does not occur, have every circumstance in common save one, that one occurring only in the former; the circumstance in which alone the two instances differ is the effect, or the cause, or an indispensable part of the cause, of the phenomenon.
Essentially, this method says that discovering which of a set of circumstances is responsible for an observed difference in outcomes requires finding a difference in the circumstances. An immediate corollary is that common circumstances cannot explain a difference in outcomes. Thus, if one person is sober and another drunk, and if both ate chips, salsa, and popcorn, this fact, common to both, does not, and indeed cannot, explain the difference. Rather, the difference is explained by one abstaining from alcohol and the other drinking too much. Mill's method of difference, so widely used in everyday life as well as in science, is crucially relevant to evolutionary biology. In fact, it helps bring some sense of proportion and reality to the inflated claims so frequently made on behalf of Darwinian processes.
Case in point: Miller's overselling of Darwinian evolution by claiming that "what's needed to drive" increases in biological information is "just three things: selection, replication, and mutation." Mill's method of difference gives the lie to Miller's claim. It's easy to write computer simulations that feature selection, replication, and mutation (or SURVIVAL writ large, or differential survival and reproduction, or any such reduction of evolution to Darwinian principles) -- and that go absolutely nowhere. Taken together, selection, replication, and mutation are not a magic bullet, and need not solve any interesting problems or produce any salient patterns. That said, evolutionary computation does get successfully employed in the field of optimization, so it is possible to write computer simulations that feature selection, replication, and mutation and that do go somewhere, solving interesting problems or producing salient patterns. But precisely because selection, replication, and mutation are common to all such simulations, they cannot, as Mill's method underscores, account for the difference.

One Boeing engineer used to call himself a "penalty-function artist." A penalty function is just another term for fitness landscape (though the numbers are reversed -- the higher the penalty, the lower the fitness). Coming up with the right penalty functions enabled this person to solve his engineering problems. Most such penalty functions, however, are completely useless. Moreover, all such functions operate within the context of an evolutionary computing environment that features Miller's triad of selection, replication, and mutation. So what makes the difference? It's that the engineer, with knowledge of the problem he's trying to solve, carefully adapts the penalty function to the problem and thereby raises the probability of successfully finding a solution. He's not just choosing his penalty functions willy-nilly. If he did, he wouldn't be working at Boeing. He's an artist, and his artistry (intelligent design) consists in being able to find the penalty functions that solve his problems.
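The point can be made concrete with a toy sketch of my own construction (not the Boeing engineer's actual work): one and the same selection-replication-mutation loop succeeds or fails depending entirely on which penalty function it is handed.

```python
import random

N = 30
TARGET = [1] * N  # the configuration the search is supposed to find

def evolve(penalty, steps=3000):
    """One fixed mutate-and-select loop, parameterized only by the
    penalty function (i.e., the fitness landscape)."""
    x = [random.randint(0, 1) for _ in range(N)]
    for _ in range(steps):
        y = x[:]
        y[random.randrange(N)] ^= 1      # mutate: flip one random bit
        if penalty(y) <= penalty(x):     # select: keep the less penalized
            x = y
    return x

# A smooth, informative penalty: Hamming distance to the target.
smooth = lambda x: sum(a != b for a, b in zip(x, TARGET))
# A flat, uninformative penalty: a needle in a haystack.
needle = lambda x: 0 if x == TARGET else 1

random.seed(1)
print(evolve(smooth) == TARGET)  # True: the landscape carries the answer
print(evolve(needle) == TARGET)  # False: same loop, no gradient, no luck
```

Selection, replication, and mutation are common to both runs; only the penalty function differs, and, per Mill's method, that difference is what explains the difference in outcomes.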

I've corresponded with both Miller and Dawkins since 2000. Miller and I have sparred on a number of occasions in public debate (as recently as June 2012, click here). Dawkins refuses all such encounters. Regardless, we are familiar with each other's work, and yet I've never been able to get from either of them a simple admission that the logic in Mill's method of difference is valid and that it applies to evolutionary theory, leaving biology's information problem unresolved even after the Darwinian axioms of selection, replication, and variation are invoked.

John Stuart Mill's Inconvenient Truth

Instead, Miller remains an orthodox Darwinist, and Dawkins goes even further, embracing a universal Darwinism that sees Darwinian evolution as the only conceivable scientific explanation of life's diversification in natural history. As he wrote in The Blind Watchmaker and continues to believe:

My argument will be that Darwinism is the only known theory that is in principle capable of explaining certain aspects of life. If I am right it means that, even if there were no actual evidence in favor of the Darwinian theory (there is, of course) we should still be justified in preferring it over all rival theories.
Mill's method of difference is an inconvenient truth for Dawkins and Miller, but it's a truth that must be faced. For his willingness to face this truth, I respect Stuart Kauffman infinitely more than either Miller or Dawkins. Miller and Dawkins are avid Darwinists committed to keeping the world safe for their patron saint. Kauffman is a free spirit, willing to admit problems where they arise. Kauffman at least sees that there is a problem in claiming that the Darwinian mechanism can generate biological information, even if his own self-organizational approach is far from resolving it. As Kauffman writes in Investigations:
If mutation, recombination, and selection only work well on certain kinds of fitness landscapes, yet most organisms are sexual, and hence use recombination, and all organisms use mutation as a search mechanism, where did these well-wrought fitness landscapes come from, such that evolution manages to produce the fancy stuff around us?
According to Kauffman, "No one knows."
Kauffman's observation here is entirely in keeping with conservation of information. Indeed, he offers this observation in the context of discussing the No Free Lunch theorems, of which conservation of information is a logical extension. The fitness landscape supplies the evolutionary process with information. Only finely tuned fitness landscapes that are sufficiently smooth, don't strand the search on isolated local optima, and, above all, reward ever-increasing complexity in biological structure and function are suitable for driving a full-fledged evolutionary process. So where do such fitness landscapes come from? Absent an extrinsic intelligence, the only answer would seem to be the environment.

Just as I have heard SURVIVAL as a one-word resolution to the problem of generating biological information, so also have I heard ENVIRONMENT. Ernan McMullin, for instance, made this very point to me over dinner at the University of Chicago in 1999, intoning this word ("environment") as though it were the solution to all that ails evolution. Okay, so the environment supplies the information needed to drive biological evolution. But where did the environment get that information? From itself? The problem with such an answer is this: conservation of information entails that, without added information, biology's information problem remains constant (breaks even) or intensifies (gets worse) the further back in time we trace it.

The whole magic of evolution is that it's supposed to explain subsequent complexity in terms of prior simplicity, but conservation of information says that there never was a prior state of primordial simplicity -- the information, absent external input, had to be there from the start. It is no feat of evolutionary theorizing to explain how cavefish lost the use of their eyes after long periods of being deprived of light. Functioning eyes turning into functionless eye nubs is a devolution from complexity to simplicity. As a case of use-it-or-lose-it, it does not call for explanation. Evolution wins plaudits for purporting to explain how things like eyes that see can evolve in the first place from prior simpler structures that cannot see.

If the evolutionary process could indeed create such biological information, then evolution from simplicity to complexity would be unproblematic. But the evolutionary process as conceived by Darwin and promulgated by his successors is non-teleological. Accordingly, it cannot employ the activity of intelligence in any guise to increase biological information. But without intelligent input, conservation of information implies that as we regress biological information back in time, the amount of information to be accounted for never diminishes and may actually increase.

Explaining Walmart's Success by Invoking Interstate Highways

Given conservation of information and the absence of intelligent input, biological information with the complexity we see now must have always been present in the universe in some form or fashion, going back even as far as the Big Bang. But where in the Big Bang, with a heat and density that rule out any life form in the early history of the universe, is the information for life's subsequent emergence and development on planet Earth? Conservation of information says this information has to be there, in embryonic form, at the Big Bang and at every moment thereafter. So where is it? How is it represented? In the environment, you say? Invoking the environment as evolution's information source is empty talk, on the order of invoking the interstate highway system as the reason for Walmart's business success. There is some connection, to be sure, but neither provides real insight or explanation.

To see more clearly what's at stake here, imagine Scrabble pieces arranged in sequence to spell out meaningful sentences (such as METHINKS IT IS LIKE A WEASEL). Suppose a machine with suitable sensors, movable arms, and grips takes the Scrabble pieces out of a box and arranges them in this way. To say that the environment has arranged the Scrabble pieces to spell out meaningful sentences is, in this case, hardly illuminating. Yes, broadly speaking, the environment is arranging the pieces into meaningful sentences. But, more precisely, a robotic machine, presumably running a program with meaningful sentences suitably coded, is doing the arranging.

Merely invoking the environment, without further amplification, therefore explains nothing about the arrangement of Scrabble pieces into meaningful sentences. What exactly is it about the environment that accounts for the information conveyed in those arrangements of Scrabble pieces? And what about the environment accounts for the information conveyed in the organization of biological systems? That's the question that needs to be answered. Without an answer to this question, appeals to the environment are empty and merely cloak our ignorance of the true sources of biological information.

With a machine that arranges Scrabble pieces, we can try to get inside it and see what it does ("Oh, there's the code that spells out METHINKS IT IS LIKE A WEASEL"). With the actual environment for biological evolution, we can't, as it were, get under the hood of the car. We see natural forces such as wind, waves, erosion, lightning, Brownian motion, attraction, repulsion, bonding affinities and the like. And we see slippery slopes on which one organism thrives and another founders. If such an environment were arranging Scrabble pieces in sequence, we would observe the pieces blown by wind or jostled by waves or levitated by magnets. And if, at the end of the day, we found Scrabble pieces spelling out coherent English sentences, such as METHINKS IT IS LIKE A WEASEL, we would be within our rights to infer that an intelligence had in some way co-opted the environment and inserted information, even though we have no clue how.

Such a role for the environment, as an inscrutable purveyor of information, is, however, unacceptable to mainstream evolutionary theorists. In their view, the way the environment inputs information into biological systems over the course of evolution is eminently scrutable. It happens, so they say, by a gradual accumulation of information as natural selection locks in on small advantages, each of which can arise by chance without intelligent input. But what's the evidence here?

This brings us back to the knock-out experiments that Ken Miller has repeatedly put forward to refute intelligent design, in which a structure responsible for a function has been disabled and then, through selection pressure, it, or something close to it capable of the lost function, gets recovered. In all his examples, there is no extensive multi-step sequence of structural changes, each of which leads to a distinct functional advantage. Usually, it's just a single nucleotide base or amino acid change that's needed to recover function.

This is true even with the evolution of nylonase, mentioned earlier. Nylonase is not the result of an entirely new DNA sequence coding for that enzyme. Rather, it resulted from a frameshift in existing DNA, shifting over some genetic letters and thus producing the gene for nylonase. The origin of nylonase is thus akin to changing the meaning of "therapist" by inserting a space and getting "the rapist." For the details about the evolution of nylonase, see a piece I did in response to Miller at Uncommon Descent (click here).

The Two-Pronged Challenge of Intelligent Design

Intelligent design has always mounted a two-pronged challenge to conventional evolutionary theory. On the one hand, design proponents have challenged common ancestry. Discontinuities in the fossil record and in supposed molecular phylogenies have, for many of us (Michael Behe has tended to be the exception), made common ancestry seem far from compelling. Our reluctance here is not an allergic reaction but simply a question of evidence -- many of us in the ID community see the evidence for common ancestry as weak, especially when one leaves the lower taxonomic groupings and moves to the level of orders, classes, and, above all, phyla (as with the Cambrian explosion, in which all the major animal phyla appear suddenly, lacking evident precursors in the Precambrian rocks). And indeed, if common ancestry fails, so does conventional evolutionary theory.

On the other hand, design proponents have argued that even if common ancestry holds, the evidence of intelligence in biology is compelling. Conservation of information is part of that second-prong challenge to evolution. Evolutionary theorists like Miller and Dawkins think that if they can break down the problem of evolving a complex biological system into a sequence of baby-steps, each of which is manageable by blind search (e.g., point mutations of DNA) and each of which confers a functional advantage, then the evidence of design vanishes. But it doesn't. Regardless of the evolutionary story told, conservation of information shows that the information in the final product had to be there from the start.

It would actually be quite a remarkable property of nature if fitness across biological configuration space were so distributed that advantages could be cumulated gradually by a Darwinian process. Frankly, I don't see the evidence for this. The examples that Miller cites show some small increases in information associated with recovering and enhancing a single biological function but hardly the massive ratcheting up of information in which structures and functions co-evolve and lead to striking instances of biological invention. The usual response to my skepticism is, Give evolution more time. I'm happy to do that, but even if time allows evolution to proceed much more impressively, the challenge that conservation of information puts to evolution remains.

In the field of technological (as opposed to biological) evolution, revolutionary new inventions never result by gradual tinkering with existing technologies. Existing technologies may, to be sure, be co-opted for use in a revolutionary technology. Thus, when Alexander Graham Bell invented the telephone, he used existing technologies such as wires, electrical circuits, and diaphragms. But these were put together and adapted for a novel, and at the time unprecedented, use.

But what if technological evolution proceeded in the same way that, as we are told, biological evolution proceeds, with inventions useful to humans all being accessible by gradual tinkering from one or a few primordial inventions? One consequence would be that tinkerers who knew nothing about the way things worked but simply understood what it was to benefit from a function could become inventors on the order of Bell and Edison. More significantly, such a state of affairs would also indicate something very special about the nature of human invention, namely, that it was distributed continuously across technological configuration space. This would be remarkable. Of course, we don't actually see this. Instead, we see sharply disconnected islands of invention inaccessible to one another by mere gradual tinkering. But if such islands were all connected (by long and narrow isthmuses of function), it would suggest a deeper design of technological configuration space for the facilitation of human invention.

The same would be true of biological invention. If biological evolution proceeds by a gradual accrual of functional advantages, instead of finding itself deadlocked on isolated islands of function surrounded by vast seas of non-function, then the fitness landscape over biological configuration space has to be very special indeed (recall Stuart Kauffman's comments to that effect earlier in this piece). Conservation of information goes further and says that any information we see coming out of the evolutionary process was already there in this fitness landscape or in some other aspect of the environment or was inserted by an intervening intelligence. What conservation of information guarantees did not happen is that the evolutionary process created this information from scratch.

Some years back I had an interesting exchange with Simon Conway Morris about the place of teleology in evolution. According to him, the information that guides the evolutionary process is embedded in nature and is not reducible to the Darwinian mechanism of selection, replication, and mutation. He stated this forthrightly in an email to me dated February 20, 2003, anticipating his then forthcoming book Life's Solution. I quote this email rather than the book because it clarifies his position better than anything that I've read from him subsequently. Here's the quote from his email:

As it happens, I am not sure we are so far apart, at least in some respects. Both of us, I imagine, accept that we are part of God's good Creation, and that despite its diversity, by no means all things are possible. In my forthcoming book Life's Solution (CUP) I argue that hard-wired into the universe are such biological properties of intelligence. This implies a "navigation" by evolution across immense "hyperspaces" of biological alternatives, nearly all of which are maladaptive [N.B. -- this means the adaptive hyperspaces form a very low-probability target!]. These thin roads (or "worm-holes") of evolution define a deeper biological structure, the principal evidence for which is convergence (my old story). History and platonic archetypes, if you like, meet. That does seem to me to be importantly distinct from ID: my view of Creation is not only very rich (self-evidently), but has an underlying structure that allows evolution to act. Natural selection, after all, is only a mechanism; what we surely agree about is the nature of the end-products, even if we disagree as to how they came about. Clearly my view is consistent with a Christian world picture, but can never be taken as proof.

There's not much I disagree with here. My one beef with Conway Morris is that he's too hesitant about finding evidence (what he calls "proof") for teleology in the evolutionary process. I critique this hesitancy in my review of Life's Solution for Books & Culture, a review that came out the year after this email (click here for the review). Conway Morris's fault is that he does not follow his position through to its logical conclusion. He prefers to critique conventional evolutionary theory, with its tacit materialism, from the vantage of theology and metaphysics. Convergence points to a highly constrained evolutionary process that's consistent with divine design. Okay, but there's more.

If evolution is so tightly constrained and the Darwinian mechanism of natural selection is just that, a mechanism, albeit one that "navigates immense hyperspaces of biological alternatives" by confining itself to "thin roads of evolution defining a deeper biological structure," then, in the language of conservation of information, the conditions that allow evolution to act effectively in producing the complexity and diversity of life are but a tiny subset, and therefore a small-probability target, among all the conditions under which evolution might act. And how did nature find just those conditions? Nature has, in that case, embedded in it not just a generic evolutionary process employing selection, replication, and mutation, but one that is precisely tuned to produce the exquisite adaptations, or, dare I say, designs, that pervade biology.

Where Conway Morris merely finds consistency with his Christian worldview (tempered by a merger of Darwin and Plotinus), conservation of information shows that the evolutionary process has embedded in it rich sources of information that a thoroughgoing materialism cannot justify and has no right to expect. The best such a materialism can do is count it a happy accident that evolution acts effectively, producing ever increasing biological complexity and diversity, when most ways it might act would be ineffective, producing no life at all or ecosystems that are boring (a disproportion mirrored in the evolutionary computing literature, where most fitness landscapes are maladaptive).

The Lesson of Conservation of Information

The improbabilities associated with rendering evolution effective are therefore no more tractable than the improbabilities that face an evolutionary process dependent purely on blind search. This is the relevance of conservation of information for evolution: it shows that the vast improbabilities that evolution is supposed to mitigate in fact never do get mitigated. Yes, you can reach the top of Mount Improbable, but the tools that enable you to find a gradual ascent up the mountain are as improbably acquired as simply scaling it in one fell swoop. This is the lesson of conservation of information.
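The bookkeeping behind this lesson can be made concrete with a toy search experiment (my own illustration, not from the essay, with all names hypothetical): a hill climber finds a 20-bit target string reliably when handed a fitness landscape already tuned to that target, and essentially never under an arbitrary landscape. Blind search would need on the order of 2^20 tries; the tuned landscape collapses that to a few dozen steps, but the tuned landscape is itself one choice among a vast space of possible fitness assignments.

```python
import random

N = 20                       # bitstring length; a blind guess hits the target with probability 2**-N
TARGET = tuple(random.choice((0, 1)) for _ in range(N))

def hill_climb(fitness, steps=1000):
    """Greedy search: flip one random bit, keep it if fitness doesn't drop.
    Returns True if TARGET is reached within `steps` flips."""
    s = tuple(random.choice((0, 1)) for _ in range(N))
    for _ in range(steps):
        if s == TARGET:
            return True
        i = random.randrange(N)
        t = s[:i] + (1 - s[i],) + s[i + 1:]
        if fitness(t) >= fitness(s):
            s = t
    return s == TARGET

# A landscape "tuned" to the target: a smooth gradient (negative Hamming distance).
tuned = lambda s: -sum(a != b for a, b in zip(s, TARGET))

# An arbitrary landscape: each string gets a random fitness unrelated to the target.
table = {}
arbitrary = lambda s: table.setdefault(s, random.random())

trials = 200
print("tuned landscape:    ", sum(hill_climb(tuned) for _ in range(trials)) / trials)
print("arbitrary landscape:", sum(hill_climb(arbitrary) for _ in range(trials)) / trials)
```

On the tuned landscape the success rate is close to 1.0; on the arbitrary landscape it is close to 0.0. The information that makes the search succeed sits in the choice of fitness function, not in the search mechanism, which is the redistribution (rather than creation) of information that the paragraph describes.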

One final question remains, namely, what is the source of information in nature that allows targets to be successfully searched? If blind material forces can only redistribute existing information, then where does the information that allows for successful search, whether in biological evolution or in evolutionary computing or in cosmological fine-tuning or wherever, come from in the first place? The answer will by now be obvious: from intelligence. On materialist principles, intelligence is not real but an epiphenomenon of underlying material processes. But if intelligence is real and has real causal powers, it can do more than merely redistribute information -- it can also create it.

Indeed, that is the defining property of intelligence, its ability to create information, especially information that finds needles in haystacks. This fact should be more obvious and convincing to us than any fact of the natural sciences since (1) we ourselves are intelligent beings who create information all the time through our thoughts and language and (2) the natural sciences themselves are logically downstream from our ability to create information (if we were not information creators, we could not formulate our scientific theories, much less search for those that are empirically adequate, and there would be no science). Materialist philosophy, however, has this backwards, making a materialist science primary and then defining our intelligence out of existence because materialism leaves no room for it. The saner course would be to leave no room for materialism.

I close with a quote from Descartes, who, his substance dualism notwithstanding, rightly understood that intelligence could never be reduced to brute blind matter acting mechanistically. The quote is from his Discourse on Method. As you read it, bear in mind that for the materialist, everything is a machine, be it themselves, the evolutionary process, or the universe taken as a whole. Everything, for the materialist, is just brute blind matter acting mechanistically. Bear in mind, too, that conservation of information shows this materialist vision to be fundamentally incomplete, unable to account for the information that animates nature. Here is the quote:

Although machines can perform certain things as well as or perhaps better than any of us can do, they infallibly fall short in others, by which means we may discover that they did not act from knowledge, but only from the disposition of their organs. For while reason is a universal instrument which can serve for all contingencies, these organs have need of some special adaptation for every particular action. From this it follows that it is morally impossible that there should be sufficient diversity in any machine to allow it to act in all the events of life in the same way as our reason causes us to act.