Conservation of Information Made Simple
William A. Dembski August 28, 2012 3:59 PM
In the 1970s, Doubleday published a series of books with the title "Made Simple." This series covered a variety of academic topics (Statistics Made Simple, Philosophy Made Simple, etc.). The 1980s saw the "For Dummies" series, which expanded the range of topics to include practical matters such as auto repair. The "For Dummies" series has since been replicated, notably by guides for "Complete Idiots." All books in these series attempt, with varying degrees of success, to break down complex subjects, helping students to learn a topic, especially when they've been stymied by more conventional approaches and textbooks.
In this article, I'm going to follow the example of these books, laying out as simply and clearly as I can what conservation of information is and why it poses a challenge to conventional evolutionary thinking. I'll break this concept down so that it seems natural and straightforward. Right now, it's too easy for critics of intelligent design to say, "Oh, that conservation of information stuff is just mumbo-jumbo. It's part of the ID agenda to make a gullible public think there's some science backing ID when it's really all smoke and mirrors." Conservation of information is not a difficult concept and once it is understood, it becomes clear that evolutionary processes cannot create the information required to power biological evolution.
Conservation of Information: A Brief History
Conservation of information is a term with a short history. Biologist Peter Medawar used it in the 1980s to refer to mathematical and computational systems that are limited to producing logical consequences from a given set of axioms or starting points, and thus can create no novel information (everything in the consequences is already implicit in the starting points). His use of the term is the first that I know of, though the idea he captured with it is much older. Note that he called it the "Law of Conservation of Information" (see his The Limits of Science, 1984).
Computer scientist Tom English, in a 1996 paper, also used the term conservation of information, treating it as synonymous with the No Free Lunch (NFL) results that Wolpert and Macready had then recently proved. In English's version of NFL, "the information an optimizer gains about unobserved values is ultimately due to its prior information of value distributions." As with Medawar's form of conservation of information, information for English is not created from scratch but rather redistributed from existing sources.
Conservation of information, as the idea is being developed and gaining currency in the intelligent design community, is principally the work of Bob Marks and myself, along with several of Bob's students at Baylor (see the publications page at www.evoinfo.org). Conservation of information, as we use the term, applies to search. Now search may seem like a fairly restricted topic. Unlike conservation of energy, which applies at all scales and dimensions of the universe, conservation of information, in focusing on search, may seem to have only limited physical significance. But in fact, conservation of information is deeply embedded in the fabric of nature, and the term does not overstate its importance.
Search is a very general phenomenon. The reason we don't typically think of search in broad terms applicable to nature generally is that we tend to think of it narrowly in terms of finding a particular predefined object. Thus our stock example of search is losing one's keys, with search then being the attempt to recover them. But we can also search for things that are not pre-given in this way. Sixteenth-century explorers were looking for new, uncharted lands. They knew when they found them that their search had been successful, but they didn't know exactly what they were looking for. U2 has a song titled "I Still Haven't Found What I'm Looking For." How will Bono know once he's found what he's looking for? Often we know that we've found it even though it's nothing like what we expected; sometimes it even violates our expectations.
Another problem with extending search to nature in general is that we tend to think of search as confined to human contexts. Humans search for keys, and humans search for uncharted lands. But, as it turns out, nature is also quite capable of search. Go to Google and search on the term "evolutionary search," and you'll get quite a few hits. Evolution, according to some theoretical biologists, such as Stuart Kauffman, may properly be conceived as a search (see his book Investigations). Kauffman is not an ID guy, so there's no human or human-like intelligence behind evolutionary search as far as he's concerned. Nonetheless, for Kauffman, nature, in powering the evolutionary process, is engaged in a search through biological configuration space, searching for and finding ever-increasing orders of biological complexity and diversity.
An Age of Search
Evolutionary search is not confined to biology but also takes place inside computers. The field of evolutionary computing (which includes genetic algorithms) falls broadly under that area of mathematics known as operations research, whose principal focus is mathematical optimization. Mathematical optimization is about finding solutions to problems where the solutions admit varying and measurable degrees of goodness (optimality). Evolutionary computing fits this mold, seeking items in a search space that achieve a certain level of fitness. These are the optimal solutions. (By the way, the irony of doing a Google "search" on the target phrase "evolutionary search," described in the previous paragraph, did not escape me. Google's entire business is predicated on performing optimal searches, where optimality is gauged in terms of the link structure of the web. We live in an age of search!)
If the possibilities connected with search now seem greater to you than they have in the past, extending beyond humans to computers and biology in general, they may still seem limited in that physics appears to know nothing of search. But is this true? The physical world is life-permitting -- its structure and laws allow (though they are far from necessitating) the existence of not just cellular life but also intelligent multicellular life. For the physical world to be life-permitting in this way, its laws and fundamental constants need to be configured in very precise ways. Moreover, it seems far from mandatory that those laws and constants had to take the precise form that they do. The universe itself, therefore, can be viewed as the solution to the problem of making life possible. But problem solving itself is a form of search, namely, finding the solution (among a range of candidates) to the problem.
Still, for many scientists, search fits uneasily in the natural sciences. Something unavoidably subjective and teleological seems involved in search. Search always involves a goal or objective, as well as criteria of success and failure (as judged by what or whom?) depending on whether and to what degree the objective has been met. Where does that objective, typically known as a target, come from other than from the minds of human inquirers? Are we, as pattern-seeking and pattern-inventing animals, simply imposing these targets/patterns on nature even though they have no independent, objective status?
This concern has merit, but it need not be overblown. If we don't presuppose a materialist metaphysics that makes mind, intelligence, and agency an emergent property of suitably organized matter, then it is an open question whether search and the teleology inherent in it are mere human constructions on the one hand, or, instead, realities embedded in nature on the other. What if nature is itself the product of mind and the patterns it exhibits reflect solutions to search problems formulated by such a mind?
Scientific inquiry that's free of prejudice and narrowly held metaphysical assumptions should, it seems, leave open both these possibilities. After all, the patterns we're talking about are not like finding a vague likeness of Santa Claus's beard in a cloud formation. Who, if they look hard enough, won't see Santa's beard? The fine-tuning of nature's laws and constants that permits life to exist at all is not like this. It is a remarkable pattern and may properly be regarded as the solution to a search problem as well as a fundamental feature of nature, or what philosophers would call a natural kind, and not merely a human construct. Whether an intelligence is responsible for the success of this search is a separate question. The standard materialist line in response to such cosmological fine-tuning is to invoke multiple universes and view the success of this search as a selection effect: most searches ended without a life-permitting universe, but we happened to get lucky and live in a universe hospitable to life.
In any case, it's possible to characterize search in a way that leaves the role of teleology and intelligence open without either presupposing them or deciding against them in advance. Mathematically speaking, search always occurs against a backdrop of possibilities (the search space), with the search being for a subset within this backdrop of possibilities (known as the target). Success and failure of search are then characterized in terms of a probability distribution over this backdrop of possibilities, the probability of success increasing to the degree that the probability of locating the target increases.
For example, consider all possible sequences of 100 L-amino acids joined by peptide bonds. This we can take as our reference class or backdrop of possibilities -- our search space. Within this class, consider those sequences that fold and thus might form a functioning protein. This, let us say, is the target. This target is not merely a human construct. Nature itself has identified this target as a precondition for life -- no living thing that we know can exist without proteins. Moreover, this target admits some probabilistic estimates. Beginning with the work of Robert Sauer, cassette mutagenesis and other experiments of this sort performed over the last three decades suggest that the target has probability no more than 1 in 10^60 (assuming a uniform probability distribution over all amino acid sequences in the reference class).
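In symbols, the setup just described reads as follows (a sketch; the 20^100 figure assumes the 20 standard amino acids at each of the 100 positions, and the 10^-60 bound is the experimental estimate just cited):

$$
|\Omega| = 20^{100} \approx 1.27 \times 10^{130},
\qquad
P(T) \;=\; \frac{|T|}{|\Omega|} \;\le\; \frac{1}{10^{60}},
$$

where Ω is the search space of sequences, T the target of foldable sequences, and P the uniform probability distribution over Ω.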
The mathematics characterizing search in this way is straightforward and general. Whether in specific situations a search so characterized also involves unavoidably subjective human elements or reflects objectively given realities embedded in nature can be argued independently of the mathematics. Such an argument speaks to the interpretation of the search, not to the search itself, and it parallels controversies surrounding the interpretation of quantum mechanics: whether quantum mechanics is inherently a mind-based, observer-dependent theory; whether it can be developed independently of observers; whether it is properly construed as reflecting a deterministic, mind-independent multiverse; etc. Quantum mechanics itself is a single, well-defined theory that admits several formulations, all of which are mathematically equivalent. Likewise, search as described here has a single, straightforward theoretical underpinning.
An Easter Egg Hunt, from the Scientific Vantage
One clarification is worth inserting here while we're still setting the stage for conservation of information. For most people, when it comes to search, the important thing is the outcome of the search. Take an Easter egg hunt. The children looking for Easter eggs are concerned with whether they find the eggs. From the scientific vantage, however, the important thing about search is not the particular outcomes but the probability distribution over the full range of possible outcomes in the search space (this parallels communication theory, in which what's of interest is not particular messages sent across a communication channel but the range of possible messages and their probability distribution). The problem with just looking at outcomes is that a search might get lucky and find the target even if the probabilities are against it.
Take an Easter egg hunt in which there's just one egg carefully hidden somewhere in a vast area. This is the target and blind search is highly unlikely to find it precisely because the search space is so vast. But there's still a positive probability of finding the egg even with blind search, and if the egg is discovered, then that's just how it is. It may be, because the egg's discovery is so improbable, that we might question whether the search was truly blind and therefore reject this (null) hypothesis. Maybe it was a guided search in which someone, with knowledge of the egg's whereabouts, told the seeker "warm, warmer, no colder, warmer, warmer, hot, hotter, you're burning up." Such guidance gives the seeker added information that, if the information is accurate, will help locate the egg with much higher probability than mere blind search -- this added information changes the probability distribution.
But again, the important issue, from a scientific vantage, is not how the search ended but the probability distribution under which the search was conducted. You don't have to be a scientist to appreciate this point. Suppose you've got a serious medical condition that requires treatment. Let's say there are two treatment options. Which option will you go with? Leaving cost and discomfort aside, you'll want the treatment with the better chance of success. This is the more effective treatment. Now, in particular circumstances, it may happen that the less effective treatment leads to a good outcome and the more effective treatment leads to a bad outcome. But that's after the fact. In deciding which treatment to take, you'll be a good scientist and go with the one that has the higher probability of success.
The Easter egg hunt example provides a little preview of conservation of information. Blind search, if the search space is too large and the number of Easter eggs is too small, is highly unlikely to successfully locate the eggs. A guided search, in which the seeker is given feedback about his search by being told when he's closer or farther from the egg, by contrast, promises to dramatically raise the probability of success of the search. The seeker is being given vital information bearing on the success of the search. But where did this information that gauges proximity of seeker to egg come from? Conservation of information claims that this information is itself as difficult to find as locating the egg by blind search, implying that the guided search is no better at finding the eggs than blind search once this information is accounted for.
Conservation of Information in Evolutionary Biology
In the sequel, I will focus mainly on conservation of information as it applies to search in evolutionary biology (and by extension in evolutionary computing), trusting that once the case for conservation of information is made in biology, its scope and applicability for the rest of the natural sciences will be that much more readily accepted and acceptable. As it is, evolutionary biologists possessing the mathematical tools to understand search are typically happy to characterize evolution as a form of search. And even those with minimal knowledge of the relevant mathematics fall into this way of thinking.
Take Brown University's Kenneth Miller, a cell biologist whose command of the relevant mathematics is unknown to me. Miller, in attempting to refute ID, regularly describes examples of experiments in which some biological structure is knocked out along with its function, and then, under selection pressure, a replacement structure is evolved that recovers the function. What makes these experiments significant for Miller is that they are readily replicable, which means that the same systems with the same knockouts will undergo the same recovery under the same suitable selection regime. In our characterization of search, we would say the search for structures that recover function in these knockout experiments achieves success with high probability.
Suppose, to be a bit more concrete, we imagine a bacterium capable of producing a particular enzyme that allows it to live off a given food source. Next, we disable that enzyme, not by removing it entirely but by, say, changing a DNA base in the coding region for this protein, thus changing an amino acid in the enzyme and thereby drastically lowering its catalytic activity in processing the food source. Granted, this example is a bit stylized, but it captures the type of experiment Miller regularly cites.
So, taking these modified bacteria, the experimenter now subjects them to a selection regime that starts them off on a food source for which they don't need the enzyme that's been disabled. But, over time, they get more and more of the food source for which the enzyme is required and less and less of other food sources for which they don't need it. Under such a selection regime, the bacterium must either evolve the capability of processing the food for which previously it needed the enzyme, presumably by mutating the damaged DNA that originally coded for the enzyme and thereby recovering the enzyme, or starve and die.
So where's the problem for evolution in all this? Granted, the selection regime here is a case of artificial selection -- the experimenter is carefully controlling the bacterial environment, deciding which bacteria get to live or die. But nature seems quite capable of doing something similar. Nylon, for instance, is a synthetic product invented by humans in 1935, and thus was absent from bacteria for most of their history. And yet, bacteria have evolved the ability to digest nylon by developing the enzyme nylonase. Yes, these bacteria are gaining new information, but they are gaining it from their environments, environments that, presumably, need not be subject to intelligent guidance. No experimenter, applying artificial selection, for instance, set out to produce nylonase.
To see that there remains a problem for evolution in all this, we need to look more closely at the connection between search and information and how these concepts figure into a precise formulation of conservation of information. Once we have done this, we'll return to the Miller-type examples of evolution to see why evolutionary processes do not, and indeed cannot, create the information needed by biological systems. Most biological configuration spaces are so large and the targets they present are so small that blind search (which ultimately, on materialist principles, reduces to the jostling of life's molecular constituents through forces of attraction and repulsion) is highly unlikely to succeed. As a consequence, some alternative search is required if the target is to stand a reasonable chance of being located. Evolutionary processes driven by natural selection constitute such an alternative search. Yes, they do a much better job than blind search. But at a cost -- an informational cost, a cost these processes have to pay but which they are incapable of earning on their own.
In the information-theory literature, information is usually characterized as the negative logarithm to the base two of a probability (or some logarithmic average of probabilities, often referred to as entropy). This has the effect of transforming probabilities into bits and of allowing them to be added (like money) rather than multiplied (like probabilities). Thus, a probability of one-eighth, which corresponds to tossing three heads in a row with a fair coin, corresponds to three bits, which is the negative logarithm to the base two of one-eighth. Such a logarithmic transformation of probabilities is useful in communication theory, where what gets moved across communication channels is bits rather than probabilities and the drain on bandwidth is determined additively in terms of number of bits. Yet, for the purposes of this "Made Simple" paper, we can characterize information, as it relates to search, solely in terms of probabilities, also cashing out conservation of information purely probabilistically.
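As a quick illustration of this probabilities-to-bits transformation, here is a minimal Python sketch (nothing in it is specific to conservation of information; it is just the standard logarithmic conversion described above):

```python
import math

def bits(p: float) -> float:
    """Information, in bits, of an event with probability p."""
    return -math.log2(p)

print(bits(1/8))              # 3.0 -- three heads in a row with a fair coin
# Probabilities multiply; the corresponding bits add (like money):
print(bits(1/8) + bits(1/4))  # 5.0
print(bits(1/8 * 1/4))        # 5.0 -- same answer either way
```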
Probabilities, treated as information used to facilitate search, can be thought of in financial terms as a cost -- an information cost. Think of it this way. Suppose there's some event you want to have happen. If it's certain to happen (i.e., has probability 1), then you own that event -- it costs you nothing to make it happen. But suppose instead its probability of occurring is less than 1, let's say some probability p. This probability then measures a cost to you of making the event happen. The more improbable the event (i.e., the smaller p), the greater the cost. Sometimes you can't increase the probability of making the event occur all the way to 1, which would make it certain. Instead, you may have to settle for increasing the probability to q, where q is less than 1 but greater than p. That increase, however, must also be paid for. And in fact, we do pay to raise probabilities all the time. For instance, many students pay tuition costs to obtain a degree that will improve their prospects (i.e., probabilities) of landing a good, high-paying job.
A Fair Lottery
To illustrate this point more precisely, imagine that you are playing a lottery. Let's say it's fair, so that the government doesn't skim anything off the top (i.e., everything paid into the lottery gets paid out to the winner) and one ticket is sure to be the winner. Let's say a million lottery tickets have been purchased so far at one dollar apiece, exactly one of which is yours. Each lottery ticket therefore has the same probability of winning, so your lottery ticket has a one in a million chance of coming out on top (which is your present p value), entailing a loss of one dollar if you lose and a gain of nearly a million dollars if you win ($999,999 to be exact). Now let's say you really want to win this lottery -- for whatever reason you earnestly desire to hold the winning ticket in your hand. In that case, you can purchase additional tickets. By purchasing these, you increase your chance of winning the lottery. Let's say you purchase an additional million tickets at one dollar apiece. Doing so has now boosted your probability of winning the lottery from .000001 to .5000005, or to about one-half.
Increasing the probability of winning the lottery has therefore incurred a cost. With a probability of roughly .5 of winning the lottery, you are now much more likely to gain approximately one million dollars. But it also cost you a million dollars to increase your probability of winning. As a result, your expected winnings, computed in standard statistical terms as the probability of losing multiplied by what you would lose subtracted from the probability of winning multiplied by what you would win, equals zero. Moreover, because this is a fair lottery, it equals zero when you only had one ticket purchased and it equals zero when you had an additional million tickets purchased. Thus, in statistical terms, investing more in this lottery has gained you nothing.
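The break-even claim is easy to verify numerically. Here is a sketch using the dollar amounts from the example (the function name and structure are mine, for illustration only):

```python
def expected_winnings(my_tickets: int, total_tickets: int) -> float:
    """Expected net gain in a fair lottery: every dollar paid in is paid out."""
    pot = total_tickets                  # one dollar per ticket
    p_win = my_tickets / total_tickets
    gain = pot - my_tickets              # the pot minus what you paid in
    loss = my_tickets                    # your stake, lost
    return p_win * gain - (1 - p_win) * loss

print(expected_winnings(1, 1_000_000))          # 0.0 (up to rounding)
print(expected_winnings(1_000_001, 2_000_000))  # 0.0 (up to rounding)
```

Whether you hold one ticket or a million and one, the expected winnings come out to zero, exactly as the paragraph above states.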
Conservation of information is like this. Not exactly like this because conservation of information focuses on search whereas the previous example focused on the economics of expected utility. But just as increasing your chances of winning a lottery by buying more tickets offers no real gain (it is not a long-term strategy for increasing the money in your pocket), so conservation of information says that increasing the probability of successful search requires additional informational resources that, once the cost of locating them is factored in, do nothing to make the original search easier.
To see how this works, let's consider a toy problem. Imagine that your search space consists of only six items, labeled 1 through 6. Let's say your target is item 6 and that you're going to search this space by rolling a fair die once. If it lands on 6, your search is successful; otherwise, it's unsuccessful. So your probability of success is 1/6. Now let's say you want to increase the probability of success to 1/2. You therefore find a machine that flips a fair coin and delivers item 6 to you if it lands heads and delivers some other item in the search space if it lands tails. What a great machine, you think. It significantly boosts the probability of obtaining item 6 (from 1/6 to 1/2).
But then a troubling question crosses your mind: Where did this machine that raises your probability of success come from? A machine that tosses a fair coin and that delivers item 6 if the coin lands heads and some other item in the search space if it lands tails is easily reconfigured. It can just as easily deliver item 5 if it lands heads and some other item if it lands tails. Likewise for all the remaining items in the search space: a machine such as the one described can privilege any one of the six items in the search space, delivering it with probability 1/2 at the expense of the others. So how did you get the machine that privileges item 6? Well, you had to search among all those machines that flip coins and with probability 1/2 deliver a given item, selecting the one that delivers item 6 when it lands heads. And what's the probability of finding such a machine?
To keep things simple, let's imagine that our machine delivers item 6 with probability 1/2 and each of items 1 through 5 with equal probability, that is, with probability 1/10. Accordingly, this machine is one of six possible machines configured in essentially the same way. There's another machine that flips a coin, delivers item 1 from the original search space if it lands heads, and delivers any one of 2 through 6 with probability 1/10 each if the coin lands tails. And so on. Thus, of these six machines, one delivers item 6 with probability 1/2 and the remaining five machines deliver item 6 with probability 1/10. Since there are six machines, only one of which delivers item 6 (our target) with high probability, and since only labels and no intrinsic property distinguishes one machine from any other in this setup (the machines are, as mathematicians would say, isomorphic), the principle of indifference applies to these machines and prescribes that the probability of getting the machine that delivers item 6 with probability 1/2 is the same as that of getting any other machine, and is therefore 1/6.
But a probability of 1/6 to find a machine that delivers item 6 with probability 1/2 is no better than our original probability of 1/6 of finding the target simply by tossing a die. In fact, once we have this machine, we still have only a 50-50 chance of locating item 6. Finding this machine incurs a probability cost of 1/6, and once this cost is incurred we still have a probability cost of 1/2 of finding item 6. Since probability costs increase as probabilities decrease, we're actually worse off than we were at the start, where we simply had to roll a die that, with probability 1/6, locates item 6.
The probability of finding item 6 using this machine, once we factor in the probabilistic cost of securing the machine, therefore ends up being 1/6 x 1/2 = 1/12. So our attempt to increase the probability of finding item 6 by locating a more effective search for that item has actually backfired, making it in the end even more improbable that we'll find item 6. Conservation of information says that this is always a danger when we try to increase the probability of success of a search -- that the search, instead of becoming easier, remains as difficult as before or may even, as in this example, become more difficult once additional underlying information costs, associated with improving the search and often hidden, as in this case by finding a suitable machine, are factored in.
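This two-stage calculation is easy to check by simulation. Here is a minimal Monte Carlo sketch in Python (the machine setup is the one just described; following the text, success is counted only when the item-6 machine is found and then actually delivers item 6):

```python
import random

TRIALS = 1_000_000

def machine(favored: int) -> int:
    """Coin-flip machine: delivers its favored item with probability 1/2,
    otherwise one of the other five items with probability 1/10 each."""
    if random.random() < 0.5:
        return favored
    return random.choice([item for item in range(1, 7) if item != favored])

# Original search: a single roll of a fair die, with item 6 as the target.
direct_hits = sum(random.randint(1, 6) == 6 for _ in range(TRIALS))

# Displaced search: first search for the right machine (probability 1/6),
# then let that machine search for item 6 (probability 1/2).
two_stage_hits = 0
for _ in range(TRIALS):
    chosen = random.randint(1, 6)  # which item the chosen machine privileges
    if chosen == 6 and machine(chosen) == 6:
        two_stage_hits += 1

print(direct_hits / TRIALS)     # ~0.1667, i.e., 1/6
print(two_stage_hits / TRIALS)  # ~0.0833, i.e., 1/6 x 1/2 = 1/12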
Why It Is Called "Conservation" of Information
The reason it's called "conservation" of information is that the best we can do is break even, rendering the search no more difficult than before. In that case, information is actually conserved. Yet often, as in this example, we may actually do worse by trying to improve the probability of a successful search. Thus, we may introduce an alternative search that seems to improve on the original search but that, once the costs of obtaining this search are themselves factored in, in fact exacerbates the original search problem.
In referring to ease and difficulty of search, I'm not being mathematically imprecise. Ease and difficulty, characterized mathematically, are always complexity-theoretic notions presupposing an underlying complexity measure. In this case, complexity is cashed out probabilistically, so the complexity measure is a probability measure, with searches becoming easier to the degree that successfully locating targets is more probable, and searches becoming more difficult to the degree that successfully locating targets is more improbable. Accordingly, it also makes sense to talk about the cost of a search, with the cost going up the more difficult the search, and the cost going down the easier the search.
In all these discussions of conservation of information, there's always a more difficult search that gets displaced by an easier search, but once the difficulty of finding the easier search (difficulty being understood probabilistically) is factored in, there's no gain, and in fact the total cost may have gone up. In other words, the actual probability of locating the target with the easier search is no greater, and may actually be less, than the probability of locating the target with the more difficult search once the probability of locating the easier search is factored in. All of this admits a precise mathematical formulation. Inherent in such a formulation is treating search itself as subject to search. If this sounds self-referential, it is. But it also makes good sense.
To see this, consider a treasure hunt. Imagine searching for a treasure chest buried on a large island. We consider two searches, a more difficult one and an easier one. The more difficult search, in this case, is a blind search in which, without any knowledge of where the treasure is buried, you randomly meander about the island, digging here or there for the treasure. The easier search, by contrast, is to have a treasure map in which "x marks the spot" where the treasure is located, and where you simply follow the map to the treasure.
But where did you get that treasure map? Mapmakers have made lots of maps of that island, and for every map that accurately marks the treasure's location, there are many, many others that incorrectly mark its location. Indeed, for any place on the island, there's a map that marks it with an "x." So how do you find your way among all these maps to one that correctly marks the treasure's location? Evidently, the search for the treasure has been displaced to a search for a map that locates the treasure. Each map corresponds to a search, and locating the right map corresponds to a search for a search (abbreviated, in the conservation of information literature, as S4S).
Conservation of information, in this example, says that the probability of locating the treasure by first searching for a treasure map that accurately identifies the treasure's location is no greater, and may be less, than the probability of locating the treasure simply by blind search. This implies that the easier search (i.e., the search with treasure map in hand), once the cost of finding it is factored in, has not made the actual overall search any easier. In general, conservation of information says that when a more difficult search gets displaced by an easier search, the probability of finding the target by first finding the easier search and then using the easier search to find the target is no greater, and often is less, than the probability of finding the target directly with the more difficult search.
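The arithmetic behind this claim is worth one line. Suppose, for illustration, the island has N equally likely burial spots and, as above, one map marking each spot (a simplifying assumption of mine, not the article's):

$$
P(\text{treasure by blind digging}) = \frac{1}{N},
\qquad
P(\text{treasure via a randomly chosen map}) = \underbrace{\frac{1}{N}}_{\text{find the right map}} \times \underbrace{1}_{\text{follow it}} = \frac{1}{N}.
$$

The map makes the digging trivial, but finding the right map is exactly as improbable as finding the treasure was in the first place.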
In the Spirit of "No Free Lunch"
Anybody familiar with the No Free Lunch (NFL) theorems will immediately see that conservation of information is very much in the same spirit. The upshot of the NFL theorems is that no evolutionary search outperforms blind search once the information inherent in fitness (i.e., the fitness landscape) is factored out. NFL is a great equalizer. It says that all searches are essentially equivalent to blind search when looked at not from the vantage of finding a particular target but when averaged across the different possible targets that might be searched.
If NFL tends toward egalitarianism by arguing that no search is, in itself, better than blind search when the target is left unspecified, conservation of information tends toward elitism by taking as its starting point that some searches are indeed better than others (especially blind search) at locating particular targets. Yet, conservation of information quickly adds that the elite status of such searches is not due to any inherent merit of the search (in line with NFL) but to information that the search is employing to boost its performance.
Some searches do better, indeed much better, than blind search, and when they do, it is because they are making use of target-specific information. Conservation of information calculates the information cost of this performance increase and shows how it must be counterbalanced by a loss in search performance elsewhere (specifically, by needing to search for the information that boosts search performance) so that global performance in locating the target is not improved and may in fact diminish.
Conservation of information, in focusing on search for the information needed to boost search performance, suggests a relational ontology between search and objects being searched. In a relational ontology, things are real not as isolated entities but in virtue of their relation to other things. In the relational ontology between search and the objects being searched, each finds its existence in the other. Our natural tendency is to think of objects as real and search for those objects as less real in the sense that search depends on the objects being searched but objects can exist independently of search. Yet objects never come to us in themselves but as patterned reflections of our background knowledge, and thus as a target of search.
Any scene, indeed any input to our senses, reaches our consciousness only by aspects becoming salient, and this happens because certain patterns in our background knowledge are matched to the exclusion of others. In an extension of George Berkeley's "to be is to be perceived," conservation of information suggests that "to be perceived is to be an object of search." By transitivity of reasoning, it would then follow that to be is to be an object of search. And since search is always search for an object, search and the object of search become, in this way of thinking, mutually ontologizing, giving existence to each other. Conservation of information then adds to this by saying that search can itself be an object of search.
Most relational ontologies are formulated in terms of causal accessibility, so that what renders one thing real is its causal accessibility to another thing. But since search is properly understood probabilistically, the form of accessibility relevant to a relational ontology grounded in search is probabilistic. Probabilistic rather than causal accessibility grounds the relational ontology of search. Think of a needle in a haystack, only imagine the needle is the size of an electron and the haystack is the size of the known physical universe. Searches with such a small probability of success via blind or random search are common in biology. Biological configuration spaces of possible genes and proteins, for instance, are immense, and finding a functional gene or protein in such spaces via blind search can be vastly more improbable than finding an arbitrary electron in the known physical universe.
Why the Multiverse Is Incoherent
Given needles this tiny in haystacks this large, blind search is effectively incapable of finding a needle in a haystack. Success, instead, requires a search that vastly increases the probability of finding the needle. But where does such a search come from? And in what sense does the needle exist apart from such a search? Without a search that renders finding the needle probable, the needle might just as well not exist. And indeed, we would in all probability not know that it exists except for a search that renders it probable. This, by the way, is why I regard the multiverse as incoherent: what renders the known physical universe knowable is that it is searchable. The multiverse, by contrast, is unsearchable. In a relational ontology that makes search as real as the objects searched, the multiverse is unreal.
These considerations are highly germane to evolutionary biology, which treats evolutionary search as a given, as something that does not call for explanation beyond the blind forces of nature. But insofar as evolutionary search renders aspects of a biological configuration space probabilistically accessible where previously, under blind search, they were probabilistically inaccessible, conservation of information says that evolutionary search achieves this increase in search performance at an informational cost. Accordingly, the evolutionary search, which improves on blind search, had to be found through a higher-order search (i.e., a search for a search, abbreviated S4S), which, when taken into account, does not make the evolutionary search any more effective at finding the target than the original blind search.
Given this background discussion and motivation, we are now in a position to give a reasonably precise formulation of conservation of information, namely: raising the probability of success of a search does nothing to make attaining the target easier, and may in fact make it more difficult, once the informational costs involved in raising the probability of success are taken into account. Search is costly, and the cost must be paid in terms of information. Searches achieve success not by creating information but by taking advantage of existing information. The information that leads to successful search admits no bargains, only apparent bargains that must be paid in full elsewhere.
For a "Made Simple" paper on conservation of information, this is about as much as I want to say regarding a precise statement of conservation of information. Bob Marks and I have proved several technical conservation of information theorems (see the publications page at www.evoinfo.org). Each of these looks at some particular mathematical model of search and shows how raising the probability of success of a search by a factor of q/p (> 1) incurs an information cost not less than log(q/p), or, equivalently, a probability cost of not more than p/q. If we therefore start with a search having probability of success p and then raise it to q, the actual probability of finding the target is not q but instead is less than or equal to q multiplied by p/q, or, therefore, less than or equal to p, which is just the original search difficulty. Accordingly, raising the probability of success of a search contributes nothing toward finding the target once the information cost of raising the probability is taken into account.
Conservation of information, however, is not just a theorem or family of theorems but also a general principle or law (recall Medawar's "Law of Conservation of Information"). Once enough such theorems have been proved and once their applicability to a wide range of search problems has been repeatedly demonstrated (the Evolutionary Informatics Lab has, for instance, shown how such widely touted evolutionary algorithms as AVIDA, ev, Tierra, and Dawkins's WEASEL all fail to create but instead merely redistribute information), conservation of information comes to be seen not as a narrow, isolated result but as a fundamental principle or law applicable to search in general. This is how we take conservation of information.
The underlying theoretical apparatus for conservation of information is solid and has now appeared in a number of peer-reviewed articles in the engineering and mathematics literature (see the publications page at www.evoinfo.org). It's worth noting that none of the critiques of this work has appeared in the peer-reviewed scientific/engineering literature; a few have appeared in the philosophy of science literature, such as Biology and Philosophy, but most are Internet diatribes. Rather than elaborate that apparatus further, I want next to illustrate conservation of information as it applies to one of the key examples touted by evolutionists as demonstrating the information-generating powers of evolutionary processes. Once I've done that, I want to consider what light conservation of information casts on evolution generally.
An Economist Is Stranded on an Island
To set the stage, consider an old joke about an economist and several other scientists who are stranded on an island and discover a can of beans. Hungry, they want to open it. Each looks to his area of expertise to open the can. The physicist calculates the trajectory of a projectile that would open the can. The chemist calculates the heat from a fire needed to burst the can. And so on. Each comes up with a concrete way to open the can given the resources on the island. Except the economist. The economist's method of opening the can is the joke's punch line: suppose a can opener. There is, of course, no can opener on the island.
The joke implies that economists are notorious for making assumptions to which they are not entitled. I don't know enough about economists to know whether this is true, but I do know that this is the case for many evolutionary biologists. The humor in the economist's proposed solution of merely positing a can opener, besides its jab at the field of economics, is the bizarre image of a can opener coming to the rescue of starving castaways without any warrant whatsoever for its existence. The economist would simply have the can opener magically materialize. The can opener is, essentially, a deus ex machina.
Interestingly, the field of evolutionary biology is filled with deus ex machinas (yes, I've taken Latin and know that this is not the proper plural of deus ex machina, which is dei ex machinis; but this is a "made simple" paper meant for the unwashed masses, of which I'm a card-carrying member). Only the evolutionary biologist is a bit more devious about employing, or should I say deploying, deus ex machinas than the economist. Imagine our economist counseling someone who's having difficulty repaying a juice loan to organized crime. In line with the advice he gave on the island, our economist friend might give the following counsel: suppose $10,000 in cash.
$10,000 might indeed pay the juice loan, but that supposition seems a bit crude. An evolutionary biologist, to make his advice appear more plausible, would add a layer of complexity to it: suppose a key to a safety deposit box with $10,000 cash inside it. Such a key is just as much a deus ex machina as the $10,000 in cash. But evolutionary biology has long since gained mastery in deploying such devices as well as gaining the right to call their deployment "science."
I wish I were merely being facetious, but there's more truth here than meets the eye. Consider Richard Dawkins' well known METHINKS IT IS LIKE A WEASEL example (from his 1986 book The Blind Watchmaker), an example endlessly repeated and elaborated by biologists trying to make evolution seem plausible, the most notable recent rendition being by RNA-world researcher Michael Yarus in his 2010 book Life from an RNA World (Yarus's target phrase, unlike Dawkins's, which is drawn from Shakespeare's Hamlet, is Theodosius Dobzhansky's famous dictum NOTHING IN BIOLOGY MAKES SENSE EXCEPT IN THE LIGHT OF EVOLUTION).
A historian or literature person, confronted with METHINKS IT IS LIKE A WEASEL, would be within his rights to say, suppose that there was a writer named William Shakespeare who wrote it. And since the person and work of Shakespeare have been controverted (was he really a she? did he exist at all? etc.), this supposition is not without content and merit. Indeed, historians and literature people make such suppositions all the time, and doing so is part of what they get paid for. Are the Homeric poems the result principally of a single poet, Homer, or an elaboration by a tradition of bards? Did Moses write the Pentateuch or is it the composite of several textual traditions, as in the documentary hypothesis? Did Jesus really exist? (Dawkins and his fellow atheists seriously question whether Jesus was an actual figure of history; cf. the film The God Who Wasn't There).
For the target phrase METHINKS IT IS LIKE A WEASEL, Dawkins bypasses the Shakespeare hypothesis -- that would be too obvious and too intelligent-design friendly. Instead of positing Shakespeare, who would be an intelligence or designer responsible for the text in question (designers are a no-go in conventional evolutionary theory), Dawkins asks his readers to suppose an evolutionary algorithm that evolves the target phrase. But such an evolutionary algorithm privileges the target phrase by adapting the fitness landscape so that it assigns greater fitness to phrases that have more corresponding letters in common with the target.
And where did that fitness landscape come from? Such a landscape potentially exists for any phrase whatsoever, and not just for METHINKS IT IS LIKE A WEASEL. Dawkins's evolutionary algorithm could therefore have evolved in any direction, and the only reason it evolved to METHINKS IT IS LIKE A WEASEL is that he carefully selected the fitness landscape to give the desired result. Dawkins therefore got rid of Shakespeare as the author of METHINKS IT IS LIKE A WEASEL, only to reintroduce him as the (co)author of the fitness landscape that facilitates the evolution of METHINKS IT IS LIKE A WEASEL.
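To see concretely where the fitness landscape smuggles in the target, here is a minimal WEASEL-style sketch. This is a reconstruction in the spirit of Dawkins's published description, not his actual code (which was never released); the population size and mutation rate are illustrative guesses:

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def fitness(phrase: str) -> int:
    # The fitness landscape is defined by the target itself:
    # this single line is where the outcome gets smuggled in.
    return sum(a == b for a, b in zip(phrase, TARGET))

def mutate(phrase: str, rate: float = 0.05) -> str:
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in phrase)

phrase = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generation = 0
while phrase != TARGET:
    # Cumulative selection: breed 100 mutant copies, keep the fittest.
    phrase = max((mutate(phrase) for _ in range(100)), key=fitness)
    generation += 1

print(generation)  # typically converges within a few hundred generations
```

Change TARGET to any other 28-character phrase and the identical machinery converges on that phrase instead: the selection, replication, and mutation machinery is indifferent to the outcome, and the target-specific information resides entirely in the fitness function.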
The bogusness of this example, with its sleight-of-hand misdirection, has been discussed ad nauseam by me and my colleagues in the ID community. We've spent so much time and ink on this example not because of its intrinsic merit, but because the evolutionary community itself remains so wedded to it and endlessly repeats its underlying fallacy in ever more convoluted guises (AVIDA, Tierra, ev, etc.). For a careful deconstruction of Dawkins's WEASEL, providing a precise simulation under user control, see the "Weasel Ware" project on the Evolutionary Informatics website: www.evoinfo.org/weasel.
How does conservation of information apply to this example? Straightforwardly. Obtaining METHINKS IT IS LIKE A WEASEL by blind search (e.g., by randomly throwing down Scrabble pieces in a line) is extremely improbable. So Dawkins proposes an evolutionary algorithm, his WEASEL program, to obtain this sequence with higher probability. Yes, this algorithm does a much better job, with much higher probability, of locating the target. But at what cost? At an even greater improbability cost than merely locating the target sequence by blind search.
Dawkins completely sidesteps this question of information cost. Forswearing any critical examination of the origin of the information that makes his simulation work, he attempts instead, by rhetorical tricks, simply to induce in his readers a stupefied wonder at the power of evolution: "Gee, isn't it amazing how powerful evolutionary processes are given that they can produce sentences like METHINKS IT IS LIKE A WEASEL, which ordinarily require human intelligence." But Dawkins is doing nothing more than advise our hapless borrower with the juice loan to suppose a key to a safety deposit box with the money needed to pay it off. Whence the key? Likewise, whence the fitness landscape that rendered the evolution of METHINKS IT IS LIKE A WEASEL probable? In terms of conservation of information, the necessary information was not internally created but merely smuggled in, in this case, by Dawkins himself.
An Email Exchange with Richard Dawkins
Over a decade ago, I corresponded with Dawkins about his WEASEL computer simulation. In an email to me dated May 5, 2000, he responded to my criticism of the teleology hidden in that simulation. Note that he does not respond to the challenge of conservation of information directly, nor had I developed this idea with sufficient clarity at the time to use it in refutation. More on this shortly. Here's what he wrote, exactly as he wrote it:
The point about any phrase being equally eligible to be a target is covered on page 7 [of The Blind Watchmaker]: "Any old jumbled collection of parts is unique and, WITH HINDSIGHT, is as improbable as any other . . ." et seq.
More specifically, the point you make about the Weasel, is admitted, without fuss, on page 50: "Although the monkey/Shakespeare model is useful for explaining the distinction between single-step selection and cumulative selection, it is misleading in important ways. One of these is that, in each generation of selective 'breeding', the mutant 'progeny' phrases were judged according to the criterion of resemblance to a DISTANT IDEAL target ... Life isn't like that."
In real life of course, the criterion for optimisation is not an arbitrarily chosen distant target but SURVIVAL. It's as simple as that. This is non-arbitrary. See bottom of page 8 to top of page 9. And it's also a smooth gradient, not a sudden leap from a flat plain in the phase space. Or rather it must be a smooth gradient in all those cases where evolution has actually happened. Maybe there are theoretical optima which cannot be reached because the climb is too precipitous.
The Weasel model, like any model, was supposed to make one point only, not be a complete replica of the real thing. I invented it purely and simply to counter creationists who had naively assumed that the phase space was totally flat except for one vertical peak (what I later represented as the precipitous cliff of Mount Improbable). The Weasel model is good for refuting this point, but it is misleading if it is taken to be a complete model of Darwinism. That is exactly why I put in the bit on page 50.
Perhaps you should look at the work of Spiegelman and others on evolution of RNA molecules in an RNA replicase environment. They have found that, repeatedly, if you 'seed' such a solution with an RNA molecule, it will converge on a particular size and form of 'optimal' replicator, sometimes called Spiegelman's minivariant. Maynard Smith gives a good brief account of it in his The Problems of Biology (see Spiegelman in the index). Orgel extended the work, showing that different chemical environments select for different RNA molecules.
The theory is so beautiful, so powerful. Why are you people so wilfully blind to its simple elegance? Why do you hanker after "design" when surely you must see that it doesn't explain anything? Now THAT's what I call a regress. You are a fine one to talk about IMPORTING complexity. "Design" is the biggest import one could possibly imagine.
Dawkins's email raises a number of interesting questions that, in the years since, have received extensive discussion among the various parties debating intelligent design. The who-designed-the-designer regress, whether a designing intelligence must itself be complex in the same way that biological systems are complex, the conditions under which evolution is complexity-increasing vs. complexity-decreasing, the evolutionary significance of Spiegelman's minivariants, and how the geometry of the fitness landscape facilitates or undercuts evolution have all been treated at length in the design literature and won't be rehearsed here (for more on these questions, see my books No Free Lunch and The Design Revolution as well as Michael Behe's The Edge of Evolution).
"Just One Word: Plastics"
Where I want to focus is Dawkins's one-word answer to the charge that his WEASEL simulation incorporates an unwarranted teleology -- unwarranted by the Darwinian understanding of evolution for which his Blind Watchmaker is an apologetic. The key line in the above quote is, "In real life of course, the criterion for optimisation is not an arbitrarily chosen distant target but SURVIVAL." Survival is certainly a necessary condition for life to evolve. If you're not surviving, you're dead, and if you're dead, you're not evolving -- period. But to call "survival," writ large, a criterion for optimization is ludicrous. As I read this, I have images of Dustin Hoffman in The Graduate being taken aside at a party by an executive who is about to reveal the secret of success: PLASTICS. Among the great simplistic one-word answers ever given, Dawkins's ranks right up there.
But perhaps I'm reading Dawkins uncharitably. Presumably, what he really means is differential survival and reproduction as governed by natural selection and random variation. Okay, I'm willing to buy that this is what he means. But even on this more charitable reading, his characterization of evolution is misleading and wrong. Ken Miller elaborates on this more charitable reading in his recent book Only a Theory. There he asks what's needed to drive the increase in biological information over the course of evolution. His answer? "Just three things: selection, replication, and mutation... Where the information 'comes from' is, in fact, from the selective process itself."
It's easy to see that Miller is blowing smoke even without the benefits of modern information theory. All that's required is to understand some straightforward logic, uncovered in Darwin's day, about the nature of scientific explanation in teasing apart possible causes. Indeed, biology's reception of Darwinism might have been far less favorable had scientists paid better attention to Darwin's contemporary John Stuart Mill. In 1843, sixteen years before the publication of Darwin's Origin of Species, Mill published the first edition of his System of Logic (which by the 1880s had gone through eight editions). In that work Mill lays out various methods of induction. The one that interests us here is his method of difference. In his System of Logic, Mill described this method as follows:
If an instance in which the phenomenon under investigation occurs, and an instance in which it does not occur, have every circumstance in common save one, that one occurring only in the former; the circumstance in which alone the two instances differ is the effect, or the cause, or an indispensable part of the cause, of the phenomenon.
Essentially, this method says that to discover which of a set of circumstances is responsible for an observed difference in outcomes requires finding a difference in the circumstances. An immediate corollary is that common circumstances cannot explain a difference in outcomes. Thus, if one person is sober and another drunk, and if both ate chips, salsa, and popcorn, this fact, common to both, does not, and indeed cannot, explain the difference. Rather, the difference is explained by one abstaining from alcohol and the other drinking too much. Mill's method of difference, so widely used in everyday life as well as in science, is crucially relevant to evolutionary biology. In fact, it helps bring some sense of proportion and reality to the inflated claims so frequently made on behalf of Darwinian processes.
Case in point: Miller's overselling of Darwinian evolution by claiming that "what's needed to drive" increases in biological information is "just three things: selection, replication, and mutation." Mill's method of difference gives the lie to Miller's claim. It's easy to write computer simulations that feature selection, replication, and mutation (or SURVIVAL writ large, or differential survival and reproduction, or any such reduction of evolution to Darwinian principles) -- and that go absolutely nowhere. Taken together, selection, replication, and mutation are not a magic bullet, and need not solve any interesting problems or produce any salient patterns. That said, evolutionary computation does get successfully employed in the field of optimization, so it is possible to write computer simulations that feature selection, replication, and mutation and that do go somewhere, solving interesting problems or producing salient patterns. But precisely because selection, replication, and mutation are common to all such simulations, they cannot, as Mill's method underscores, account for the difference.
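Mill's point is easy to demonstrate in code. Here is a toy sketch of my own devising: two runs of the identical genetic-algorithm loop, differing in nothing but the fitness function (selection, replication, and mutation are the same in both):

```python
import random

GENOME_LEN, POP, GENS = 20, 50, 200

def ones(ind):
    """Fixed yardstick for comparing runs: number of 1s in the genome."""
    return sum(ind)

def evolve(fitness):
    """Identical machinery in every run: selection, replication, mutation."""
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)               # selection
        parents = pop[:POP // 2]
        pop = [[1 - g if random.random() < 0.01 else g    # mutation
                for g in random.choice(parents)]          # replication
               for _ in range(POP)]
    return max(ones(ind) for ind in pop)

flat = lambda ind: 0            # featureless landscape: nothing for selection to grade
graded = lambda ind: sum(ind)   # smooth gradient toward the all-1s "target"

print(evolve(flat))    # typically ~14: mere drift, no better than random strings
print(evolve(graded))  # 20 or nearly so: the fitness landscape did the work
```

The Darwinian triad is common to both runs, so by Mill's method of difference it cannot explain why one run goes somewhere and the other goes nowhere; the difference lies entirely in the fitness function that was supplied.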
One Boeing engineer used to call himself a "penalty-function artist." A penalty function is just another term for fitness landscape (though the numbers are reversed -- the higher the penalty, the lower the fitness). Coming up with the right penalty functions enabled this person to solve his engineering problems. Most such penalty functions, however, are completely useless. Moreover, all such functions operate within the context of an evolutionary computing environment that features Miller's triad of selection, replication, and mutation. So what makes the difference? It's that the engineer, with knowledge of the problem he's trying to solve, carefully adapts the penalty function to the problem and thereby raises the probability of successfully finding a solution. He's not just choosing his penalty functions willy-nilly. If he did, he wouldn't be working at Boeing. He's an artist, and his artistry (intelligent design) consists in being able to find the penalty functions that solve his problems.
I've corresponded with both Miller and Dawkins since 2000. Miller and I have sparred on a number of occasions in public debate (as recently as June 2012, click here). Dawkins refuses all such encounters. Regardless, we are familiar with each other's work, and yet I've never been able to get from either of them a simple admission that the logic in Mill's method of difference is valid and that it applies to evolutionary theory, leaving biology's information problem unresolved even after the Darwinian axioms of selection, replication, and variation are invoked.
John Stuart Mill's Inconvenient Truth
Instead, Miller remains an orthodox Darwinist, and Dawkins goes even further, embracing a universal Darwinism that sees Darwinian evolution as the only conceivable scientific explanation of life's diversification in natural history. As he wrote in The Blind Watchmaker and continues to believe:
My argument will be that Darwinism is the only known theory that is in principle capable of explaining certain aspects of life. If I am right it means that, even if there were no actual evidence in favor of the Darwinian theory (there is, of course) we should still be justified in preferring it over all rival theories.
Mill's method of difference is an inconvenient truth for Dawkins and Miller, but it's a truth that must be faced. For his willingness to face this truth, I respect Stuart Kauffman infinitely more than either Miller or Dawkins. Miller and Dawkins are avid Darwinists committed to keeping the world safe for their patron saint. Kauffman is a free spirit, willing to admit problems where they arise. Kauffman at least sees that there is a problem in claiming that the Darwinian mechanism can generate biological information, even if his own self-organizational approach is far from resolving it. As Kauffman writes in Investigations:
If mutation, recombination, and selection only work well on certain kinds of fitness landscapes, yet most organisms are sexual, and hence use recombination, and all organisms use mutation as a search mechanism, where did these well-wrought fitness landscapes come from, such that evolution manages to produce the fancy stuff around us?
According to Kauffman, "No one knows."
Kauffman's observation here is entirely in keeping with conservation of information. Indeed, he offers this observation in the context of discussing the No Free Lunch theorems, of which conservation of information is a logical extension. The fitness landscape supplies the evolutionary process with information. Only finely tuned fitness landscapes that are sufficiently smooth, don't isolate local optima, and, above all, reward ever-increasing complexity in biological structure and function are suitable for driving a full-fledged evolutionary process. So where do such fitness landscapes come from? Absent an extrinsic intelligence, the only answer would seem to be the environment.
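The role of landscape shape can likewise be seen in a toy example (my sketch; the "trap" function below is a standard deceptive landscape from the evolutionary computing literature, not anything from Kauffman). A hill climber on a smooth landscape walks straight up to the global peak; on the trap landscape, every local gradient points away from that peak, so the very same climber ends up isolated on a local optimum:

    import random

    random.seed(1)
    N = 30

    def smooth(x):
        # Smooth landscape: each added 1 raises fitness, so every local
        # improvement points toward the global peak at all 1s.
        return sum(x)

    def deceptive(x):
        # "Trap" landscape: the global peak is still all 1s, but every
        # local gradient points toward the all-0s trap, isolating the climber.
        n = sum(x)
        return 2 * N if n == N else N - n

    def hill_climb(fitness, steps=5000):
        x = [random.randint(0, 1) for _ in range(N)]
        for _ in range(steps):
            y = x[:]
            y[random.randrange(N)] ^= 1   # one-bit mutation
            if fitness(y) >= fitness(x):  # keep the change if it's no worse
                x = y
        return sum(x)

    print("smooth landscape, ones found:   ", hill_climb(smooth))     # 30: peak reached
    print("deceptive landscape, ones found:", hill_climb(deceptive))  # 0: trapped

Identical search, different landscapes, opposite outcomes: the information resides in the landscape, which is why the landscape's origin is the question that matters.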
Just as I have heard SURVIVAL as a one-word resolution to the problem of generating biological information, so also have I heard ENVIRONMENT. Ernan McMullin, for instance, made this very point to me over dinner at the University of Chicago in 1999, intoning this word ("environment") as though it were the solution to all that ails evolution. Okay, so the environment supplies the information needed to drive biological evolution. But where did the environment get that information? From itself? The problem with such an answer is this: conservation of information entails that, without added information, biology's information problem remains constant (breaks even) or intensifies (gets worse) the further back in time we trace it.
The whole magic of evolution is that it's supposed to explain subsequent complexity in terms of prior simplicity, but conservation of information says that there never was a prior state of primordial simplicity -- the information, absent external input, had to be there from the start. It is no feat of evolutionary theorizing to explain how cavefish lost the use of their eyes after long periods of being deprived of light. Functioning eyes turning into functionless eye nubs is a devolution from complexity to simplicity. As a case of use-it-or-lose-it, it does not call for explanation. Evolution wins plaudits for purporting to explain how things like eyes that see can evolve in the first place from prior simpler structures that cannot see.
If the evolutionary process could indeed create such biological information, then evolution from simplicity to complexity would be unproblematic. But the evolutionary process as conceived by Darwin and promulgated by his successors is non-teleological. Accordingly, it cannot employ the activity of intelligence in any guise to increase biological information. But without intelligent input, conservation of information implies that as we regress biological information back in time, the amount of information to be accounted for never diminishes and may actually increase.
Explaining Walmart's Success by Invoking Interstate Highways
Given conservation of information and the absence of intelligent input, biological information with the complexity we see now must have always been present in the universe in some form or fashion, going back even as far as the Big Bang. But where in the Big Bang, with a heat and density that rule out any life form in the early history of the universe, is the information for life's subsequent emergence and development on planet Earth? Conservation of information says this information has to be there, in embryonic form, at the Big Bang and at every moment thereafter. So where is it? How is it represented? In the environment, you say? Invoking the environment as evolution's information source is empty talk, on the order of invoking the interstate highway system as the reason for Walmart's business success. There is some connection, to be sure, but neither provides real insight or explanation.
To see more clearly what's at stake here, imagine Scrabble pieces arranged in sequence to spell out meaningful sentences (such as METHINKS IT IS LIKE A WEASEL). Suppose a machine with suitable sensors, movable arms, and grips takes the Scrabble pieces out of a box and arranges them in this way. To say that the environment has arranged the Scrabble pieces to spell out meaningful sentences is, in this case, hardly illuminating. Yes, broadly speaking, the environment is arranging the pieces into meaningful sentences. But, more precisely, a robotic machine, presumably running a program with meaningful sentences suitably coded, is doing the arranging.
Merely invoking the environment, without further amplification, therefore explains nothing about the arrangement of Scrabble pieces into meaningful sentences. What exactly is it about the environment that accounts for the information conveyed in those arrangements of Scrabble pieces? And what exactly is it about the environment that accounts for the information conveyed in the organization of biological systems? That's the question that needs to be answered. Without an answer to this question, appeals to the environment are empty and merely cloak our ignorance of the true sources of biological information.
With a machine that arranges Scrabble pieces, we can try to get inside it and see what it does ("Oh, there's the code that spells out METHINKS IT IS LIKE A WEASEL"). With the actual environment for biological evolution, we can't, as it were, get under the hood of the car. We see natural forces such as wind, waves, erosion, lightning, Brownian motion, attraction, repulsion, bonding affinities and the like. And we see slippery slopes on which one organism thrives and another founders. If such an environment were arranging Scrabble pieces in sequence, we would observe the pieces blown by wind or jostled by waves or levitated by magnets. And if, at the end of the day, we found Scrabble pieces spelling out coherent English sentences, such as METHINKS IT IS LIKE A WEASEL, we would be within our rights to infer that an intelligence had in some way co-opted the environment and inserted information, even though we have no clue how.
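Dawkins's WEASEL simulation from The Blind Watchmaker is the textbook case, and a few lines of code lay the machinery bare. Here is a minimal reimplementation of the algorithm as popularly described (my sketch, not Dawkins's original code); notice that the target sentence sits in the program text from the outset:

    import random

    random.seed(2)
    TARGET = "METHINKS IT IS LIKE A WEASEL"  # the information, hard-coded in advance
    CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

    def score(s):
        # The "environment" here plainly knows the answer: it counts matches to TARGET.
        return sum(a == b for a, b in zip(s, TARGET))

    phrase = "".join(random.choice(CHARS) for _ in TARGET)
    generations = 0
    while phrase != TARGET:
        generations += 1
        # Replication with mutation, then selection of the offspring nearest the target.
        offspring = ["".join(c if random.random() > 0.05 else random.choice(CHARS)
                             for c in phrase)
                     for _ in range(100)]
        phrase = max(offspring, key=score)

    print("reached the target in", generations, "generations")

Lift the hood and the diagnosis is immediate: the search did not create the sentence's information; the programmer did, by writing the target into the fitness function.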
Such a role for the environment, as an inscrutable purveyor of information, is, however, unacceptable to mainstream evolutionary theorists. In their view, the way the environment inputs information into biological systems over the course of evolution is eminently scrutable. It happens, so they say, by a gradual accumulation of information as natural selection locks in on small advantages, each of which can arise by chance without intelligent input. But what's the evidence here?
This brings us back to the knock-out experiments that Ken Miller has repeatedly put forward to refute intelligent design, in which a structure responsible for a function is disabled and then, under selection pressure, that structure, or something close to it capable of the lost function, is recovered. In all his examples, there is no extensive multi-step sequence of structural changes, each of which leads to a distinct functional advantage. Usually, it's just a single nucleotide base or amino acid change that's needed to recover function.
This is true even with the evolution of nylonase, mentioned earlier. Nylonase is not the result of an entirely new DNA sequence coding for that enzyme. Rather, it resulted from a frameshift in existing DNA, shifting over some genetic letters and thus producing the gene for nylonase. The origin of nylonase is thus akin to changing the meaning of "therapist" by inserting a space and getting "the rapist." For the details about the evolution of nylonase, see a piece I did in response to Miller at Uncommon Descent (click here).
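Whatever one makes of the biology, the frameshift idea itself is easy to exhibit in code (a toy illustration with an invented sequence, not the actual nylonase gene): shift the reading frame by a single letter, and the same stretch of DNA spells out entirely different codons.

    def codons(dna, frame):
        # Split a DNA string into 3-letter codons, starting at the given offset.
        return [dna[i:i + 3] for i in range(frame, len(dna) - 2, 3)]

    dna = "ATGGCCTTAGGCAAT"      # invented sequence, for illustration only
    print(codons(dna, 0))       # ['ATG', 'GCC', 'TTA', 'GGC', 'AAT']
    print(codons(dna, 1))       # ['TGG', 'CCT', 'TAG', 'GCA']

Same letters, different words -- the "therapist"/"the rapist" point in genetic dress.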
The Two-Pronged Challenge of Intelligent Design
Intelligent design has always mounted a two-pronged challenge to conventional evolutionary theory. On the one hand, design proponents have challenged common ancestry. Discontinuities in the fossil record and in supposed molecular phylogenies have, for many of us (Michael Behe has tended to be the exception), made common ancestry seem far from compelling. Our reluctance here is not a reflexive aversion but simply a question of evidence -- many of us in the ID community see the evidence for common ancestry as weak, especially when one leaves the lower taxonomic groupings and moves to the level of orders, classes, and, above all, phyla (as with the Cambrian explosion, in which all the major animal phyla appear suddenly, lacking evident precursors in the Precambrian rocks). And indeed, if common ancestry fails, so does conventional evolutionary theory.
On the other hand, design proponents have argued that even if common ancestry holds, the evidence of intelligence in biology is compelling. Conservation of information is part of that second prong of the challenge to evolution. Evolutionary theorists like Miller and Dawkins think that if they can break down the problem of evolving a complex biological system into a sequence of baby-steps, each of which is manageable by blind search (e.g., point mutations of DNA) and each of which confers a functional advantage, then the evidence of design vanishes. But it doesn't. Regardless of the evolutionary story told, conservation of information shows that the information in the final product had to be there from the start.
It would actually be quite a remarkable property of nature if fitness across biological configuration space were so distributed that advantages could be accumulated gradually by a Darwinian process. Frankly, I don't see the evidence for this. The examples that Miller cites show some small increases in information associated with recovering and enhancing a single biological function, but hardly the massive ratcheting up of information in which structures and functions co-evolve and lead to striking instances of biological invention. The usual response to my skepticism is, "Give evolution more time." I'm happy to do that, but even if time allows evolution to proceed much more impressively, the challenge that conservation of information puts to evolution remains.
In the field of technological (as opposed to biological) evolution, revolutionary new inventions never result from gradual tinkering with existing technologies. Existing technologies may, to be sure, be co-opted for use in a revolutionary technology. Thus, when Alexander Graham Bell invented the telephone, he used existing technologies such as wires, electrical circuits, and diaphragms. But these were put together and adapted for a novel, and at the time unprecedented, use.
But what if technological evolution proceeded in the same way that, as we are told, biological evolution proceeds, with inventions useful to humans all being accessible by gradual tinkering from one or a few primordial inventions? One consequence would be that tinkerers who knew nothing about the way things worked but simply understood what it was to benefit from a function could become inventors on the order of Bell and Edison. More significantly, such a state of affairs would indicate something very special about the nature of human invention, namely, that it was distributed continuously across technological configuration space. That would be remarkable. Granted, it is not what we see: we see sharply disconnected islands of invention, inaccessible to one another by mere gradual tinkering. But if such islands were all connected (by long and narrow isthmuses of function), it would suggest a deeper design of technological configuration space for the facilitation of human invention.
The same would be true of biological invention. If biological evolution proceeds by a gradual accrual of functional advantages, instead of finding itself marooned on isolated islands of function surrounded by vast seas of non-function, then the fitness landscape over biological configuration space has to be very special indeed (recall Stuart Kauffman's comments to that effect earlier in this piece). Conservation of information goes further and says that any information we see coming out of the evolutionary process was already there in this fitness landscape or in some other aspect of the environment, or else was inserted by an intervening intelligence. What conservation of information guarantees did not happen is that the evolutionary process created this information from scratch.
Some years back I had an interesting exchange with Simon Conway Morris about the place of teleology in evolution. According to him, the information that guides the evolutionary process is embedded in nature and is not reducible to the Darwinian mechanism of selection, replication, and mutation. He stated this forthrightly in an email to me dated February 20, 2003, anticipating his then forthcoming book Life's Solution. I quote this email rather than the book because it clarifies his position better than anything that I've read from him subsequently. Here's the quote from his email:
As it happens, I am not sure we are so far apart, at least in some respects. Both of us, I imagine, accept that we are part of God's good Creation, and that despite its diversity, by no means all things are possible. In my forthcoming book Life's Solution (CUP) I argue that hard-wired into the universe are such biological properties of intelligence. This implies a "navigation" by evolution across immense "hyperspaces" of biological alternatives, nearly all of which are maladaptive [N.B. -- this means the adaptive hyperspaces form a very low-probability target!]. These thin roads (or "worm-holes") of evolution define a deeper biological structure, the principal evidence for which is convergence (my old story). History and platonic archetypes, if you like, meet. That does seem to me to be importantly distinct from ID: my view of Creation is not only very rich (self-evidently), but has an underlying structure that allows evolution to act. Natural selection, after all, is only a mechanism; what we surely agree about is the nature of the end-products, even if we disagree as to how they came about. Clearly my view is consistent with a Christian world picture, but can never be taken as proof.
There's not much I disagree with here. My one beef with Conway Morris is that he's too hesitant about finding evidence (what he calls "proof") for teleology in the evolutionary process. I critique this hesitancy in my review of Life's Solution for Books & Culture, a review that came out the year after this email (click here for the review). Conway Morris's fault is that he does not follow his position through to its logical conclusion. He prefers to critique conventional evolutionary theory, with its tacit materialism, from the vantage of theology and metaphysics. Convergence points to a highly constrained evolutionary process that's consistent with divine design. Okay, but there's more.
If evolution is so tightly constrained and the Darwinian mechanism of natural selection is just that, a mechanism, albeit one that "navigates immense hyperspaces of biological alternatives" by confining itself to "thin roads of evolution defining a deeper biological structure," then, in the language of conservation of information, the conditions that allow evolution to act effectively in producing the complexity and diversity of life are but a tiny subset, and therefore a small-probability target, among all the conditions under which evolution might act. And how did nature find just those conditions? Nature has, in that case, embedded in it not just a generic evolutionary process employing selection, replication, and mutation, but one that is precisely tuned to produce the exquisite adaptations, or, dare I say, designs, that pervade biology.
Where Conway Morris merely finds consistency with his Christian worldview (tempered by a merger of Darwin and Plotinus), conservation of information shows that the evolutionary process has embedded in it rich sources of information that a thoroughgoing materialism cannot justify and has no right to expect. The best such a materialism can do is count it a happy accident that evolution acts effectively, producing ever-increasing biological complexity and diversity, when most ways it might act would be ineffective, producing no life at all or ecosystems that are boring (a disproportion mirrored in the evolutionary computing literature, where most fitness landscapes are maladaptive).
The Lesson of Conservation of Information
The improbabilities associated with rendering evolution effective are therefore no more tractable than the improbabilities that face an evolutionary process dependent purely on blind search. This is the relevance of conservation of information for evolution: it shows that the vast improbabilities that evolution is supposed to mitigate in fact never do get mitigated. Yes, you can reach the top of Mount Improbable, but the tools that enable you to find a gradual ascent up the mountain are as improbably acquired as simply scaling it in one fell swoop. This is the lesson of conservation of information.
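In the notation of the conservation-of-information theorems mentioned earlier in this piece, the bookkeeping behind this lesson fits on one line (a sketch of the inequality, not a proof):

    % Blind search finds the target with probability p; an assisted search
    % raises this to q > p. The theorems bound the probability of finding
    % such an assisted search (the search for a search) by p/q. Hence,
    % for success via the assisted route:
    \[
      \Pr[\text{target}]
        = \Pr[\text{find assisted search}] \cdot \Pr[\text{target} \mid \text{assisted search}]
        \le \frac{p}{q} \cdot q = p.
    \]

The gradual ascent, in other words, is bought at a price at least equal to that of the one-step climb it replaces.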
One final question remains, namely, what is the source of information in nature that allows targets to be successfully searched? If blind material forces can only redistribute existing information, then where does the information that allows for successful search, whether in biological evolution or in evolutionary computing or in cosmological fine-tuning or wherever, come from in the first place? The answer will by now be obvious: from intelligence. On materialist principles, intelligence is not real but an epiphenomenon of underlying material processes. But if intelligence is real and has real causal powers, it can do more than merely redistribute information -- it can also create it.
Indeed, that is the defining property of intelligence, its ability to create information, especially information that finds needles in haystacks. This fact should be more obvious and convincing to us than any fact of the natural sciences since (1) we ourselves are intelligent beings who create information all the time through our thoughts and language and (2) the natural sciences themselves are logically downstream from our ability to create information (if we were not information creators, we could not formulate our scientific theories, much less search for those that are empirically adequate, and there would be no science). Materialist philosophy, however, has this backwards, making a materialist science primary and then defining our intelligence out of existence because materialism leaves no room for it. The saner course would be to leave no room for materialism.
I close with a quote from Descartes, who, his substance dualism notwithstanding, rightly understood that intelligence could never be reduced to brute blind matter acting mechanistically. The quote is from his Discourse on Method. As you read it, bear in mind that for the materialist everything is a machine, be it the materialists themselves, the evolutionary process, or the universe taken as a whole -- everything, for the materialist, is just brute blind matter acting mechanistically. Bear in mind, too, that conservation of information shows this materialist vision to be fundamentally incomplete, unable to account for the information that animates nature. Here is the quote:
Although machines can perform certain things as well as or perhaps better than any of us can do, they infallibly fall short in others, by which means we may discover that they did not act from knowledge, but only from the disposition of their organs. For while reason is a universal instrument which can serve for all contingencies, these organs have need of some special adaptation for every particular action. From this it follows that it is morally impossible that there should be sufficient diversity in any machine to allow it to act in all the events of life in the same way as our reason causes us to act.
Another problem with extending search to nature in general is that we tend to think of search as confined to human contexts. Humans search for keys, and humans search for uncharted lands. But, as it turns out, nature is also quite capable of search. Go to Google and search on the term "evolutionary search," and you'll get quite a few hits. Evolution, according to some theoretical biologists, such as Stuart Kauffman, may properly be conceived as a search (see his book Investigations). Kauffman is not an ID guy, so there's no human or human-like intelligence behind evolutionary search as far as he's concerned. Nonetheless, for Kauffman, nature, in powering the evolutionary process, is engaged in a search through biological configuration space, searching for and finding ever-increasing orders of biological complexity and diversity.
An Age of Search
Evolutionary search is not confined to biology but also takes place inside computers. The field of evolutionary computing (which includes genetic algorithms) falls broadly under that area of mathematics known as operations research, whose principal focus is mathematical optimization. Mathematical optimization is about finding solutions to problems where the solutions admit varying and measurable degrees of goodness (optimality). Evolutionary computing fits this mold, seeking items in a search space that achieve a certain level of fitness. These are the optimal solutions. (By the way, the irony of doing a Google "search" on the target phrase "evolutionary search," described in the previous paragraph, did not escape me. Google's entire business is predicated on performing optimal searches, where optimality is gauged in terms of the link structure of the web. We live in an age of search!)
If the possibilities connected with search now seem greater to you than they have in the past, extending beyond humans to computers and biology in general, they may still seem limited in that physics appears to know nothing of search. But is this true? The physical world is life-permitting -- its structure and laws allow (though they are far from necessitating) the existence of not just cellular life but also intelligent multicellular life. For the physical world to be life-permitting in this way, its laws and fundamental constants need to be configured in very precise ways. Moreover, it seems far from mandatory that those laws and constants had to take the precise form that they do. The universe itself, therefore, can be viewed as the solution to the problem of making life possible. But problem solving itself is a form of search, namely, finding the solution (among a range of candidates) to the problem.
Still, for many scientists, search fits uneasily in the natural sciences. Something unavoidably subjective and teleological seems involved in search. Search always involves a goal or objective, as well as criteria of success and failure (as judged by what or whom?) depending on whether and to what degree the objective has been met. Where does that objective, typically known as a target, come from other than from the minds of human inquirers? Are we, as pattern-seeking and pattern-inventing animals, simply imposing these targets/patterns on nature even though they have no independent, objective status?
This concern has merit, but it needs not to be overblown. If we don't presuppose a materialist metaphysics that makes mind, intelligence, and agency an emergent property of suitably organized matter, then it is an open question whether search and the teleology inherent in it are mere human constructions on the one hand, or, instead, realities embedded in nature on the other. What if nature is itself the product of mind and the patterns it exhibits reflect solutions to search problems formulated by such a mind?
Scientific inquiry that's free of prejudice and narrowly held metaphysical assumptions should, it seems, leave open both these possibilities. After all, the patterns we're talking about are not like finding a vague likeness of Santa Claus's beard in a cloud formation. Who, if they look hard enough, won't see Santa's beard? The fine-tuning of nature's laws and constants that permits life to exist at all is not like this. It is a remarkable pattern and may properly be regarded as the solution to a search problem as well as a fundamental feature of nature, or what philosophers would call a natural kind, and not merely a human construct. Whether an intelligence is responsible for the success of this search is a separate question. The standard materialist line in response to such cosmological fine-tuning is to invoke multiple universes and view the success of this search as a selection effect: most searches ended without a life-permitting universe, but we happened to get lucky and live in a universe hospitable to life.
In any case, it's possible to characterize search in a way that leaves the role of teleology and intelligence open without either presupposing them or deciding against them in advance. Mathematically speaking, search always occurs against a backdrop of possibilities (the search space), with the search being for a subset within this backdrop of possibilities (known as the target). Success and failure of search are then characterized in terms of a probability distribution over this backdrop of possibilities, the probability of success increasing to the degree that the probability of locating the target increases.
For example, consider all possible L-amino acid sequences joined by peptide bonds of length 100. This we can take as our reference class or backdrop of possibilities -- our search space. Within this class, consider those sequences that fold and thus might form a functioning protein. This, let us say, is the target. This target is not merely a human construct. Nature itself has identified this target as a precondition for life -- no living thing that we know can exist without proteins. Moreover, this target admits some probabilistic estimates. Beginning with the work of Robert Sauer, cassette mutagenesis and other experiments of this sort performed over the last three decades suggest that the target has probability no more than 1 in 10^60 (assuming a uniform probability distribution over all amino acid sequences in the reference class).
The mathematics characterizing search in this way is straightforward and general. Whether in specific situations a search so characterized also involves unavoidably subjective human elements or reflects objectively given realities embedded in nature can be argued independently of the mathematics. Such an argument speaks to the interpretation of the search, not to the search itself. Such an argument parallels controversies surrounding the interpretation of quantum mechanics: whether quantum mechanics is inherently a mind-based, observer-dependent theory; whether it can be developed independently of observers; whether it is properly construed as reflecting a deterministic, mind-independent, multiuniverse, etc. Quantum mechanics itself is a single, well-defined theory that admits several formulations, all of which are mathematically equivalent. Likewise, search as described here has a single, straightforward theoretical underpinning.
An Easter Egg Hunt, from the Scientific Vantage
One clarification is worth inserting here while we're still setting the stage for conservation of information. For most people, when it comes to search, the important thing is the outcome of the search. Take an Easter egg hunt. The children looking for Easter eggs are concerned with whether they find the eggs. From the scientific vantage, however, the important thing about search is not the particular outcomes but the probability distribution over the full range of possible outcomes in the search space (this parallels communication theory, in which what's of interest is not particular messages sent across a communication channel but the range of possible messages and their probability distribution). The problem with just looking at outcomes is that a search might get lucky and find the target even if the probabilities are against it.
Take an Easter egg hunt in which there's just one egg carefully hidden somewhere in a vast area. This is the target and blind search is highly unlikely to find it precisely because the search space is so vast. But there's still a positive probability of finding the egg even with blind search, and if the egg is discovered, then that's just how it is. It may be, because the egg's discovery is so improbable, that we might question whether the search was truly blind and therefore reject this (null) hypothesis. Maybe it was a guided search in which someone, with knowledge of the egg's whereabouts, told the seeker "warm, warmer, no colder, warmer, warmer, hot, hotter, you're burning up." Such guidance gives the seeker added information that, if the information is accurate, will help locate the egg with much higher probability than mere blind search -- this added information changes the probability distribution.
But again, the important issue, from a scientific vantage, is not how the search ended but the probability distribution under which the search was conducted. You don't have to be a scientist to appreciate this point. Suppose you've got a serious medical condition that requires treatment. Let's say there are two treatment options. Which option will you go with? Leaving cost and discomfort aside, you'll want the treatment with the better chance of success. This is the more effective treatment. Now, in particular circumstances, it may happen that the less effective treatment leads to a good outcome and the more effective treatment leads to a bad outcome. But that's after the fact. In deciding which treatment to take, you'll be a good scientist and go with the one that has the higher probability of success.
The Easter egg hunt example provides a little preview of conservation of information. Blind search, if the search space is too large and the number of Easter eggs is too small, is highly unlikely to successfully locate the eggs. A guided search, in which the seeker is given feedback about his search by being told when he's closer or farther from the egg, by contrast, promises to dramatically raise the probability of success of the search. The seeker is being given vital information bearing on the success of the search. But where did this information that gauges proximity of seeker to egg come from? Conservation of information claims that this information is itself as difficult to find as locating the egg by blind search, implying that the guided search is no better at finding the eggs than blind search once this information must be accounted for.
Conservation of Information in Evolutionary Biology
In the sequel, I will focus mainly on conservation of information as it applies to search in evolutionary biology (and by extension in evolutionary computing), trusting that once the case for conservation of information is made in biology, its scope and applicability for the rest of the natural sciences will be that much more readily accepted and acceptable. As it is, evolutionary biologists possessing the mathematical tools to understand search are typically happy to characterize evolution as a form of search. And even those with minimal knowledge of the relevant mathematics fall into this way of thinking.
Take Brown University's Kenneth Miller, a cell biologist whose knowledge of the relevant mathematics I don't know. Miller, in attempting to refute ID, regularly describes examples of experiments in which some biological structure is knocked out along with its function, and then, under selection pressure, a replacement structure is evolved that recovers the function. What makes these experiments significant for Miller is that they are readily replicable, which means that the same systems with the same knockouts will undergo the same recovery under the same suitable selection regime. In our characterization of search, we would say the search for structures that recover function in these knockout experiments achieves success with high probability.
Suppose, to be a bit more concrete, we imagine a bacterium capable of producing a particular enzyme that allows it to live off a given food source. Next, we disable that enzyme, not by removing it entirely but by, say, changing a DNA base in the coding region for this protein, thus changing an amino acid in the enzyme and thereby drastically lowering its catalytic activity in processing the food source. Granted, this example is a bit stylized, but it captures the type of experiment Miller regularly cites.
So, taking these modified bacteria, the experimenter now subjects them to a selection regime that starts them off on a food source for which they don't need the enzyme that's been disabled. But, over time, they get more and more of the food source for which the enzyme is required and less and less of other food sources for which they don't need it. Under such a selection regime, the bacterium must either evolve the capability of processing the food for which previously it needed the enzyme, presumably by mutating the damaged DNA that originally coded for the enzyme and thereby recovering the enzyme, or starve and die.
So where's the problem for evolution in all this? Granted, the selection regime here is a case of artificial selection -- the experimenter is carefully controlling the bacterial environment, deciding which bacteria get to live or die. But nature seems quite capable of doing something similar. Nylon, for instance, is a synthetic product invented by humans in 1935, and thus was absent from bacteria for most of their history. And yet, bacteria have evolved the ability to digest nylon by developing the enzyme nylonase. Yes, these bacteria are gaining new information, but they are gaining it from their environments, environments that, presumably, need not be subject to intelligent guidance. No experimenter, applying artificial selection, for instance, set out to produce nylonase.
To see that there remains a problem for evolution in all this, we need to look more closely at the connection between search and information and how these concepts figure into a precise formulation of conservation of information. Once we have done this, we'll return to the Miller-type examples of evolution to see why evolutionary processes do not, and indeed cannot, create the information needed by biological systems. Most biological configuration spaces are so large and the targets they present are so small that blind search (which ultimately, on materialist principles, reduces to the jostling of life's molecular constituents through forces of attraction and repulsion) is highly unlikely to succeed. As a consequence, some alternative search is required if the target is to stand a reasonable chance of being located. Evolutionary processes driven by natural selection constitute such an alternative search. Yes, they do a much better job than blind search. But at a cost -- an informational cost, a cost these processes have to pay but which they are incapable of earning on their own.
In the information-theory literature, information is usually characterized as the negative logarithm to the base two of a probability (or some logarithmic average of probabilities, often referred to as entropy). This has the effect of transforming probabilities into bits and of allowing them to be added (like money) rather than multiplied (like probabilities). Thus, a probability of one-eighths, which corresponds to tossing three heads in a row with a fair coin, corresponds to three bits, which is the negative logarithm to the base two of one-eighths. Such a logarithmic transformation of probabilities is useful in communication theory, where what gets moved across communication channels is bits rather than probabilities and the drain on bandwidth is determined additively in terms of number of bits. Yet, for the purposes of this "Made Simple" paper, we can characterize information, as it relates to search, solely in terms of probabilities, also cashing out conservation of information purely probabilistically.
Probabilities, treated as information used to facilitate search, can be thought of in financial terms as a cost -- an information cost. Think of it this way. Suppose there's some event you want to have happen. If it's certain to happen (i.e., has probability 1), then you own that event -- it costs you nothing to make it happen. But suppose instead its probability of occurring is less than 1, let's say some probability p. This probability then measures a cost to you of making the event happen. The more improbable the event (i.e., the smaller p), the greater the cost. Sometimes you can't increase the probability of making the event occur all the way to 1, which would make it certain. Instead, you may have to settle for increasing the probability to qwhere qis less than 1 but greater than p. That increase, however, must also be paid for. And in fact, we do pay to raise probabilities all the time. For instance, many students pay tuition costs to obtain a degree that will improve their prospects (i.e., probabilities) of landing a good, high-paying job.
A Fair Lottery
To illustrate this point more precisely, imagine that you are playing a lottery. Let's say it's fair, so that the government doesn't skim anything off the top (i.e., everything paid into the lottery gets paid out to the winner) and one ticket is sure to be the winner. Let's say a million lottery tickets have been purchased so far at one dollar apiece, exactly one of which is yours. Each lottery ticket therefore has the same probability of winning, so your lottery ticket has a one in a million chance of coming out on top (which is your present p value), entailing a loss of one dollar if you lose and nearly a million dollars if you win ($999,999 to be exact). Now let's say you really want to win this lottery -- for whatever reason you earnestly desire to hold the winning ticket in your hand. In that case, you can purchase additional tickets. By purchasing these, you increase your chance of winning the lottery. Let's say you purchase an additional million tickets at one dollar apiece. Doing so has now boosted your probability of winning the lottery from .000001 to .500001, or to about one-half.
Increasing the probability of winning the lottery has therefore incurred a cost. With a probability of roughly .5 of winning the lottery, you are now much more likely to gain approximately one million dollars. But it also cost you a million dollars to increase your probability of winning. As a result, your expected winnings, computed in standard statistical terms as the probability of losing multiplied by what you would lose subtracted from the probability of winning multiplied by what you would win, equals zero. Moreover, because this is a fair lottery, it equals zero when you only had one ticket purchased and it equals zero when you had an additional million tickets purchased. Thus, in statistical terms, investing more in this lottery has gained you nothing.
Conservation of information is like this. Not exactly like this because conservation of information focuses on search whereas the previous example focused on the economics of expected utility. But just as increasing your chances of winning a lottery by buying more tickets offers no real gain (it is not a long-term strategy for increasing the money in your pocket), so conservation of information says that increasing the probability of successful search requires additional informational resources that, once the cost of locating them is factored in, do nothing to make the original search easier.
To see how this works, let's consider a toy problem. Imagine that your search space consists of only six items, labeled 1 through 6. Let's say your target is item 6 and that you're going to search this space by rolling a fair die once. If it lands on 6, your search is successful; otherwise, it's unsuccessful. So your probability of success is 1/6. Now let's say you want to increase the probability of success to 1/2. You therefore find a machine that flips a fair coin and delivers item 6 to you if it lands heads and delivers some other item in the search space if it land tails. What a great machine, you think. It significantly boosts the probability of obtaining item 6 (from 1/6 to 1/2).
But then a troubling question crosses your mind: Where did this machine that raises your probability of success come from? A machine that tosses a fair coin and that delivers item 6 if the coin lands heads and some other item in the search space if it lands tails is easily reconfigured. It can just as easily deliver item 5 if it lands heads and some other item if it lands tails. Likewise for all the remaining items in the search space: a machine such as the one described can privilege any one of the six items in the search space, delivering it with probability 1/2 at the expense of the others. So how did you get the machine that privileges item 6? Well, you had to search among all those machines that flip coins and with probability 1/2 deliver a given item, selecting the one that delivers item 6 when it lands heads. And what's the probability of finding such a machine?
To keep things simple, let's imagine that our machine delivers item 6 with probability 1/2 and each of items 1 through 5 with equal probability, that is, with probability 1/10. Accordingly, this machine is one of six possible machines configured in essentially the same way. There's another machine that flips a coin, delivers item 1 from the original search space if it lands heads, and delivers any one of 2 through 6 with probability 1/10 each if the coin lands tails. And so on. Thus, of these six machines, one delivers item 6 with probability 1/2 and the remaining five machines deliver item 6 with probability 1/10. Since there are six machines, only one of which delivers item 6 (our target) with high probability, and since only labels and no intrinsic property distinguishes one machine from any other in this setup (the machines are, as mathematicians would say, isomorphic), the principle of indifference applies to these machines and prescribes that the probability of getting the machine that delivers item 6 with probability 1/2 is the same as that of getting any other machine, and is therefore 1/6.
But a probability of 1/6 to find a machine that delivers item 6 with probability 1/2 is no better than our original probability of 1/6 of finding the target simply by tossing a die. In fact, once we have this machine, we still have only a 50-50 chance of locating item 6. Finding this machine incurs a probability cost of 1/6, and once this cost is incurred we still have a probability cost of 1/2 of finding item 6. Since probability costs increase as probabilities decrease, we're actually worse off than we were at the start, where we simply had to roll a die that, with probability 1/6, locates item 6.
The probability of finding item 6 using this machine, once we factor in the probabilistic cost of securing the machine, therefore ends up being 1/6 x 1/2 = 1/12. So our attempt to increase the probability of finding item 6 by locating a more effective search for that item has actually backfired, making it in the end even more improbable that we'll find item 6. Conservation of information says that this is always a danger when we try to increase the probability of success of a search -- that the search, instead of becoming easier, remains as difficult as before or may even, as in this example, become more difficult once additional underlying information costs, associated with improving the search and often hidden, as in this case by finding a suitable machine, are factored in.
Why It Is Called "Conservation" of Information
The reason it's called "conservation" of information is that the best we can do is break even, rendering the search no more difficult than before. In that case, information is actually conserved. Yet often, as in this example, we may actually do worse by trying to improve the probability of a successful search. Thus, we may introduce an alternative search that seems to improve on the original search but that, once the costs of obtaining this search are themselves factored in, in fact exacerbate the original search problem.
In referring to ease and difficulty of search, I'm not being mathematically imprecise. Ease and difficulty, characterized mathematically, are always complexity-theoretic notions presupposing an underlying complexity measure. In this case, complexity is cashed out probabilistically, so the complexity measure is a probability measure, with searches becoming easier to the degree that successfully locating targets is more probable, and searches becoming more difficult to the degree that successfully locating targets is more improbable. Accordingly, it also makes sense to talk about the cost of a search, with the cost going up the more difficult the search, and the cost going down the easier the search.
In all these discussions of conservation of information, there's always a more difficult search that gets displaced by an easier search, but once the difficulty of finding the easier search (difficulty being understood probabilistically) is factored in, there's no gain, and in fact the total cost may have gone up. In other words, the actual probability of locating the target with the easier search is no greater, and may actually be less, than the probability of locating the target with the more difficult search once the probability of locating the easier search is factored in. All of this admits a precise mathematical formulation. Inherent in such a formulation is treating search itself as subject to search. If this sounds self-referential, it is. But it also makes good sense.
To see this, consider a treasure hunt. Imagine searching for a treasure chest buried on a large island. We consider two searches, a more difficult one and an easier one. The more difficult search, in this case, is a blind search in which, without any knowledge of where the treasure is buried, you randomly meander about the island, digging here or there for the treasure. The easier search, by contrast, is to have a treasure map in which "x marks the spot" where the treasure is located, and where you simply follow the map to the treasure.
But where did you get that treasure map? Mapmakers have made lots of maps of that island, and for every map that accurately marks the treasure's location, there are many many others that incorrectly mark its location. Indeed, for any place on the island, there's a map that marks it with an "x." So how do you find your way among all these maps to one that correctly marks the treasure's location? Evidently, the search for the treasure has been displaced to a search for a map that locates the treasure. Each map corresponds to a search, and locating the right map corresponds to a search for a search (abbreviated, in the conservation of information literature, as S4S).
Conservation of information, in this example, says that the probability of locating the treasure by first searching for a treasure map that accurately identifies the treasure's location is no greater, and may be less, than the probability of locating the treasure simply by blind search. This implies that the easier search (i.e., the search with treasure map in hand), once the cost of finding it is factored in, has not made the actual overall search any easier. In general, conservation of information says that when a more difficult search gets displaced by an easier search, the probability of finding the target by first finding the easier search and then using the easier search to find the target is no greater, and often is less, than the probability of finding the target directly with the more difficult search.
In the Spirit of "No Free Lunch"
Anybody familiar with the No Free Lunch (NFL) theorems will immediately see that conservation of information is very much in the same spirit. The upshot of the NFL theorems is that no evolutionary search outperforms blind search once the information inherent in fitness (i.e., the fitness landscape) is factored out. NFL is a great equalizer. It says that all searches are essentially equivalent to blind search when looked at not from the vantage of finding a particular target but when averaged across the different possible targets that might be searched.
If NFL tends toward egalitarianism by arguing that no search is, in itself, better than blind search when the target is left unspecified, conservation of information tends toward elitism by making as its starting point that some searches are indeed better than others (especially blind search) at locating particular targets. Yet, conservation of information quickly adds that the elite status of such searches is not due to any inherent merit of the search (in line with NFL) but to information that the search is employing to boost its performance.
Some searches do better, indeed much better, than blind search, and when they do, it is because they are making use of target-specific information. Conservation of information calculates the information cost of this performance increase and shows how it must be counterbalanced by a loss in search performance elsewhere (specifically, by needing to search for the information that boosts search performance) so that global performance in locating the target is not improved and may in fact diminish.
Conservation of information, in focusing on search for the information needed to boost search performance, suggests a relational ontology between search and objects being searched. In a relational ontology, things are real not as isolated entities but in virtue of their relation to other things. In the relational ontology between search and the objects being searched, each finds its existence in the other. Our natural tendency is to think of objects as real and search for those objects as less real in the sense that search depends on the objects being searched but objects can exist independently of search. Yet objects never come to us in themselves but as patterned reflections of our background knowledge, and thus as a target of search.
Any scene, indeed any input to our senses, reaches our consciousness only by aspects becoming salient, and this happens because certain patterns in our background knowledge are matched to the exclusion of others. In an extension of George Berkeley's "to be is to be perceived," conservation of information suggests that "to be perceived is to be an object of search." By transitivity of reasoning, it would then follow that to be is to be an object of search. And since search is always search for an object, search and the object of search become, in this way of thinking, mutually ontologizing, giving existence to each other. Conservation of information then adds to this by saying that search can itself be an object of search.
Most relational ontologies are formulated in terms of causal accessibility, so that what renders one thing real is its causal accessibility to another thing. But since search is properly understood probabilistically, the form of accessibility relevant to a relational ontology grounded in search is probabilistic. Probabilistic rather than causal accessibility grounds the relational ontology of search. Think of a needle in a haystack, only imagine the needle is the size of an electron and the haystack is the size of the known physical universe. Targets with so small a probability of being found by blind or random search are common in biology. Biological configuration spaces of possible genes and proteins, for instance, are immense, and finding a functional gene or protein in such spaces via blind search can be vastly more improbable than finding an arbitrary electron in the known physical universe.
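Rough numbers convey the scale (the figures are order-of-magnitude commonplaces, not values drawn from any particular theorem). A protein only 100 residues long, built from 20 amino acids, sits in a space of

$$
20^{100} = 10^{100 \log_{10} 20} \approx 10^{130}
$$

possible sequences, whereas the observable universe contains roughly $10^{80}$ electrons. Finding one functional sequence in such a space by blind search is accordingly far less probable than blindly picking out one designated electron from the entire universe.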
Why the Multiverse Is Incoherent
Given needles this tiny in haystacks this large, blind search is effectively incapable of finding the needle. Success, instead, requires a search that vastly increases the probability of finding the needle. But where does such a search come from? And in what sense does the needle exist apart from such a search? Without a search that renders finding the needle probable, the needle might just as well not exist. And indeed, we would in all probability not know that it exists except for a search that renders finding it probable. This, by the way, is why I regard the multiverse as incoherent: what renders the known physical universe knowable is that it is searchable. The multiverse, by contrast, is unsearchable. In a relational ontology that makes search as real as the objects searched, the multiverse is unreal.
These considerations are highly germane to evolutionary biology, which treats evolutionary search as a given, as something that does not call for explanation beyond the blind forces of nature. But insofar as evolutionary search renders aspects of a biological configuration space probabilistically accessible where previously, under blind search, they were probabilistically inaccessible, conservation of information says that evolutionary search achieves this increase in search performance at an informational cost. Accordingly, the evolutionary search, which improves on blind search, had to be found through a higher-order search (i.e., a search for a search, abbreviated S4S), which, once its own cost is taken into account, leaves the evolutionary search no more effective at finding the target than the original blind search.
Given this background discussion and motivation, we are now in a position to give a reasonably precise formulation of conservation of information, namely: raising the probability of success of a search does nothing to make attaining the target easier, and may in fact make it more difficult, once the informational costs involved in raising the probability of success are taken into account. Search is costly, and the cost must be paid in terms of information. Searches achieve success not by creating information but by taking advantage of existing information. The information that leads to successful search admits no bargains, only apparent bargains that must be paid in full elsewhere.
For a "Made Simple" paper on conservation of information, this is about as much as I want to say regarding a precise statement of conservation of information. Bob Marks and I have proved several technical conservation of information theorems (see the publications page at www.evoinfo.org). Each of these looks at some particular mathematical model of search and shows how raising the probability of success of a search by a factor of q/p (> 1) incurs an information cost not less than log(q/p), or, equivalently, a probability cost of not more than p/q. If we therefore start with a search having probability of success p and then raise it to q, the actual probability of finding the target is not q but instead is less than or equal to q multiplied by p/q, or, therefore, less than or equal to p, which is just the original search difficulty. Accordingly, raising the probability of success of a search contributes nothing toward finding the target once the information cost of raising the probability is taken into account.
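To put illustrative numbers on this (the values are chosen purely for arithmetic clarity, not taken from any particular theorem): suppose blind search succeeds with probability $p = 10^{-60}$ and an information-assisted search succeeds with probability $q = 10^{-10}$. Taking logarithms base 2 so that the cost reads in bits,

$$
\text{information cost} \;\ge\; \log_2\frac{q}{p} \;=\; 50\log_2 10 \;\approx\; 166 \text{ bits},
$$

and the probability of finding that assisted search is at most $p/q = 10^{-50}$. The overall probability of success is therefore at most $q \times p/q = p = 10^{-60}$, precisely the original search difficulty.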
Conservation of information, however, is not just a theorem or family of theorems but also a general principle or law (recall Medawar's "Law of Conservation of Information"). Once enough such theorems have been proved and once their applicability to a wide range of search problems has been repeatedly demonstrated (the Evolutionary Informatics Lab has, for instance, shown how such widely touted evolutionary algorithms as AVIDA, ev, Tierra, and Dawkins's WEASEL all fail to create but instead merely redistribute information), conservation of information comes to be seen not as a narrow, isolated result but as a fundamental principle or law applicable to search in general. This is how we take conservation of information.
Instead of elaborating the underlying theoretical apparatus for conservation of information, which is solid and has appeared now in a number of peer-reviewed articles in the engineering and mathematics literature (see the publications page at www.evoinfo.org; it's worth noting that none of the critiques of this work has appeared in the peer-reviewed scientific/engineering literature, although a few have appeared in the philosophy of science literature, such as the journal Biology and Philosophy; most of the critiques are Internet diatribes), I want next to illustrate conservation of information as it applies to one of the key examples touted by evolutionists as demonstrating the information-generating powers of evolutionary processes. Once I've done that, I want to consider what light conservation of information casts on evolution generally.
An Economist Is Stranded on an Island
To set the stage, consider an old joke about an economist and several other scientists who are stranded on an island and discover a can of beans. Hungry, they want to open it. Each looks to his area of expertise to open the can. The physicist calculates the trajectory of a projectile that would open the can. The chemist calculates the heat from a fire needed to burst the can. And so on. Each comes up with a concrete way to open the can given the resources on the island. Except the economist. The economist's method of opening the can is the joke's punch line: suppose a can opener. There is, of course, no can opener on the island.
The joke implies that economists are notorious for making assumptions they are not entitled to make. I don't know enough about economists to know whether this is true, but I do know that it is true of many evolutionary biologists. The humor in the economist's proposed solution of merely positing a can opener, besides its jab at the field of economics, lies in the bizarre image of a can opener coming to the rescue of starving castaways without any warrant whatsoever for its existence. The economist would simply have the can opener magically materialize. The can opener is, essentially, a deus ex machina.
Interestingly, the field of evolutionary biology is filled with deus ex machinas (yes, I've taken Latin and know that this is not the proper plural of deus ex machina, which is dei ex machinis; but this is a "made simple" paper meant for the unwashed masses, of which I'm a card-carrying member). Only the evolutionary biologist is a bit more devious about employing, or should I say deploying, deus ex machinas than the economist. Imagine our economist counseling someone who's having difficulty repaying a juice loan to organized crime. In line with the advice he gave on the island, our economist friend might give the following counsel: suppose $10,000 in cash.
$10,000 might indeed pay the juice loan, but that supposition seems a bit crude. An evolutionary biologist, to make his advice appear more plausible, would add a layer of complexity to it: suppose a key to a safety deposit box with $10,000 cash inside it. Such a key is just as much a deus ex machina as the $10,000 in cash. But evolutionary biology has long since gained mastery in deploying such devices as well as gaining the right to call their deployment "science."
I wish I were merely being facetious, but there's more truth here than meets the eye. Consider Richard Dawkins's well-known METHINKS IT IS LIKE A WEASEL example (from his 1986 book The Blind Watchmaker), an example endlessly repeated and elaborated by biologists trying to make evolution seem plausible, the most notable recent rendition being by RNA-world researcher Michael Yarus in his 2010 book Life from an RNA World (Yarus's target phrase, unlike Dawkins's, which is drawn from Shakespeare's Hamlet, is Theodosius Dobzhansky's famous dictum NOTHING IN BIOLOGY MAKES SENSE EXCEPT IN THE LIGHT OF EVOLUTION).
A historian or literature person, confronted with METHINKS IT IS LIKE A WEASEL, would be within his rights to say, suppose that there was a writer named William Shakespeare who wrote it. And since the person and work of Shakespeare have been controverted (was he really a she? did he exist at all? etc.), this supposition is not without content and merit. Indeed, historians and literature people make such suppositions all the time, and doing so is part of what they get paid for. Are the Homeric poems the result principally of a single poet, Homer, or an elaboration by a tradition of bards? Did Moses write the Pentateuch or is it the composite of several textual traditions, as in the documentary hypothesis? Did Jesus really exist? (Dawkins and his fellow atheists seriously question whether Jesus was an actual figure of history; cf. the film The God Who Wasn't There).
For the target phrase METHINKS IT IS LIKE A WEASEL, Dawkins bypasses the Shakespeare hypothesis -- that would be too obvious and too intelligent-design friendly. Instead of positing Shakespeare, who would be an intelligence or designer responsible for the text in question (designers are a no-go in conventional evolutionary theory), Dawkins asks his readers to suppose an evolutionary algorithm that evolves the target phrase. But such an evolutionary algorithm privileges the target phrase by adapting the fitness landscape so that it assigns greater fitness to phrases that have more corresponding letters in common with the target.
And where did that fitness landscape come from? Such a landscape potentially exists for any phrase whatsoever, and not just for METHINKS IT IS LIKE A WEASEL. Dawkins's evolutionary algorithm could therefore have evolved in any direction, and the only reason it evolved to METHINKS IT IS LIKE A WEASEL is that he carefully selected the fitness landscape to give the desired result. Dawkins therefore got rid of Shakespeare as the author of METHINKS IT IS LIKE A WEASEL, only to reintroduce him as the (co)author of the fitness landscape that facilitates the evolution of METHINKS IT IS LIKE A WEASEL.
The bogusness of this example, with its sleight-of-hand misdirection, has been discussed ad nauseam by me and my colleagues in the ID community. We've spent so much time and ink on this example not because of its intrinsic merit, but because the evolutionary community itself remains so wedded to it and endlessly repeats its underlying fallacy in ever more convoluted guises (AVIDA, Tierra, ev, etc.). For a careful deconstruction of Dawkins's WEASEL, providing a precise simulation under user control, see the "Weasel Ware" project on the Evolutionary Informatics website: www.evoinfo.org/weasel.
How does conservation of information apply to this example? Straightforwardly. Obtaining METHINKS IT IS LIKE A WEASEL by blind search (e.g., by randomly throwing down Scrabble pieces in a line) is extremely improbable. So Dawkins proposes an evolutionary algorithm, his WEASEL program, to obtain this sequence with higher probability. Yes, this algorithm does a much better job, with much higher probability, of locating the target. But at what cost? At an even greater improbability cost than merely locating the target sequence by blind search.
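Here is a minimal sketch of a WEASEL-style program (the population size and mutation rate are my own illustrative choices). The thing to notice is that the target phrase is written directly into the fitness function:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(phrase):
    # The target-laden landscape: score is letter-by-letter
    # agreement with the predefined goal phrase.
    return sum(a == b for a, b in zip(phrase, TARGET))

def mutate(phrase, rate=0.05):
    # Each character independently has a small chance of being replaced.
    return "".join(random.choice(ALPHABET) if random.random() < rate else ch
                   for ch in phrase)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while parent != TARGET:
    offspring = [mutate(parent) for _ in range(100)]
    parent = max(offspring + [parent], key=fitness)  # cumulative selection
    generations += 1
print(generations)  # typically converges within a few hundred generations
```

Blind search over 28 characters drawn from a 27-letter alphabet would instead face odds of about 1 in $27^{28} \approx 10^{40}$ per attempt. The program's dramatic speed-up comes entirely from fitness, which consults the target at every step; that is exactly the smuggled-in information at issue.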
Dawkins completely sidesteps this question of information cost. Forswearing any critical examination of the origin of the information that makes his simulation work, he attempts instead, by rhetorical tricks, simply to induce in his readers a stupefied wonder at the power of evolution: "Gee, isn't it amazing how powerful evolutionary processes are, given that they can produce sentences like METHINKS IT IS LIKE A WEASEL, which ordinarily require human intelligence." But Dawkins is doing nothing more than advising our hapless borrower with the juice loan to suppose a key to a safety deposit box with the money needed to pay it off. Whence the key? Likewise, whence the fitness landscape that rendered the evolution of METHINKS IT IS LIKE A WEASEL probable? In terms of conservation of information, the necessary information was not internally created but merely smuggled in, in this case by Dawkins himself.
An Email Exchange with Richard Dawkins
Over a decade ago, I corresponded with Dawkins about his WEASEL computer simulation. In an email to me dated May 5, 2000, he responded to my criticism of the teleology hidden in that simulation. Note that he does not respond to the challenge of conservation of information directly, nor had I developed this idea with sufficient clarity at the time to use it in refutation. More on this shortly. Here's what he wrote, exactly as he wrote it:
The point about any phrase being equally eligible to be a target is covered on page 7 [of The Blind Watchmaker]: "Any old jumbled collection of parts is unique and, WITH HINDSIGHT, is as improbable as any other . . ." et seq.
More specifically, the point you make about the Weasel, is admitted, without fuss, on page 50: "Although the monkey/Shakespeare model is useful for explaining the distinction between single-step selection and cumulative selection, it is misleading in important ways. One of these is that, in each generation of selective 'breeding', the mutant 'progeny' phrases were judged according to the criterion of resemblance to a DISTANT IDEAL target ... Life isn't like that."
In real life of course, the criterion for optimisation is not an arbitrarily chosen distant target but SURVIVAL. It's as simple as that. This is non-arbitrary. See bottom of page 8 to top of page 9. And it's also a smooth gradient, not a sudden leap from a flat plain in the phase space. Or rather it must be a smooth gradient in all those cases where evolution has actually happened. Maybe there are theoretical optima which cannot be reached because the climb is too precipitous.
The Weasel model, like any model, was supposed to make one point only, not be a complete replica of the real thing. I invented it purely and simply to counter creationists who had naively assumed that the phase space was totally flat except for one vertical peak (what I later represented as the precipitous cliff of Mount Improbable). The Weasel model is good for refuting this point, but it is misleading if it is taken to be a complete model of Darwinism. That is exactly why I put in the bit on page 50.
Perhaps you should look at the work of Spiegelman and others on evolution of RNA molecules in an RNA replicase environment. They have found that, repeatedly, if you 'seed' such a solution with an RNA molecule, it will converge on a particular size and form of 'optimal' replicator, sometimes called Spiegelman's minivariant. Maynard Smith gives a good brief account of it in his The Problems of Biology (see Spiegelman in the index). Orgel extended the work, showing that different chemical environments select for different RNA molecules.
The theory is so beautiful, so powerful. Why are you people so wilfully blind to its simple elegance? Why do you hanker after "design" when surely you must see that it doesn't explain anything? Now THAT's what I call a regress. You are a fine one to talk about IMPORTING complexity. "Design" is the biggest import one could possibly imagine.
Dawkins's email raises a number of interesting questions that, in the years since, have received extensive discussion among the various parties debating intelligent design. The who-designed-the-designer regress, whether a designing intelligence must itself be complex in the same way that biological systems are complex, the conditions under which evolution is complexity-increasing vs. complexity-decreasing, the evolutionary significance of Spiegelman's minivariants, and how the geometry of the fitness landscape facilitates or undercuts evolution have all been treated at length in the design literature and won't be rehearsed here (for more on these questions, see my books No Free Lunch and The Design Revolution as well as Michael Behe's The Edge of Evolution).
"Just One Word: Plastics"
Where I want to focus is Dawkins's one-word answer to the charge that his WEASEL simulation incorporates an unwarranted teleology -- unwarranted by the Darwinian understanding of evolution for which his Blind Watchmaker is an apologetic. The key line in the above quote is, "In real life of course, the criterion for optimisation is not an arbitrarily chosen distant target but SURVIVAL." Survival is certainly a necessary condition for life to evolve. If you're not surviving, you're dead, and if you're dead, you're not evolving -- period. But to call "survival," writ large, a criterion for optimization is ludicrous. As I read this, I have images of Dustin Hoffman in The Graduate being taken aside at a party by an executive about to reveal the secret of success: PLASTICS (you can watch the clip by clicking here). Among the most simplistic one-word answers ever given, Dawkins's ranks right up there.
But perhaps I'm reading Dawkins uncharitably. Presumably, what he really means is differential survival and reproduction as governed by natural selection and random variation. Okay, I'm willing to buy that this is what he means. But even on this more charitable reading, his characterization of evolution is misleading and wrong. Ken Miller elaborates on this more charitable reading in his recent book Only a Theory. There he asks what's needed to drive the increase in biological information over the course of evolution. His answer? "Just three things: selection, replication, and mutation... Where the information 'comes from' is, in fact, from the selective process itself."
It's easy to see that Miller is blowing smoke even without the benefits of modern information theory. All that's required is to understand some straightforward logic, uncovered in Darwin's day, about the nature of scientific explanation in teasing apart possible causes. Indeed, biology's reception of Darwinism might have been far less favorable had scientists paid better attention to Darwin's contemporary John Stuart Mill. In 1843, sixteen years before the publication of Darwin's Origin of Species, Mill published the first edition of his System of Logic (which by the 1880s had gone through eight editions). In that work Mill lays out various methods of induction. The one that interests us here is his method of difference. In his System of Logic, Mill described this method as follows:
If an instance in which the phenomenon under investigation occurs, and an instance in which it does not occur, have every circumstance in common save one, that one occurring only in the former; the circumstance in which alone the two instances differ is the effect, or the cause, or an indispensable part of the cause, of the phenomenon.
Essentially, this method says that to discover which of a set of circumstances is responsible for an observed difference in outcomes requires finding a difference in the circumstances. An immediate corollary is that common circumstances cannot explain a difference in outcomes. Thus, if one person is sober and another drunk, and if both ate chips, salsa, and popcorn, this fact, common to both, does not, and indeed cannot, explain the difference. Rather, the difference is explained by one abstaining from alcohol and the other drinking too much. Mill's method of difference, so widely used in everyday life as well as in science, is crucially relevant to evolutionary biology. In fact, it helps bring some sense of proportion and reality to the inflated claims so frequently made on behalf of Darwinian processes.
Case in point: Miller's overselling of Darwinian evolution by claiming that "what's needed to drive" increases in biological information is "just three things: selection, replication, and mutation." Mill's method of difference gives the lie to Miller's claim. It's easy to write computer simulations that feature selection, replication, and mutation (or SURVIVAL writ large, or differential survival and reproduction, or any such reduction of evolution to Darwinian principles) -- and that go absolutely nowhere. Taken together, selection, replication, and mutation are not a magic bullet, and need not solve any interesting problems or produce any salient patterns. That said, evolutionary computation does get successfully employed in the field of optimization, so it is possible to write computer simulations that feature selection, replication, and mutation and that do go somewhere, solving interesting problems or producing salient patterns. But precisely because selection, replication, and mutation are common to all such simulations, they cannot, as Mill's method underscores, account for the difference.
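Mill's method can even be run as an experiment (the sketch below is my own toy construction, in the same vein as the WEASEL sketch above): hold selection, replication, and mutation fixed across two runs and vary only the fitness landscape.

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
TARGET = "METHINKS IT IS LIKE A WEASEL"

def evolve(fitness, generations=500):
    # Selection, replication, and mutation: identical in every run.
    # Only the fitness function passed in differs.
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    for _ in range(generations):
        offspring = [
            "".join(random.choice(ALPHABET) if random.random() < 0.05 else ch
                    for ch in parent)
            for _ in range(100)
        ]
        parent = max(offspring + [parent], key=fitness)
    return parent

flat = lambda phrase: 0  # a flat landscape: every phrase equally fit
tuned = lambda phrase: sum(a == b for a, b in zip(phrase, TARGET))

print(evolve(flat))   # gibberish: the loop drifts and goes nowhere
print(evolve(tuned))  # at or near METHINKS IT IS LIKE A WEASEL
```

Since the Darwinian triad is common to both runs, Mill's method locates the difference where it belongs: in the fitness landscape, not in selection, replication, or mutation as such.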
One Boeing engineer used to call himself a "penalty-function artist." A penalty function is just another term for fitness landscape (though the numbers are reversed -- the higher the penalty, the lower the fitness). Coming up with the right penalty functions enabled this person to solve his engineering problems. Most such penalty functions, however, are completely useless. Moreover, all such functions operate within the context of an evolutionary computing environment that features Miller's triad of selection, replication, and mutation. So what makes the difference? It's that the engineer, with knowledge of the problem he's trying to solve, carefully adapts the penalty function to the problem and thereby raises the probability of successfully finding a solution. He's not just choosing his penalty functions willy-nilly. If he did, he wouldn't be working at Boeing. He's an artist, and his artistry (intelligent design) consists in being able to find the penalty functions that solve his problems.
I've corresponded with both Miller and Dawkins since 2000. Miller and I have sparred on a number of occasions in public debate (as recently as June 2012, click here). Dawkins refuses all such encounters. Regardless, we are familiar with each other's work, and yet I've never been able to get from either of them a simple admission that the logic in Mill's method of difference is valid and that it applies to evolutionary theory, leaving biology's information problem unresolved even after the Darwinian axioms of selection, replication, and variation are invoked.
John Stuart Mill's Inconvenient Truth
Instead, Miller remains an orthodox Darwinist, and Dawkins goes even further, embracing a universal Darwinism that sees Darwinian evolution as the only conceivable scientific explanation of life's diversification in natural history. As he wrote in The Blind Watchmaker and continues to believe:
My argument will be that Darwinism is the only known theory that is in principle capable of explaining certain aspects of life. If I am right it means that, even if there were no actual evidence in favor of the Darwinian theory (there is, of course) we should still be justified in preferring it over all rival theories.
Mill's method of difference is an inconvenient truth for Dawkins and Miller, but it's a truth that must be faced. For his willingness to face this truth, I respect Stuart Kauffman infinitely more than either Miller or Dawkins. Miller and Dawkins are avid Darwinists committed to keeping the world safe for their patron saint. Kauffman is a free spirit, willing to admit problems where they arise. Kauffman at least sees that there is a problem in claiming that the Darwinian mechanism can generate biological information, even if his own self-organizational approach is far from resolving it. As Kauffman writes in Investigations:
If mutation, recombination, and selection only work well on certain kinds of fitness landscapes, yet most organisms are sexual, and hence use recombination, and all organisms use mutation as a search mechanism, where did these well-wrought fitness landscapes come from, such that evolution manages to produce the fancy stuff around us?
According to Kauffman, "No one knows."
Kauffman's observation here is entirely in keeping with conservation of information. Indeed, he offers this observation in the context of discussing the No Free Lunch theorems, of which conservation of information is a logical extension. The fitness landscape supplies the evolutionary process with information. Only finely tuned fitness landscapes, ones that are sufficiently smooth, that don't strand the search on isolated local optima, and that, above all, reward ever-increasing complexity in biological structure and function, are suitable for driving a full-fledged evolutionary process. So where do such fitness landscapes come from? Absent an extrinsic intelligence, the only answer would seem to be the environment.
Just as I have heard SURVIVAL as a one-word resolution to the problem of generating biological information, so also have I heard ENVIRONMENT. Ernan McMullin, for instance, made this very point to me over dinner at the University of Chicago in 1999, intoning this word ("environment") as though it were the solution to all that ails evolution. Okay, so the environment supplies the information needed to drive biological evolution. But where did the environment get that information? From itself? The problem with such an answer is this: conservation of information entails that, without added information, biology's information problem remains constant (breaks even) or intensifies (gets worse) the further back in time we trace it.
The whole magic of evolution is that it's supposed to explain subsequent complexity in terms of prior simplicity, but conservation of information says that there never was a prior state of primordial simplicity -- the information, absent external input, had to be there from the start. It is no feat of evolutionary theorizing to explain how cavefish lost the use of their eyes after long periods of being deprived of light. Functioning eyes turning into functionless eye nubs is a devolution from complexity to simplicity. As a case of use-it-or-lose-it, it does not call for explanation. Evolution wins plaudits for purporting to explain how things like eyes that see can evolve in the first place from prior simpler structures that cannot see.
If the evolutionary process could indeed create such biological information, then evolution from simplicity to complexity would be unproblematic. But the evolutionary process as conceived by Darwin and promulgated by his successors is non-teleological. Accordingly, it cannot employ the activity of intelligence in any guise to increase biological information. But without intelligent input, conservation of information implies that as we regress biological information back in time, the amount of information to be accounted for never diminishes and may actually increase.
Explaining Walmart's Success by Invoking Interstate Highways
Given conservation of information and the absence of intelligent input, biological information with the complexity we see now must have always been present in the universe in some form or fashion, going back even as far as the Big Bang. But where in the Big Bang, with a heat and density that rule out any life form in the early history of the universe, is the information for life's subsequent emergence and development on planet Earth? Conservation of information says this information has to be there, in embryonic form, at the Big Bang and at every moment thereafter. So where is it? How is it represented? In the environment, you say? Invoking the environment as evolution's information source is empty talk, on the order of invoking the interstate highway system as the reason for Walmart's business success. There is some connection, to be sure, but neither provides real insight or explanation.
To see more clearly what's at stake here, imagine Scrabble pieces arranged in sequence to spell out meaningful sentences (such as METHINKS IT IS LIKE A WEASEL). Suppose a machine with suitable sensors, movable arms, and grips takes the Scrabble pieces out of a box and arranges them in this way. To say that the environment has arranged the Scrabble pieces to spell out meaningful sentences is, in this case, hardly illuminating. Yes, broadly speaking, the environment is arranging the pieces into meaningful sentences. But, more precisely, a robotic machine, presumably running a program with meaningful sentences suitably coded, is doing the arranging.
Merely invoking the environment, without further amplification, therefore explains nothing about the arrangement of Scrabble pieces into meaningful sentences. What exactly is it about the environment that accounts for the information conveyed in those arrangements of Scrabble pieces? And what about the environment accounts for the information conveyed in the organization of biological systems? That's the question that needs to be answered. Without an answer to this question, appeals to the environment are empty and merely cloak our ignorance of the true sources of biological information.
With a machine that arranges Scrabble pieces, we can try to get inside it and see what it does ("Oh, there's the code that spells out METHINKS IT IS LIKE A WEASEL"). With the actual environment for biological evolution, we can't, as it were, get under the hood of the car. We see natural forces such as wind, waves, erosion, lightning, Brownian motion, attraction, repulsion, bonding affinities, and the like. And we see slippery slopes on which one organism thrives and another founders. If such an environment were arranging Scrabble pieces in sequence, we would observe the pieces blown by wind or jostled by waves or levitated by magnets. And if, at the end of the day, we found Scrabble pieces spelling out coherent English sentences, such as METHINKS IT IS LIKE A WEASEL, we would be within our rights to infer that an intelligence had in some way co-opted the environment and inserted information, even though we have no clue how.
Such a role for the environment, as an inscrutable purveyor of information, is, however, unacceptable to mainstream evolutionary theorists. In their view, the way the environment inputs information into biological systems over the course of evolution is eminently scrutable. It happens, so they say, by a gradual accumulation of information as natural selection locks in on small advantages, each of which can arise by chance without intelligent input. But what's the evidence here?
This brings us back to the knock-out experiments that Ken Miller has repeatedly put forward to refute intelligent design, in which a structure responsible for a function is disabled and then, through selection pressure, it, or something close to it capable of performing the lost function, gets recovered. In all his examples, there is no extensive multi-step sequence of structural changes, each of which leads to a distinct functional advantage. Usually, it's just a single nucleotide base or amino acid change that's needed to recover function.
This is true even with the evolution of nylonase, mentioned earlier. Nylonase is not the result of an entirely new DNA sequence coding for that enzyme. Rather, it resulted from a frameshift in existing DNA, shifting over some genetic letters and thus producing the gene for nylonase. The origin of nylonase is thus akin to changing the meaning of "therapist" by inserting a space and getting "the rapist." For the details about the evolution of nylonase, see a piece I did in response to Miller at Uncommon Descent (click here).
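A toy string makes the frameshift idea vivid (the sequence below is arbitrary and biologically meaningless; the actual nylonase case concerns a frameshift in an existing bacterial gene):

```python
dna = "ATGGCCATTGTAATG"

def codons(seq, frame):
    # Read the sequence in triplets starting at the given offset.
    return [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]

print(codons(dna, 0))  # ['ATG', 'GCC', 'ATT', 'GTA', 'ATG']
print(codons(dna, 1))  # ['TGG', 'CCA', 'TTG', 'TAA']: every codon changes
```

Just as "therapist" becomes "the rapist" when a single space shifts the parsing, a one-base shift re-reads letters that were already present rather than composing new ones.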
The Two-Pronged Challenge of Intelligent Design
Intelligent design has always mounted a two-pronged challenge to conventional evolutionary theory. On the one hand, design proponents have challenged common ancestry. Discontinuities in the fossil record and in supposed molecular phylogenies have, for many of us (Michael Behe has tended to be the exception), made common ancestry seem far from compelling. Our reluctance here is not an allergic reaction but simply a question of evidence -- many of us in the ID community see the evidence for common ancestry as weak, especially when one leaves the lower taxonomic groupings and moves to the level of orders, classes, and, above all, phyla (as with the Cambrian explosion, in which all the major animal phyla appear suddenly, lacking evident precursors in the Precambrian rocks). And indeed, if common ancestry fails, so does conventional evolutionary theory.
On the other hand, design proponents have argued that even if common ancestry holds, the evidence of intelligence in biology is compelling. Conservation of information is part of that second-prong challenge to evolution. Evolutionary theorists like Miller and Dawkins think that if they can break down the problem of evolving a complex biological system into a sequence of baby-steps, each of which is manageable by blind search (e.g., point mutations of DNA) and each of which confers a functional advantage, then the evidence of design vanishes. But it doesn't. Regardless of the evolutionary story told, conservation of information shows that the information in the final product had to be there from the start.
It would actually be quite a remarkable property of nature if fitness across biological configuration space were so distributed that advantages could be accumulated gradually by a Darwinian process. Frankly, I don't see the evidence for this. The examples that Miller cites show some small increases in information associated with recovering and enhancing a single biological function, but hardly the massive ratcheting up of information in which structures and functions co-evolve and lead to striking instances of biological invention. The usual response to my skepticism is: give evolution more time. I'm happy to do that, but even if time allows evolution to proceed much more impressively, the challenge that conservation of information puts to evolution remains.
In the field of technological (as opposed to biological) evolution, revolutionary new inventions never result from gradual tinkering with existing technologies. Existing technologies may, to be sure, be co-opted for use in a revolutionary technology. Thus, when Alexander Graham Bell invented the telephone, he used existing technologies such as wires, electrical circuits, and diaphragms. But these were put together and adapted for a novel, and at the time unprecedented, use.
But what if technological evolution proceeded in the same way that, as we are told, biological evolution proceeds, with inventions useful to humans all being accessible by gradual tinkering from one or a few primordial inventions? One consequence would be that tinkerers who knew nothing about the way things worked but simply understood what it was to benefit from a function could become inventors on the order of Bell and Edison. More significantly, such a state of affairs would also indicate something very special about the nature of human invention, namely, that it was distributed continuously across technological configuration space. This would be remarkable. Granted, we don't see this. Instead, we see sharply disconnected islands of invention inaccessible to one another by mere gradual tinkering. But if such islands were all connected (by long and narrow isthmuses of function), it would suggest a deeper design of technological configuration space for the facilitation of human invention.
The same would be true of biological invention. If biological evolution proceeds by a gradual accrual of functional advantages, instead of finding itself deadlocked on isolated islands of function surrounded by vast seas of non-function, then the fitness landscape over biological configuration space has to be very special indeed (recall Stuart Kauffman's comments to that effect earlier in this piece). Conservation of information goes further and says that any information we see coming out of the evolutionary process was already there in this fitness landscape or in some other aspect of the environment or was inserted by an intervening intelligence. What conservation of information guarantees did not happen is that the evolutionary process created this information from scratch.
Some years back I had an interesting exchange with Simon Conway Morris about the place of teleology in evolution. According to him, the information that guides the evolutionary process is embedded in nature and is not reducible to the Darwinian mechanism of selection, replication, and mutation. He stated this forthrightly in an email to me dated February 20, 2003, anticipating his then forthcoming book Life's Solution. I quote this email rather than the book because it clarifies his position better than anything that I've read from him subsequently. Here's the quote from his email:
As it happens, I am not sure we are so far apart, at least in some respects. Both of us, I imagine, accept that we are part of God's good Creation, and that despite its diversity, by no means all things are possible. In my forthcoming book Life's Solution (CUP) I argue that hard-wired into the universe are such biological properties of intelligence. This implies a "navigation" by evolution across immense "hyperspaces" of biological alternatives, nearly all of which are maladaptive [N.B. -- this means the adaptive hyperspaces form a very low-probability target!]. These thin roads (or "worm-holes") of evolution define a deeper biological structure, the principal evidence for which is convergence (my old story). History and platonic archetypes, if you like, meet. That does seem to me to be importantly distinct from ID: my view of Creation is not only very rich (self-evidently), but has an underlying structure that allows evolution to act. Natural selection, after all, is only a mechanism; what we surely agree about is the nature of the end-products, even if we disagree as to how they came about. Clearly my view is consistent with a Christian world picture, but can never be taken as proof.
There's not much I disagree with here. My one beef with Conway Morris is that he's too hesitant about finding evidence (what he calls "proof") for teleology in the evolutionary process. I critique this hesitancy in my review of Life's Solution for Books & Culture, a review that came out the year after this email (click here for the review). Conway Morris's fault is that he does not follow his position through to its logical conclusion. He prefers to critique conventional evolutionary theory, with its tacit materialism, from the vantage of theology and metaphysics. Convergence points to a highly constrained evolutionary process that's consistent with divine design. Okay, but there's more.
If evolution is so tightly constrained and the Darwinian mechanism of natural selection is just that, a mechanism, albeit one that "navigates immense hyperspaces of biological alternatives" by confining itself to "thin roads of evolution defining a deeper biological structure," then, in the language of conservation of information, the conditions that allow evolution to act effectively in producing the complexity and diversity of life are but a tiny subset, and therefore a small-probability target, among all the conditions under which evolution might act. And how did nature find just those conditions? Nature has, in that case, embedded in it not just a generic evolutionary process employing selection, replication, and mutation, but one that is precisely tuned to produce the exquisite adaptations, or, dare I say, designs, that pervade biology.
Where Conway Morris merely finds consistency with his Christian worldview (tempered by a merger of Darwin and Plotinus), conservation of information shows that the evolutionary process has embedded in it rich sources of information that a thoroughgoing materialism cannot justify and has no right to expect. The best such a materialism can do is count it a happy accident that evolution acts effectively, producing ever increasing biological complexity and diversity, when most ways it might act would be ineffective, producing no life at all or ecosystems that are boring (a disproportion mirrored in the evolutionary computing literature, where most fitness landscapes are maladaptive).
The Lesson of Conservation of Information
The improbabilities associated with rendering evolution effective are therefore no more tractable than the improbabilities that face an evolutionary process dependent purely on blind search. This is the relevance of conservation of information for evolution: it shows that the vast improbabilities that evolution is supposed to mitigate in fact never do get mitigated. Yes, you can reach the top of Mount Improbable, but the tools that enable you to find a gradual ascent up the mountain are as improbably acquired as simply scaling it in one fell swoop. This is the lesson of conservation of information.
One final question remains, namely, what is the source of information in nature that allows targets to be successfully searched? If blind material forces can only redistribute existing information, then where does the information that allows for successful search, whether in biological evolution or in evolutionary computing or in cosmological fine-tuning or wherever, come from in the first place? The answer will by now be obvious: from intelligence. On materialist principles, intelligence is not real but an epiphenomenon of underlying material processes. But if intelligence is real and has real causal powers, it can do more than merely redistribute information -- it can also create it.
Indeed, that is the defining property of intelligence, its ability to create information, especially information that finds needles in haystacks. This fact should be more obvious and convincing to us than any fact of the natural sciences since (1) we ourselves are intelligent beings who create information all the time through our thoughts and language and (2) the natural sciences themselves are logically downstream from our ability to create information (if we were not information creators, we could not formulate our scientific theories, much less search for those that are empirically adequate, and there would be no science). Materialist philosophy, however, has this backwards, making a materialist science primary and then defining our intelligence out of existence because materialism leaves no room for it. The saner course would be to leave no room for materialism.
I close with a quote from Descartes, whose substance dualism notwithstanding, rightly understood that intelligence could never be reduced to brute blind matter acting mechanistically. The quote is from his Discourse on Method. As you read it, bear in mind that for the materialist, everything is a machine, be it themselves, the evolutionary process, or the universe taken as a whole. Everything, for the materialist, is just brute blind matter acting mechanistically. Additionally, as you read this, bear in mind that conservation of information shows that this materialist vision is fundamentally incomplete, unable to account for the information that animates nature. Here is the quote:
Although machines can perform certain things as well as or perhaps better than any of us can do, they infallibly fall short in others, by which means we may discover that they did not act from knowledge, but only from the disposition of their organs. For while reason is a universal instrument which can serve for all contingencies, these organs have need of some special adaptation for every particular action. From this it follows that it is morally impossible that there should be sufficient diversity in any machine to allow it to act in all the events of life in the same way as our reason causes us to act.