
Sunday 6 November 2016

On the no free lunch principle re: information.

Conservation of Information Made Simple
William A. Dembski August 28, 2012 3:59 PM 

In the 1970s, Doubleday published a series of books with the title "Made Simple." This series covered a variety of academic topics (Statistics Made Simple, Philosophy Made Simple, etc.). The 1980s saw the "For Dummies" series, which expanded the range of topics to include practical matters such as auto repair. The "For Dummies" series has since been replicated, notably by guides for "Complete Idiots." All books in these series attempt, with varying degrees of success, to break down complex subjects, helping students to learn a topic, especially when they've been stymied by more conventional approaches and textbooks. 

In this article, I'm going to follow the example of these books, laying out as simply and clearly as I can what conservation of information is and why it poses a challenge to conventional evolutionary thinking. I'll break this concept down so that it seems natural and straightforward. Right now, it's too easy for critics of intelligent design to say, "Oh, that conservation of information stuff is just mumbo-jumbo. It's part of the ID agenda to make a gullible public think there's some science backing ID when it's really all smoke and mirrors." Conservation of information is not a difficult concept, and once it is understood, it becomes clear that evolutionary processes cannot create the information required to power biological evolution.

Conservation of Information: A Brief History

Conservation of information is a term with a short history. Biologist Peter Medawar used it in the 1980s to refer to mathematical and computational systems that are limited to producing logical consequences from a given set of axioms or starting points, and thus can create no novel information (everything in the consequences is already implicit in the starting points). His use of the term is the earliest I know of, though the idea he captured with it is much older. Note that he called it the "Law of Conservation of Information" (see his The Limits of Science, 1984).

Computer scientist Tom English, in a 1996 paper, also used the term conservation of information, though he treated it as synonymous with the then recently proved No Free Lunch (NFL) results of Wolpert and Macready. In English's version of NFL, "the information an optimizer gains about unobserved values is ultimately due to its prior information of value distributions." As with Medawar's form of conservation of information, information for English is not created from scratch but rather redistributed from existing sources.

Conservation of information, as the idea is being developed and gaining currency in the intelligent design community, is principally the work of Bob Marks and myself, along with several of Bob's students at Baylor (see the publications page at www.evoinfo.org). Conservation of information, as we use the term, applies to search. Now search may seem like a fairly restricted topic. Unlike conservation of energy, which applies at all scales and dimensions of the universe, conservation of information, in focusing on search, may seem to have only limited physical significance. But in fact, conservation of information is deeply embedded in the fabric of nature, and the term does not overstate its importance.

Search is a very general phenomenon. The reason we don't typically think of search in broad terms applicable to nature generally is that we tend to think of it narrowly in terms of finding a particular predefined object. Thus our stock example of search is losing one's keys, with search then being the attempt to recover them. But we can also search for things that are not pre-given in this way. Sixteenth-century explorers were looking for new, uncharted lands. They knew when they found them that their search had been successful, but they didn't know exactly what they were looking for. U2 has a song titled "I Still Haven't Found What I'm Looking For." How will Bono know once he's found what he's looking for? Often we know that we've found it even though it's nothing like what we expected, and sometimes even violates our expectations.

Another problem with extending search to nature in general is that we tend to think of search as confined to human contexts. Humans search for keys, and humans search for uncharted lands. But, as it turns out, nature is also quite capable of search. Go to Google and search on the term "evolutionary search," and you'll get quite a few hits. Evolution, according to some theoretical biologists, such as Stuart Kauffman, may properly be conceived as a search (see his book Investigations). Kauffman is not an ID guy, so there's no human or human-like intelligence behind evolutionary search as far as he's concerned. Nonetheless, for Kauffman, nature, in powering the evolutionary process, is engaged in a search through biological configuration space, searching for and finding ever-increasing orders of biological complexity and diversity.

An Age of Search

Evolutionary search is not confined to biology but also takes place inside computers. The field of evolutionary computing (which includes genetic algorithms) falls broadly under that area of mathematics known as operations research, whose principal focus is mathematical optimization. Mathematical optimization is about finding solutions to problems where the solutions admit varying and measurable degrees of goodness (optimality). Evolutionary computing fits this mold, seeking items in a search space that achieve a certain level of fitness. These are the optimal solutions. (By the way, the irony of doing a Google "search" on the target phrase "evolutionary search," described in the previous paragraph, did not escape me. Google's entire business is predicated on performing optimal searches, where optimality is gauged in terms of the link structure of the web. We live in an age of search!)

If the possibilities connected with search now seem greater to you than they have in the past, extending beyond humans to computers and biology in general, they may still seem limited in that physics appears to know nothing of search. But is this true? The physical world is life-permitting -- its structure and laws allow (though they are far from necessitating) the existence of not just cellular life but also intelligent multicellular life. For the physical world to be life-permitting in this way, its laws and fundamental constants need to be configured in very precise ways. Moreover, it seems far from mandatory that those laws and constants had to take the precise form that they do. The universe itself, therefore, can be viewed as the solution to the problem of making life possible. But problem solving itself is a form of search, namely, finding the solution (among a range of candidates) to the problem.

Still, for many scientists, search fits uneasily in the natural sciences. Something unavoidably subjective and teleological seems involved in search. Search always involves a goal or objective, as well as criteria of success and failure (as judged by what or whom?) depending on whether and to what degree the objective has been met. Where does that objective, typically known as a target, come from other than from the minds of human inquirers? Are we, as pattern-seeking and pattern-inventing animals, simply imposing these targets/patterns on nature even though they have no independent, objective status?

This concern has merit, but it need not be overblown. If we don't presuppose a materialist metaphysics that makes mind, intelligence, and agency an emergent property of suitably organized matter, then it is an open question whether search and the teleology inherent in it are mere human constructions on the one hand, or, instead, realities embedded in nature on the other. What if nature is itself the product of mind and the patterns it exhibits reflect solutions to search problems formulated by such a mind?

Scientific inquiry that's free of prejudice and narrowly held metaphysical assumptions should, it seems, leave open both these possibilities. After all, the patterns we're talking about are not like finding a vague likeness of Santa Claus's beard in a cloud formation. Who, if they look hard enough, won't see Santa's beard? The fine-tuning of nature's laws and constants that permits life to exist at all is not like this. It is a remarkable pattern and may properly be regarded as the solution to a search problem as well as a fundamental feature of nature, or what philosophers would call a natural kind, and not merely a human construct. Whether an intelligence is responsible for the success of this search is a separate question. The standard materialist line in response to such cosmological fine-tuning is to invoke multiple universes and view the success of this search as a selection effect: most searches ended without a life-permitting universe, but we happened to get lucky and live in a universe hospitable to life.

In any case, it's possible to characterize search in a way that leaves the role of teleology and intelligence open without either presupposing them or deciding against them in advance. Mathematically speaking, search always occurs against a backdrop of possibilities (the search space), with the search being for a subset within this backdrop of possibilities (known as the target). Success and failure of search are then characterized in terms of a probability distribution over this backdrop of possibilities, the probability of success increasing to the degree that the probability of locating the target increases.

For example, consider all possible sequences of 100 L-amino acids joined by peptide bonds. This we can take as our reference class or backdrop of possibilities -- our search space. Within this class, consider those sequences that fold and thus might form a functioning protein. This, let us say, is the target. This target is not merely a human construct. Nature itself has identified this target as a precondition for life -- no living thing that we know can exist without proteins. Moreover, this target admits some probabilistic estimates. Beginning with the work of Robert Sauer, cassette mutagenesis and other experiments of this sort performed over the last three decades suggest that the target has probability no more than 1 in 10^60 (assuming a uniform probability distribution over all amino acid sequences in the reference class).
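
As a rough back-of-the-envelope check (my own arithmetic, assuming the 20 standard amino acids and the uniform distribution just stated):

```latex
|\Omega| = 20^{100} \approx 1.27 \times 10^{130}, \qquad
P(\text{target}) \le 10^{-60}
\;\Rightarrow\;
|\text{target}| \le 10^{-60} \cdot 20^{100} \approx 1.27 \times 10^{70}.
```

In absolute terms the target contains an enormous number of sequences; as a fraction of the search space, it is vanishingly small.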

The mathematics characterizing search in this way is straightforward and general. Whether in specific situations a search so characterized also involves unavoidably subjective human elements or reflects objectively given realities embedded in nature can be argued independently of the mathematics. Such an argument speaks to the interpretation of the search, not to the search itself. Such an argument parallels controversies surrounding the interpretation of quantum mechanics: whether quantum mechanics is inherently a mind-based, observer-dependent theory; whether it can be developed independently of observers; whether it is properly construed as reflecting a deterministic, mind-independent multiverse, etc. Quantum mechanics itself is a single, well-defined theory that admits several formulations, all of which are mathematically equivalent. Likewise, search as described here has a single, straightforward theoretical underpinning.

An Easter Egg Hunt, from the Scientific Vantage

One clarification is worth inserting here while we're still setting the stage for conservation of information. For most people, when it comes to search, the important thing is the outcome of the search. Take an Easter egg hunt. The children looking for Easter eggs are concerned with whether they find the eggs. From the scientific vantage, however, the important thing about search is not the particular outcomes but the probability distribution over the full range of possible outcomes in the search space (this parallels communication theory, in which what's of interest is not particular messages sent across a communication channel but the range of possible messages and their probability distribution). The problem with just looking at outcomes is that a search might get lucky and find the target even if the probabilities are against it.

Take an Easter egg hunt in which there's just one egg carefully hidden somewhere in a vast area. This is the target, and blind search is highly unlikely to find it precisely because the search space is so vast. But there's still a positive probability of finding the egg even with blind search, and if the egg is discovered, then that's just how it is. It may be, because the egg's discovery is so improbable, that we might question whether the search was truly blind and therefore reject this (null) hypothesis. Maybe it was a guided search in which someone, with knowledge of the egg's whereabouts, told the seeker "warm, warmer, no, colder, warmer, warmer, hot, hotter, you're burning up." Such guidance gives the seeker added information that, if the information is accurate, will help locate the egg with much higher probability than mere blind search -- this added information changes the probability distribution.

But again, the important issue, from a scientific vantage, is not how the search ended but the probability distribution under which the search was conducted. You don't have to be a scientist to appreciate this point. Suppose you've got a serious medical condition that requires treatment. Let's say there are two treatment options. Which option will you go with? Leaving cost and discomfort aside, you'll want the treatment with the better chance of success. This is the more effective treatment. Now, in particular circumstances, it may happen that the less effective treatment leads to a good outcome and the more effective treatment leads to a bad outcome. But that's after the fact. In deciding which treatment to take, you'll be a good scientist and go with the one that has the higher probability of success.

The Easter egg hunt example provides a little preview of conservation of information. Blind search, if the search space is too large and the number of Easter eggs is too small, is highly unlikely to successfully locate the eggs. A guided search, in which the seeker is given feedback about his search by being told when he's closer or farther from the egg, by contrast, promises to dramatically raise the probability of success of the search. The seeker is being given vital information bearing on the success of the search. But where did this information that gauges proximity of seeker to egg come from? Conservation of information claims that this information is itself as difficult to find as locating the egg by blind search, implying that the guided search is no better at finding the eggs than blind search once this information is accounted for.

Conservation of Information in Evolutionary Biology

In the sequel, I will focus mainly on conservation of information as it applies to search in evolutionary biology (and by extension in evolutionary computing), trusting that once the case for conservation of information is made in biology, its scope and applicability for the rest of the natural sciences will be that much more readily accepted and acceptable. As it is, evolutionary biologists possessing the mathematical tools to understand search are typically happy to characterize evolution as a form of search. And even those with minimal knowledge of the relevant mathematics fall into this way of thinking.

Take Brown University's Kenneth Miller, a cell biologist whose knowledge of the relevant mathematics I don't know. Miller, in attempting to refute ID, regularly describes examples of experiments in which some biological structure is knocked out along with its function, and then, under selection pressure, a replacement structure is evolved that recovers the function. What makes these experiments significant for Miller is that they are readily replicable, which means that the same systems with the same knockouts will undergo the same recovery under the same suitable selection regime. In our characterization of search, we would say the search for structures that recover function in these knockout experiments achieves success with high probability.

Suppose, to be a bit more concrete, we imagine a bacterium capable of producing a particular enzyme that allows it to live off a given food source. Next, we disable that enzyme, not by removing it entirely but by, say, changing a DNA base in the coding region for this protein, thus changing an amino acid in the enzyme and thereby drastically lowering its catalytic activity in processing the food source. Granted, this example is a bit stylized, but it captures the type of experiment Miller regularly cites.

So, taking these modified bacteria, the experimenter now subjects them to a selection regime that starts them off on a food source for which they don't need the enzyme that's been disabled. But, over time, they get more and more of the food source for which the enzyme is required and less and less of other food sources for which they don't need it. Under such a selection regime, the bacterium must either evolve the capability of processing the food for which previously it needed the enzyme, presumably by mutating the damaged DNA that originally coded for the enzyme and thereby recovering the enzyme, or starve and die.

So where's the problem for evolution in all this? Granted, the selection regime here is a case of artificial selection -- the experimenter is carefully controlling the bacterial environment, deciding which bacteria get to live or die. But nature seems quite capable of doing something similar. Nylon, for instance, is a synthetic product invented by humans in 1935, and thus was absent from bacteria for most of their history. And yet, bacteria have evolved the ability to digest nylon by developing the enzyme nylonase. Yes, these bacteria are gaining new information, but they are gaining it from their environments, environments that, presumably, need not be subject to intelligent guidance. No experimenter, applying artificial selection, for instance, set out to produce nylonase.

To see that there remains a problem for evolution in all this, we need to look more closely at the connection between search and information and how these concepts figure into a precise formulation of conservation of information. Once we have done this, we'll return to the Miller-type examples of evolution to see why evolutionary processes do not, and indeed cannot, create the information needed by biological systems. Most biological configuration spaces are so large and the targets they present are so small that blind search (which ultimately, on materialist principles, reduces to the jostling of life's molecular constituents through forces of attraction and repulsion) is highly unlikely to succeed. As a consequence, some alternative search is required if the target is to stand a reasonable chance of being located. Evolutionary processes driven by natural selection constitute such an alternative search. Yes, they do a much better job than blind search. But at a cost -- an informational cost, a cost these processes have to pay but which they are incapable of earning on their own.

In the information-theory literature, information is usually characterized as the negative logarithm to the base two of a probability (or some logarithmic average of probabilities, often referred to as entropy). This has the effect of transforming probabilities into bits and of allowing them to be added (like money) rather than multiplied (like probabilities). Thus, a probability of one-eighth, which corresponds to tossing three heads in a row with a fair coin, corresponds to three bits, which is the negative logarithm to the base two of one-eighth. Such a logarithmic transformation of probabilities is useful in communication theory, where what gets moved across communication channels is bits rather than probabilities and the drain on bandwidth is determined additively in terms of number of bits. Yet, for the purposes of this "Made Simple" paper, we can characterize information, as it relates to search, solely in terms of probabilities, also cashing out conservation of information purely probabilistically.
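
A minimal sketch of this logarithmic transformation (standard information theory, not specific to the argument here):

```python
import math

def bits(p: float) -> float:
    """Information, in bits, of an event with probability p."""
    return -math.log2(p)

print(bits(1/8))              # 3.0 -- three heads in a row with a fair coin
# The log transform turns multiplication of probabilities into addition of bits:
print(bits(1/8 * 1/4))        # 5.0
print(bits(1/8) + bits(1/4))  # 5.0
```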

Probabilities, treated as information used to facilitate search, can be thought of in financial terms as a cost -- an information cost. Think of it this way. Suppose there's some event you want to have happen. If it's certain to happen (i.e., has probability 1), then you own that event -- it costs you nothing to make it happen. But suppose instead its probability of occurring is less than 1, let's say some probability p. This probability then measures a cost to you of making the event happen. The more improbable the event (i.e., the smaller p), the greater the cost. Sometimes you can't increase the probability of making the event occur all the way to 1, which would make it certain. Instead, you may have to settle for increasing the probability to q, where q is less than 1 but greater than p. That increase, however, must also be paid for. And in fact, we do pay to raise probabilities all the time. For instance, many students pay tuition costs to obtain a degree that will improve their prospects (i.e., probabilities) of landing a good, high-paying job.

A Fair Lottery

To illustrate this point more precisely, imagine that you are playing a lottery. Let's say it's fair, so that the government doesn't skim anything off the top (i.e., everything paid into the lottery gets paid out to the winner) and one ticket is sure to be the winner. Let's say a million lottery tickets have been purchased so far at one dollar apiece, exactly one of which is yours. Each lottery ticket therefore has the same probability of winning, so your lottery ticket has a one in a million chance of coming out on top (which is your present p value), entailing a loss of one dollar if you lose and a gain of nearly a million dollars if you win ($999,999 to be exact). Now let's say you really want to win this lottery -- for whatever reason you earnestly desire to hold the winning ticket in your hand. In that case, you can purchase additional tickets. By purchasing these, you increase your chance of winning the lottery. Let's say you purchase an additional million tickets at one dollar apiece. Doing so has now boosted your probability of winning the lottery from .000001 to .5000005 (you now hold 1,000,001 of the 2,000,000 tickets), or to about one-half.

Increasing the probability of winning the lottery has therefore incurred a cost. With a probability of roughly .5 of winning the lottery, you are now much more likely to gain approximately one million dollars. But it also cost you a million dollars to increase your probability of winning. As a result, your expected winnings, computed in standard statistical terms as the probability of losing multiplied by what you would lose subtracted from the probability of winning multiplied by what you would win, equals zero. Moreover, because this is a fair lottery, it equals zero when you only had one ticket purchased and it equals zero when you had an additional million tickets purchased. Thus, in statistical terms, investing more in this lottery has gained you nothing.
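
The arithmetic can be checked directly (a sketch of the fair-lottery accounting above, using exact fractions):

```python
from fractions import Fraction

# Case 1: you hold one of 1,000,000 tickets; the winner takes the $1,000,000 pot.
p = Fraction(1, 1_000_000)
expected = p * 999_999 - (1 - p) * 1
print(expected)                      # 0

# Case 2: you buy another 1,000,000 tickets, holding 1,000,001 of 2,000,000.
p = Fraction(1_000_001, 2_000_000)   # = .5000005, about one-half
stake = 1_000_001                    # dollars you have paid in
expected = p * (2_000_000 - stake) - (1 - p) * stake
print(float(p), expected)            # 0.5000005 0
```

Either way, the expected winnings are exactly zero: buying more tickets raises the probability of winning but not the statistical value of playing.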

Conservation of information is like this. Not exactly like this because conservation of information focuses on search whereas the previous example focused on the economics of expected utility. But just as increasing your chances of winning a lottery by buying more tickets offers no real gain (it is not a long-term strategy for increasing the money in your pocket), so conservation of information says that increasing the probability of successful search requires additional informational resources that, once the cost of locating them is factored in, do nothing to make the original search easier.

To see how this works, let's consider a toy problem. Imagine that your search space consists of only six items, labeled 1 through 6. Let's say your target is item 6 and that you're going to search this space by rolling a fair die once. If it lands on 6, your search is successful; otherwise, it's unsuccessful. So your probability of success is 1/6. Now let's say you want to increase the probability of success to 1/2. You therefore find a machine that flips a fair coin and delivers item 6 to you if it lands heads and delivers some other item in the search space if it lands tails. What a great machine, you think. It significantly boosts the probability of obtaining item 6 (from 1/6 to 1/2).

But then a troubling question crosses your mind: Where did this machine that raises your probability of success come from? A machine that tosses a fair coin and that delivers item 6 if the coin lands heads and some other item in the search space if it lands tails is easily reconfigured. It can just as easily deliver item 5 if it lands heads and some other item if it lands tails. Likewise for all the remaining items in the search space: a machine such as the one described can privilege any one of the six items in the search space, delivering it with probability 1/2 at the expense of the others. So how did you get the machine that privileges item 6? Well, you had to search among all those machines that flip coins and with probability 1/2 deliver a given item, selecting the one that delivers item 6 when it lands heads. And what's the probability of finding such a machine?

To keep things simple, let's imagine that our machine delivers item 6 with probability 1/2 and each of items 1 through 5 with equal probability, that is, with probability 1/10. Accordingly, this machine is one of six possible machines configured in essentially the same way. There's another machine that flips a coin, delivers item 1 from the original search space if it lands heads, and delivers any one of 2 through 6 with probability 1/10 each if the coin lands tails. And so on. Thus, of these six machines, one delivers item 6 with probability 1/2 and the remaining five machines deliver item 6 with probability 1/10. Since there are six machines, only one of which delivers item 6 (our target) with high probability, and since nothing but its label distinguishes one machine from another in this setup (the machines are, as mathematicians would say, isomorphic), the principle of indifference applies to these machines and prescribes that the probability of getting the machine that delivers item 6 with probability 1/2 is the same as that of getting any other machine, and is therefore 1/6.

But a probability of 1/6 to find a machine that delivers item 6 with probability 1/2 is no better than our original probability of 1/6 of finding the target simply by tossing a die. In fact, once we have this machine, we still have only a 50-50 chance of locating item 6. Finding this machine incurs a probability cost of 1/6, and once this cost is incurred we still have a probability cost of 1/2 of finding item 6. Since probability costs increase as probabilities decrease, we're actually worse off than we were at the start, where we simply had to roll a die that, with probability 1/6, locates item 6.

The probability of finding item 6 using this machine, once we factor in the probabilistic cost of securing the machine, therefore ends up being 1/6 x 1/2 = 1/12. So our attempt to increase the probability of finding item 6 by locating a more effective search for that item has actually backfired, making it in the end even more improbable that we'll find item 6. Conservation of information says that this is always a danger when we try to increase the probability of success of a search -- that the search, instead of becoming easier, remains as difficult as before or may even, as in this example, become more difficult once additional underlying information costs, associated with improving the search and often hidden, as in this case by finding a suitable machine, are factored in.
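
The toy problem is small enough to simulate (a sketch under the article's own setup; success for the displaced search means finding the right machine and then having it deliver the target, matching the 1/6 x 1/2 = 1/12 accounting above):

```python
import random

ITEMS = [1, 2, 3, 4, 5, 6]
TARGET = 6

def blind_search():
    """Roll a fair die once: success with probability 1/6."""
    return random.choice(ITEMS) == TARGET

def run_machine(favored):
    """The coin-flipping machine: delivers its favored item with
    probability 1/2, otherwise one of the other five (1/10 each)."""
    if random.random() < 0.5:
        return favored
    return random.choice([i for i in ITEMS if i != favored])

def displaced_search():
    """First search for the machine that privileges the target (by the
    principle of indifference, probability 1/6), then run it once."""
    machine = random.choice(ITEMS)   # which item this machine privileges
    return machine == TARGET and run_machine(machine) == TARGET

trials = 200_000
print(sum(blind_search() for _ in range(trials)) / trials)      # ~0.167 (1/6)
print(sum(displaced_search() for _ in range(trials)) / trials)  # ~0.083 (1/12)
```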

Why It Is Called "Conservation" of Information

The reason it's called "conservation" of information is that the best we can do is break even, rendering the search no more difficult than before. In that case, information is actually conserved. Yet often, as in this example, we may actually do worse by trying to improve the probability of a successful search. Thus, we may introduce an alternative search that seems to improve on the original search but that, once the costs of obtaining this search are themselves factored in, in fact exacerbates the original search problem.

In referring to ease and difficulty of search, I'm not being mathematically imprecise. Ease and difficulty, characterized mathematically, are always complexity-theoretic notions presupposing an underlying complexity measure. In this case, complexity is cashed out probabilistically, so the complexity measure is a probability measure, with searches becoming easier to the degree that successfully locating targets is more probable, and searches becoming more difficult to the degree that successfully locating targets is more improbable. Accordingly, it also makes sense to talk about the cost of a search, with the cost going up the more difficult the search, and the cost going down the easier the search.

In all these discussions of conservation of information, there's always a more difficult search that gets displaced by an easier search, but once the difficulty of finding the easier search (difficulty being understood probabilistically) is factored in, there's no gain, and in fact the total cost may have gone up. In other words, the actual probability of locating the target with the easier search is no greater, and may actually be less, than the probability of locating the target with the more difficult search once the probability of locating the easier search is factored in. All of this admits a precise mathematical formulation. Inherent in such a formulation is treating search itself as subject to search. If this sounds self-referential, it is. But it also makes good sense.

To see this, consider a treasure hunt. Imagine searching for a treasure chest buried on a large island. We consider two searches, a more difficult one and an easier one. The more difficult search, in this case, is a blind search in which, without any knowledge of where the treasure is buried, you randomly meander about the island, digging here or there for the treasure. The easier search, by contrast, is to have a treasure map in which "x marks the spot" where the treasure is located, and where you simply follow the map to the treasure.

But where did you get that treasure map? Mapmakers have made lots of maps of that island, and for every map that accurately marks the treasure's location, there are many, many others that incorrectly mark its location. Indeed, for any place on the island, there's a map that marks it with an "x." So how do you find your way among all these maps to one that correctly marks the treasure's location? Evidently, the search for the treasure has been displaced to a search for a map that locates the treasure. Each map corresponds to a search, and locating the right map corresponds to a search for a search (abbreviated, in the conservation of information literature, as S4S).

Conservation of information, in this example, says that the probability of locating the treasure by first searching for a treasure map that accurately identifies the treasure's location is no greater, and may be less, than the probability of locating the treasure simply by blind search. This implies that the easier search (i.e., the search with treasure map in hand), once the cost of finding it is factored in, has not made the actual overall search any easier. In general, conservation of information says that when a more difficult search gets displaced by an easier search, the probability of finding the target by first finding the easier search and then using the easier search to find the target is no greater, and often is less, than the probability of finding the target directly with the more difficult search.

In the Spirit of "No Free Lunch"

Anybody familiar with the No Free Lunch (NFL) theorems will immediately see that conservation of information is very much in the same spirit. The upshot of the NFL theorems is that no evolutionary search outperforms blind search once the information inherent in fitness (i.e., the fitness landscape) is factored out. NFL is a great equalizer. It says that all searches are essentially equivalent to blind search when looked at not from the vantage of finding a particular target but when averaged across the different possible targets that might be searched.

If NFL tends toward egalitarianism by arguing that no search is, in itself, better than blind search when the target is left unspecified, conservation of information tends toward elitism by making as its starting point that some searches are indeed better than others (especially blind search) at locating particular targets. Yet, conservation of information quickly adds that the elite status of such searches is not due to any inherent merit of the search (in line with NFL) but to information that the search is employing to boost its performance.

Some searches do better, indeed much better, than blind search, and when they do, it is because they are making use of target-specific information. Conservation of information calculates the information cost of this performance increase and shows how it must be counterbalanced by a loss in search performance elsewhere (specifically, by needing to search for the information that boosts search performance) so that global performance in locating the target is not improved and may in fact diminish.

Conservation of information, in focusing on search for the information needed to boost search performance, suggests a relational ontology between search and objects being searched. In a relational ontology, things are real not as isolated entities but in virtue of their relation to other things. In the relational ontology between search and the objects being searched, each finds its existence in the other. Our natural tendency is to think of objects as real and search for those objects as less real in the sense that search depends on the objects being searched but objects can exist independently of search. Yet objects never come to us in themselves but as patterned reflections of our background knowledge, and thus as a target of search.

Any scene, indeed any input to our senses, reaches our consciousness only by aspects becoming salient, and this happens because certain patterns in our background knowledge are matched to the exclusion of others. In an extension of George Berkeley's "to be is to be perceived," conservation of information suggests that "to be perceived is to be an object of search." By transitivity of reasoning, it would then follow that to be is to be an object of search. And since search is always search for an object, search and the object of search become, in this way of thinking, mutually ontologizing, giving existence to each other. Conservation of information then adds to this by saying that search can itself be an object of search.

Most relational ontologies are formulated in terms of causal accessibility, so that what renders one thing real is its causal accessibility to another thing. But since search is properly understood probabilistically, the form of accessibility relevant to a relational ontology grounded in search is probabilistic. Probabilistic rather than causal accessibility grounds the relational ontology of search. Think of a needle in a haystack, only imagine the needle is the size of an electron and the haystack is the size of the known physical universe. Searches with such a small probability of success via blind or random search are common in biology. Biological configuration spaces of possible genes and proteins, for instance, are immense, and finding a functional gene or protein in such spaces via blind search can be vastly more improbable than finding an arbitrary electron in the known physical universe.

Why the Multiverse Is Incoherent

Given needles this tiny in haystacks this large, blind search is effectively incapable of finding a needle in a haystack. Success, instead, requires a search that vastly increases the probability of finding the needle. But where does such a search come from? And in what sense does the needle exist apart from such a search? Without a search that renders finding the needle probable, the needle might just as well not exist. And indeed, we would in all probability not know that it exists except for a search that renders it probable. This, by the way, is why I regard the multiverse as incoherent: what renders the known physical universe knowable is that it is searchable. The multiverse, by contrast, is unsearchable. In a relational ontology that makes search as real as the objects searched, the multiverse is unreal.

These considerations are highly germane to evolutionary biology, which treats evolutionary search as a given, as something that does not call for explanation beyond the blind forces of nature. But insofar as evolutionary search renders aspects of a biological configuration space probabilistically accessible where previously, under blind search, they were probabilistically inaccessible, conservation of information says that evolutionary search achieves this increase in search performance at an informational cost. Accordingly, the evolutionary search, which improves on blind search, had to be found through a higher-order search (i.e., a search for a search, abbreviated S4S), which, when taken into account, does not make the evolutionary search any more effective at finding the target than the original blind search.

Given this background discussion and motivation, we are now in a position to give a reasonably precise formulation of conservation of information, namely: raising the probability of success of a search does nothing to make attaining the target easier, and may in fact make it more difficult, once the informational costs involved in raising the probability of success are taken into account. Search is costly, and the cost must be paid in terms of information. Searches achieve success not by creating information but by taking advantage of existing information. The information that leads to successful search admits no bargains, only apparent bargains that must be paid in full elsewhere.

For a "Made Simple" paper on conservation of information, this is about as much as I want to say regarding a precise statement of conservation of information. Bob Marks and I have proved several technical conservation of information theorems (see the publications page at www.evoinfo.org). Each of these looks at some particular mathematical model of search and shows how raising the probability of success of a search by a factor of q/p (> 1) incurs an information cost not less than log(q/p), or, equivalently, a probability cost of not more than p/q. If we therefore start with a search having probability of success p and then raise it to q, the actual probability of finding the target is not q but instead is less than or equal to q multiplied by p/q, or, therefore, less than or equal to p, which is just the original search difficulty. Accordingly, raising the probability of success of a search contributes nothing toward finding the target once the information cost of raising the probability is taken into account.

Conservation of information, however, is not just a theorem or family of theorems but also a general principle or law (recall Medawar's "Law of Conservation of Information"). Once enough such theorems have been proved and once their applicability to a wide range of search problems has been repeatedly demonstrated (the Evolutionary Informatics Lab has, for instance, shown how such widely touted evolutionary algorithms as AVIDA, ev, Tierra, and Dawkins's WEASEL all fail to create but instead merely redistribute information), conservation of information comes to be seen not as a narrow, isolated result but as a fundamental principle or law applicable to search in general. This is how we take conservation of information.

Instead of elaborating the underlying theoretical apparatus for conservation of information, which is solid and has appeared now in a number of peer-reviewed articles in the engineering and mathematics literature (see the publications page at www.evoinfo.org -- it's worth noting that none of the critiques of this work has appeared in the peer-reviewed scientific/engineering literature, although a few have appeared in the philosophy of science literature, such as Biology and Philosophy; most of the critiques are Internet diatribes), I want next to illustrate conservation of information as it applies to one of the key examples touted by evolutionists as demonstrating the information-generating powers of evolutionary processes. Once I've done that, I want to consider what light conservation of information casts on evolution generally.

An Economist Is Stranded on an Island

To set the stage, consider an old joke about an economist and several other scientists who are stranded on an island and discover a can of beans. Hungry, they want to open it. Each looks to his area of expertise to open the can. The physicist calculates the trajectory of a projectile that would open the can. The chemist calculates the heat from a fire needed to burst the can. And so on. Each comes up with a concrete way to open the can given the resources on the island. Except the economist. The economist's method of opening the can is the joke's punch line: suppose a can opener. There is, of course, no can opener on the island.

The joke implies that economists are notorious for making assumptions to which they are not entitled. I don't know enough about economists to know whether this is true, but I do know that this is the case for many evolutionary biologists. The humor in the economist's proposed solution of merely positing a can opener, besides its jab at the field of economics, is the bizarre image of a can opener coming to the rescue of starving castaways without any warrant whatsoever for its existence. The economist would simply have the can opener magically materialize. The can opener is, essentially, a deus ex machina.

Interestingly, the field of evolutionary biology is filled with deus ex machinas (yes, I've taken Latin and know that this is not the proper plural of deus ex machina, which is dei ex machinis; but this is a "made simple" paper meant for the unwashed masses, of which I'm a card-carrying member). Only the evolutionary biologist is a bit more devious about employing, or should I say deploying, deus ex machinas than the economist. Imagine our economist counseling someone who's having difficulty repaying a juice loan to organized crime. In line with the advice he gave on the island, our economist friend might give the following counsel: suppose $10,000 in cash.

$10,000 might indeed pay the juice loan, but that supposition seems a bit crude. An evolutionary biologist, to make his advice appear more plausible, would add a layer of complexity to it: suppose a key to a safety deposit box with $10,000 cash inside it. Such a key is just as much a deus ex machina as the $10,000 in cash. But evolutionary biology has long since gained mastery in deploying such devices as well as gaining the right to call their deployment "science."

I wish I were merely being facetious, but there's more truth here than meets the eye. Consider Richard Dawkins's well-known METHINKS IT IS LIKE A WEASEL example (from his 1986 book The Blind Watchmaker), an example endlessly repeated and elaborated by biologists trying to make evolution seem plausible, the most notable recent rendition being by RNA-world researcher Michael Yarus in his 2010 book Life from an RNA World (Yarus's target phrase, unlike Dawkins's, which is drawn from Shakespeare's Hamlet, is Theodosius Dobzhansky's famous dictum NOTHING IN BIOLOGY MAKES SENSE EXCEPT IN THE LIGHT OF EVOLUTION).

A historian or literature person, confronted with METHINKS IT IS LIKE A WEASEL, would be within his rights to say, suppose that there was a writer named William Shakespeare who wrote it. And since the person and work of Shakespeare have been controverted (was he really a she? did he exist at all? etc.), this supposition is not without content and merit. Indeed, historians and literature people make such suppositions all the time, and doing so is part of what they get paid for. Are the Homeric poems the result principally of a single poet, Homer, or an elaboration by a tradition of bards? Did Moses write the Pentateuch or is it the composite of several textual traditions, as in the documentary hypothesis? Did Jesus really exist? (Dawkins and his fellow atheists seriously question whether Jesus was an actual figure of history; cf. the film The God Who Wasn't There).

For the target phrase METHINKS IT IS LIKE A WEASEL, Dawkins bypasses the Shakespeare hypothesis -- that would be too obvious and too intelligent-design friendly. Instead of positing Shakespeare, who would be an intelligence or designer responsible for the text in question (designers are a no-go in conventional evolutionary theory), Dawkins asks his readers to suppose an evolutionary algorithm that evolves the target phrase. But such an evolutionary algorithm privileges the target phrase by adapting the fitness landscape so that it assigns greater fitness to phrases that have more corresponding letters in common with the target.

And where did that fitness landscape come from? Such a landscape potentially exists for any phrase whatsoever, and not just for METHINKS IT IS LIKE A WEASEL. Dawkins's evolutionary algorithm could therefore have evolved in any direction, and the only reason it evolved to METHINKS IT IS LIKE A WEASEL is that he carefully selected the fitness landscape to give the desired result. Dawkins therefore got rid of Shakespeare as the author of METHINKS IT IS LIKE A WEASEL, only to reintroduce him as the (co)author of the fitness landscape that facilitates the evolution of METHINKS IT IS LIKE A WEASEL.

The bogusness of this example, with its sleight-of-hand misdirection, has been discussed ad nauseam by me and my colleagues in the ID community. We've spent so much time and ink on this example not because of its intrinsic merit, but because the evolutionary community itself remains so wedded to it and endlessly repeats its underlying fallacy in increasingly convoluted guises (AVIDA, Tierra, ev, etc.). For a careful deconstruction of Dawkins's WEASEL, providing a precise simulation under user control, see the "Weasel Ware" project on the Evolutionary Informatics website: www.evoinfo.org/weasel.

How does conservation of information apply to this example? Straightforwardly. Obtaining METHINKS IT IS LIKE A WEASEL by blind search (e.g., by randomly throwing down Scrabble pieces in a line) is extremely improbable. So Dawkins proposes an evolutionary algorithm, his WEASEL program, to obtain this sequence with higher probability. Yes, this algorithm does a much better job, with much higher probability, of locating the target. But at what cost? At an even greater improbability cost than merely locating the target sequence by blind search.
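
To make the point concrete, here is a minimal WEASEL-style reconstruction (the mutation rate and brood size are my own assumptions, not Dawkins's exact settings; see the Weasel Ware project above for faithful variants). Note where the target-specific information enters: the fitness function itself is defined in terms of the target phrase.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(phrase):
    """Letters matching the target -- the target-specific information
    whose origin conservation of information says must be accounted for."""
    return sum(a == b for a, b in zip(phrase, TARGET))

def mutate(phrase, rate=0.05):
    """Copy the phrase, randomizing each character with a small probability."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in phrase)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while parent != TARGET:
    brood = [parent] + [mutate(parent) for _ in range(100)]
    parent = max(brood, key=fitness)   # cumulative selection toward the target
    generation += 1
print(generation)   # typically a few hundred generations at these settings
```

Swap in any other 28-character phrase as TARGET and the same program dutifully evolves it: the algorithm goes wherever the fitness function points.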

Dawkins completely sidesteps this question of information cost. Forswearing any critical examination of the origin of the information that makes his simulation work, he attempts instead, by rhetorical tricks, simply to induce in his readers a stupefied wonder at the power of evolution: "Gee, isn't it amazing how powerful evolutionary processes are given that they can produce sentences like METHINKS IT IS LIKE A WEASEL, which ordinarily require human intelligence." But Dawkins is doing nothing more than advise our hapless borrower with the juice loan to suppose a key to a safety deposit box with the money needed to pay it off. Whence the key? Likewise, whence the fitness landscape that rendered the evolution of METHINKS IT IS LIKE A WEASEL probable? In terms of conservation of information, the necessary information was not internally created but merely smuggled in, in this case, by Dawkins himself.

An Email Exchange with Richard Dawkins

Over a decade ago, I corresponded with Dawkins about his WEASEL computer simulation. In an email to me dated May 5, 2000, he responded to my criticism of the teleology hidden in that simulation. Note that he does not respond to the challenge of conservation of information directly, nor had I developed this idea with sufficient clarity at the time to use it in refutation. More on this shortly. Here's what he wrote, exactly as he wrote it:

The point about any phrase being equally eligible to be a target is covered on page 7 [of The Blind Watchmaker]: "Any old jumbled collection of parts is unique and, WITH HINDSIGHT, is as improbable as any other . . ." et seq.
More specifically, the point you make about the Weasel, is admitted, without fuss, on page 50: "Although the monkey/Shakespeare model is useful for explaining the distinction between single-step selection and cumulative selection, it is misleading in important ways. One of these is that, in each generation of selective 'breeding', the mutant 'progeny' phrases were judged according to the criterion of resemblance to a DISTANT IDEAL target ... Life isn't like that."

In real life of course, the criterion for optimisation is not an arbitrarily chosen distant target but SURVIVAL. It's as simple as that. This is non-arbitrary. See bottom of page 8 to top of page 9. And it's also a smooth gradient, not a sudden leap from a flat plain in the phase space. Or rather it must be a smooth gradient in all those cases where evolution has actually happened. Maybe there are theoretical optima which cannot be reached because the climb is too precipitous.

The Weasel model, like any model, was supposed to make one point only, not be a complete replica of the real thing. I invented it purely and simply to counter creationists who had naively assumed that the phase space was totally flat except for one vertical peak (what I later represented as the precipitous cliff of Mount Improbable). The Weasel model is good for refuting this point, but it is misleading if it is taken to be a complete model of Darwinism. That is exactly why I put in the bit on page 50.

Perhaps you should look at the work of Spiegelman and others on evolution of RNA molecules in an RNA replicase environment. They have found that, repeatedly, if you 'seed' such a solution with an RNA molecule, it will converge on a particular size and form of 'optimal' replicator, sometimes called Spiegelman's minivariant. Maynard Smith gives a good brief account of it in his The Problems of Biology (see Spiegelman in the index). Orgel extended the work, showing that different chemical environments select for different RNA molecules.

The theory is so beautiful, so powerful. Why are you people so wilfully blind to its simple elegance? Why do you hanker after "design" when surely you must see that it doesn't explain anything? Now THAT's what I call a regress. You are a fine one to talk about IMPORTING complexity. "Design" is the biggest import one could possibly imagine.

Dawkins's email raises a number of interesting questions that, in the years since, have received extensive discussion among the various parties debating intelligent design. The who-designed-the-designer regress, whether a designing intelligence must itself be complex in the same way that biological systems are complex, the conditions under which evolution is complexity-increasing vs. complexity-decreasing, the evolutionary significance of Spiegelman's minivariants, and how the geometry of the fitness landscape facilitates or undercuts evolution have all been treated at length in the design literature and won't be rehearsed here (for more on these questions, see my books No Free Lunch and The Design Revolution as well as Michael Behe's The Edge of Evolution).

"Just One Word: Plastics"

Where I want to focus is Dawkins's one-word answer to the charge that his WEASEL simulation incorporates an unwarranted teleology -- unwarranted by the Darwinian understanding of evolution for which his Blind Watchmaker is an apologetic. The key line in the above quote is, "In real life of course, the criterion for optimisation is not an arbitrarily chosen distant target but SURVIVAL." Survival is certainly a necessary condition for life to evolve. If you're not surviving, you're dead, and if you're dead, you're not evolving -- period. But to call "survival," writ large, a criterion for optimization is ludicrous. As I read this, I have images of Dustin Hoffman in The Graduate being taken aside at a party by an executive who is about to reveal the secret of success: PLASTICS. For the greatest one-word simplistic answers ever given, Dawkins's ranks right up there.

But perhaps I'm reading Dawkins uncharitably. Presumably, what he really means is differential survival and reproduction as governed by natural selection and random variation. Okay, I'm willing to buy that this is what he means. But even on this more charitable reading, his characterization of evolution is misleading and wrong. Ken Miller elaborates on this more charitable reading in his recent book Only a Theory. There he asks what's needed to drive the increase in biological information over the course of evolution. His answer? "Just three things: selection, replication, and mutation... Where the information 'comes from' is, in fact, from the selective process itself."

It's easy to see that Miller is blowing smoke even without the benefits of modern information theory. All that's required is to understand some straightforward logic, uncovered in Darwin's day, about the nature of scientific explanation in teasing apart possible causes. Indeed, biology's reception of Darwinism might have been far less favorable had scientists paid better attention to Darwin's contemporary John Stuart Mill. In 1843, sixteen years before the publication of Darwin's Origin of Species, Mill published the first edition of his System of Logic (which by the 1880s had gone through eight editions). In that work Mill lays out various methods of induction. The one that interests us here is his method of difference. In his System of Logic, Mill described this method as follows:

If an instance in which the phenomenon under investigation occurs, and an instance in which it does not occur, have every circumstance in common save one, that one occurring only in the former; the circumstance in which alone the two instances differ is the effect, or the cause, or an indispensable part of the cause, of the phenomenon.

Essentially, this method says that to discover which of a set of circumstances is responsible for an observed difference in outcomes requires finding a difference in the circumstances. An immediate corollary is that common circumstances cannot explain a difference in outcomes. Thus, if one person is sober and another drunk, and if both ate chips, salsa, and popcorn, this fact, common to both, does not, and indeed cannot, explain the difference. Rather, the difference is explained by one abstaining from alcohol and the other drinking too much. Mill's method of difference, so widely used in everyday life as well as in science, is crucially relevant to evolutionary biology. In fact, it helps bring some sense of proportion and reality to the inflated claims so frequently made on behalf of Darwinian processes.

Case in point: Miller's overselling of Darwinian evolution by claiming that "what's needed to drive" increases in biological information is "just three things: selection, replication, and mutation." Mill's method of difference gives the lie to Miller's claim. It's easy to write computer simulations that feature selection, replication, and mutation (or SURVIVAL writ large, or differential survival and reproduction, or any such reduction of evolution to Darwinian principles) -- and that go absolutely nowhere. Taken together, selection, replication, and mutation are not a magic bullet, and need not solve any interesting problems or produce any salient patterns. That said, evolutionary computation does get successfully employed in the field of optimization, so it is possible to write computer simulations that feature selection, replication, and mutation and that do go somewhere, solving interesting problems or producing salient patterns. But precisely because selection, replication, and mutation are common to all such simulations, they cannot, as Mill's method underscores, account for the difference.

One Boeing engineer used to call himself a "penalty-function artist." A penalty function is just another term for fitness landscape (though the numbers are reversed -- the higher the penalty, the lower the fitness). Coming up with the right penalty functions enabled this person to solve his engineering problems. Most such penalty functions, however, are completely useless. Moreover, all such functions operate within the context of an evolutionary computing environment that features Miller's triad of selection, replication, and mutation. So what makes the difference? It's that the engineer, with knowledge of the problem he's trying to solve, carefully adapts the penalty function to the problem and thereby raises the probability of successfully finding a solution. He's not just choosing his penalty functions willy-nilly. If he did, he wouldn't be working at Boeing. He's an artist, and his artistry (intelligent design) consists in being able to find the penalty functions that solve his problems.
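A sketch of what that artistry amounts to in code, reusing the evolve loop and GENOME_LEN from the sketch above (TARGET and informed_penalty are hypothetical names of my own): the selection-replication-mutation machinery is untouched, and only the landscape changes.

```python
# Reuses evolve() and GENOME_LEN from the previous sketch.
TARGET = [1, 0] * (GENOME_LEN // 2)   # a hypothetical design goal

def informed_penalty(genome):
    # Penalty = number of mismatches with TARGET; negating it turns
    # the penalty into a fitness (higher penalty, lower fitness).
    return -sum(g != t for g, t in zip(genome, TARGET))

best = evolve(informed_penalty)   # same machinery, different landscape
# Now the population climbs steadily toward TARGET, something the
# uninformative landscape never produces. The triad is common to both
# runs; the penalty function is the circumstance that differs.
```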

I've corresponded with both Miller and Dawkins since 2000. Miller and I have sparred on a number of occasions in public debate (as recently as June 2012, click here). Dawkins refuses all such encounters. Regardless, we are familiar with each other's work, and yet I've never been able to get from either of them a simple admission that the logic in Mill's method of difference is valid and that it applies to evolutionary theory, leaving biology's information problem unresolved even after the Darwinian axioms of selection, replication, and variation are invoked.

John Stuart Mill's Inconvenient Truth

Instead, Miller remains an orthodox Darwinist, and Dawkins goes even further, embracing a universal Darwinism that sees Darwinian evolution as the only conceivable scientific explanation of life's diversification in natural history. As he wrote in The Blind Watchmaker and continues to believe:

My argument will be that Darwinism is the only known theory that is in principle capable of explaining certain aspects of life. If I am right it means that, even if there were no actual evidence in favor of the Darwinian theory (there is, of course) we should still be justified in preferring it over all rival theories.
Mill's method of difference is an inconvenient truth for Dawkins and Miller, but it's a truth that must be faced. For his willingness to face this truth, I respect Stuart Kauffman infinitely more than either Miller or Dawkins. Miller and Dawkins are avid Darwinists committed to keeping the world safe for their patron saint. Kauffman is a free spirit, willing to admit problems where they arise. Kauffman at least sees that there is a problem in claiming that the Darwinian mechanism can generate biological information, even if his own self-organizational approach is far from resolving it. As Kauffman writes in Investigations:
If mutation, recombination, and selection only work well on certain kinds of fitness landscapes, yet most organisms are sexual, and hence use recombination, and all organisms use mutation as a search mechanism, where did these well-wrought fitness landscapes come from, such that evolution manages to produce the fancy stuff around us?
According to Kauffman, "No one knows."
Kauffman's observation here is entirely in keeping with conservation of information. Indeed, he offers this observation in the context of discussing the No Free Lunch theorems, of which conservation of information is a logical extension. The fitness landscape supplies the evolutionary process with information. Only finely tuned fitness landscapes that are sufficiently smooth, don't isolate local optima, and, above all, reward ever-increasing complexity in biological structure and function are suitable for driving a full-fledged evolutionary process. So where do such fitness landscapes come from? Absent an extrinsic intelligence, the only answer would seem to be the environment.
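The No Free Lunch result invoked here can be checked directly at toy scale. The following sketch (an illustration of the Wolpert-Macready result with made-up search orders, not Dembski's own formalism) enumerates every fitness function on a four-point space and confirms that two different fixed search orders need, on average, exactly the same number of evaluations to hit the optimum:

```python
from itertools import product

def evals_to_optimum(order, f):
    # Count evaluations until the search first sees f's maximum value.
    best = max(f)
    for n, x in enumerate(order, start=1):
        if f[x] == best:
            return n

order_a = (0, 1, 2, 3)        # one search strategy
order_b = (3, 1, 0, 2)        # a different one

# Every fitness function f: {0,1,2,3} -> {0,1} (16 in all).
landscapes = list(product([0, 1], repeat=4))

avg_a = sum(evals_to_optimum(order_a, f) for f in landscapes) / 16
avg_b = sum(evals_to_optimum(order_b, f) for f in landscapes) / 16
print(avg_a, avg_b)  # identical: averaged over all landscapes,
                     # no search outperforms any other
```

Only on the special landscapes suited to it does a search do better than blind luck, which is just the fine-tuning question at issue.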

Just as I have heard SURVIVAL as a one-word resolution to the problem of generating biological information, so also have I heard ENVIRONMENT. Ernan McMullin, for instance, made this very point to me over dinner at the University of Chicago in 1999, intoning this word ("environment") as though it were the solution to all that ails evolution. Okay, so the environment supplies the information needed to drive biological evolution. But where did the environment get that information? From itself? The problem with such an answer is this: conservation of information entails that, without added information, biology's information problem remains constant (breaks even) or intensifies (gets worse) the further back in time we trace it.

The whole magic of evolution is that it's supposed to explain subsequent complexity in terms of prior simplicity, but conservation of information says that there never was a prior state of primordial simplicity -- the information, absent external input, had to be there from the start. It is no feat of evolutionary theorizing to explain how cavefish lost the use of their eyes after long periods of being deprived of light. Functioning eyes turning into functionless eye nubs is a devolution from complexity to simplicity. As a case of use-it-or-lose-it, it does not call for explanation. Evolution wins plaudits for purporting to explain how things like eyes that see can evolve in the first place from prior simpler structures that cannot see.

If the evolutionary process could indeed create such biological information, then evolution from simplicity to complexity would be unproblematic. But the evolutionary process as conceived by Darwin and promulgated by his successors is non-teleological. Accordingly, it cannot employ the activity of intelligence in any guise to increase biological information. But without intelligent input, conservation of information implies that as we regress biological information back in time, the amount of information to be accounted for never diminishes and may actually increase.

Explaining Walmart's Success by Invoking Interstate Highways

Given conservation of information and the absence of intelligent input, biological information with the complexity we see now must have always been present in the universe in some form or fashion, going back even as far as the Big Bang. But where in the Big Bang, with a heat and density that rule out any life form in the early history of the universe, is the information for life's subsequent emergence and development on planet Earth? Conservation of information says this information has to be there, in embryonic form, at the Big Bang and at every moment thereafter. So where is it? How is it represented? In the environment, you say? Invoking the environment as evolution's information source is empty talk, on the order of invoking the interstate highway system as the reason for Walmart's business success. There is some connection, to be sure, but neither provides real insight or explanation.

To see more clearly what's at stake here, imagine Scrabble pieces arranged in sequence to spell out meaningful sentences (such as METHINKS IT IS LIKE A WEASEL). Suppose a machine with suitable sensors, movable arms, and grips, takes the Scrabble pieces out of a box and arranges them in this way. To say that the environment has arranged the Scrabble pieces to spell out meaningful sentences is, in this case, hardly illuminating. Yes, broadly speaking, the environment is arranging the pieces into meaningful sentences. But, more precisely, a robotic machine, presumably running a program with meaningful sentences suitably coded, is doing the arranging.

Merely invoking the environment, without further amplification, therefore explains nothing about the arrangement of Scrabble pieces into meaningful sentences. What exactly is it about the environment that accounts for the information conveyed in those arrangements of Scrabble pieces? And what about the environment accounts for the information conveyed in the organization of biological systems? That's the question that needs to be answered. Without an answer to this question, appeals to the environment are empty and merely cloak our ignorance of the true sources of biological information.

With a machine that arranges Scrabble pieces, we can try to get inside it and see what it does ("Oh, there's the code that spells out METHINKS IT IS LIKE A WEASEL"). With the actual environment for biological evolution, we can't, as it were, get under the hood of the car. We see natural forces such as wind, waves, erosion, lightning, Brownian motion, attraction, repulsion, bonding affinities, and the like. And we see slippery slopes on which one organism thrives and another founders. If such an environment were arranging Scrabble pieces in sequence, we would observe the pieces blown by wind or jostled by waves or levitated by magnets. And if, at the end of the day, we found Scrabble pieces spelling out coherent English sentences, such as METHINKS IT IS LIKE A WEASEL, we would be within our rights to infer that an intelligence had in some way co-opted the environment and inserted information, even though we have no clue how.

Such a role for the environment, as an inscrutable purveyor of information, is, however, unacceptable to mainstream evolutionary theorists. In their view, the way the environment inputs information into biological systems over the course of evolution is eminently scrutable. It happens, so they say, by a gradual accumulation of information as natural selection locks in on small advantages, each of which can arise by chance without intelligent input. But what's the evidence here?

This brings us back to the knock-out experiments that Ken Miller has repeatedly put forward to refute intelligent design, in which a structure responsible for a function is disabled and then, through selection pressure, it, or something close to it that is capable of the lost function, gets recovered. In none of his examples is there an extensive multi-step sequence of structural changes, each of which leads to a distinct functional advantage. Usually, just a single nucleotide base or amino acid change is needed to recover function.

This is true even with the evolution of nylonase, mentioned earlier. Nylonase is not the result of an entirely new DNA sequence coding for that enzyme. Rather, it resulted from a frameshift in existing DNA, shifting over some genetic letters and thus producing the gene for nylonase. The origin of nylonase is thus akin to changing the meaning of "therapist" by inserting a space and getting "the rapist." For the details about the evolution of nylonase, see a piece I did in response to Miller at Uncommon Descent (click here).

The Two-Pronged Challenge of Intelligent Design

Intelligent design has always mounted a two-pronged challenge to conventional evolutionary theory. On the one hand, design proponents have challenged common ancestry. Discontinuities in the fossil record and in supposed molecular phylogenies have, for many of us (Michael Behe has tended to be the exception), made common ancestry seem far from compelling. Our reluctance here is not an allergic reaction but simply a question of evidence -- many of us in the ID community see the evidence for common ancestry as weak, especially when one leaves the lower taxonomic groupings and moves to the level of orders, classes, and, above all, phyla (as with the Cambrian explosion, in which all the major animal phyla appear suddenly, lacking evident precursors in the Precambrian rocks). And indeed, if common ancestry fails, so does conventional evolutionary theory.

On the other hand, design proponents have argued that even if common ancestry holds, the evidence of intelligence in biology is compelling. Conservation of information is part of that second-prong challenge to evolution. Evolutionary theorists like Miller and Dawkins think that if they can break down the problem of evolving a complex biological system into a sequence of baby-steps, each of which is manageable by blind search (e.g., point mutations of DNA) and each of which confers a functional advantage, then the evidence of design vanishes. But it doesn't. Regardless of the evolutionary story told, conservation of information shows that the information in the final product had to be there from the start.

It would actually be quite a remarkable property of nature if fitness across biological configuration space were so distributed that advantages could be cumulated gradually by a Darwinian process. Frankly, I don't see the evidence for this. The examples that Miller cites show some small increases in information associated with recovering and enhancing a single biological function but hardly the massive ratcheting up of information in which structures and functions co-evolve and lead to striking instances of biological invention. The usual response to my skepticism is, Give evolution more time. I'm happy to do that, but even if time allows evolution to proceed much more impressively, the challenge that conservation of information puts to evolution remains.

In the field of technological (as opposed to biological) evolution, revolutionary new inventions never result from gradual tinkering with existing technologies. Existing technologies may, to be sure, be co-opted for use in a revolutionary technology. Thus, when Alexander Graham Bell invented the telephone, he used existing technologies such as wires, electrical circuits, and diaphragms. But these were put together and adapted for a novel, and at the time unprecedented, use.

But what if technological evolution proceeded in the same way that, as we are told, biological evolution proceeds, with inventions useful to humans all being accessible by gradual tinkering from one or a few primordial inventions? One consequence would be that tinkerers who knew nothing about the way things worked but simply understood what it was to benefit from a function could become inventors on the order of Bell and Edison. More significantly, such a state of affairs would also indicate something very special about the nature of human invention, namely, that it was distributed continuously across technological configuration space. This would be remarkable. Granted, we don't see this. Instead, we see sharply disconnected islands of invention inaccessible to one another by mere gradual tinkering. But if such islands were all connected (by long and narrow isthmuses of function), it would suggest a deeper design of technological configuration space for the facilitation of human invention.

The same would be true of biological invention. If biological evolution proceeds by a gradual accrual of functional advantages, instead of finding itself deadlocked on isolated islands of function surrounded by vast seas of non-function, then the fitness landscape over biological configuration space has to be very special indeed (recall Stuart Kauffman's comments to that effect earlier in this piece). Conservation of information goes further and says that any information we see coming out of the evolutionary process was already there in this fitness landscape or in some other aspect of the environment or was inserted by an intervening intelligence. What conservation of information guarantees did not happen is that the evolutionary process created this information from scratch.

Some years back I had an interesting exchange with Simon Conway Morris about the place of teleology in evolution. According to him, the information that guides the evolutionary process is embedded in nature and is not reducible to the Darwinian mechanism of selection, replication, and mutation. He stated this forthrightly in an email to me dated February 20, 2003, anticipating his then forthcoming book Life's Solution. I quote this email rather than the book because it clarifies his position better than anything that I've read from him subsequently. Here's the quote from his email:

As it happens, I am not sure we are so far apart, at least in some respects. Both of us, I imagine, accept that we are part of God's good Creation, and that despite its diversity, by no means all things are possible. In my forthcoming book Life's Solution (CUP) I argue that hard-wired into the universe are such biological properties of intelligence. This implies a "navigation" by evolution across immense "hyperspaces" of biological alternatives, nearly all of which are maladaptive [N.B. -- this means the adaptive hyperspaces form a very low-probability target!]. These thin roads (or "worm-holes") of evolution define a deeper biological structure, the principal evidence for which is convergence (my old story). History and platonic archetypes, if you like, meet. That does seem to me to be importantly distinct from ID: my view of Creation is not only very rich (self-evidently), but has an underlying structure that allows evolution to act. Natural selection, after all, is only a mechanism; what we surely agree about is the nature of the end-products, even if we disagree as to how they came about. Clearly my view is consistent with a Christian world picture, but can never be taken as proof.
There's not much I disagree with here. My one beef with Conway Morris is that he's too hesitant about finding evidence (what he calls "proof") for teleology in the evolutionary process. I critique this hesitancy in my review of Life's Solution for Books & Culture, a review that came out the year after this email (click here for the review). Conway Morris's fault is that he does not follow his position through to its logical conclusion. He prefers to critique conventional evolutionary theory, with its tacit materialism, from the vantage of theology and metaphysics. Convergence points to a highly constrained evolutionary process that's consistent with divine design. Okay, but there's more.
If evolution is so tightly constrained and the Darwinian mechanism of natural selection is just that, a mechanism, albeit one that "navigates immense hyperspaces of biological alternatives" by confining itself to "thin roads of evolution defining a deeper biological structure," then, in the language of conservation of information, the conditions that allow evolution to act effectively in producing the complexity and diversity of life are but a tiny subset, and therefore a small-probability target, among all the conditions under which evolution might act. And how did nature find just those conditions? Nature has, in that case, embedded in it not just a generic evolutionary process employing selection, replication, and mutation, but one that is precisely tuned to produce the exquisite adaptations, or, dare I say, designs, that pervade biology.

Where Conway Morris merely finds consistency with his Christian worldview (tempered by a merger of Darwin and Plotinus), conservation of information shows that the evolutionary process has embedded in it rich sources of information that a thoroughgoing materialism cannot justify and has no right to expect. The best such a materialism can do is count it a happy accident that evolution acts effectively, producing ever increasing biological complexity and diversity, when most ways it might act would be ineffective, producing no life at all or ecosystems that are boring (a disproportion mirrored in the evolutionary computing literature, where most fitness landscapes are maladaptive).

The Lesson of Conservation of Information

The improbabilities associated with rendering evolution effective are therefore no more tractable than the improbabilities that face an evolutionary process dependent purely on blind search. This is the relevance of conservation of information for evolution: it shows that the vast improbabilities that evolution is supposed to mitigate in fact never do get mitigated. Yes, you can reach the top of Mount Improbable, but the tools that enable you to find a gradual ascent up the mountain are as improbably acquired as simply scaling it in one fell swoop. This is the lesson of conservation of information.

One final question remains, namely, what is the source of information in nature that allows targets to be successfully searched? If blind material forces can only redistribute existing information, then where does the information that allows for successful search, whether in biological evolution or in evolutionary computing or in cosmological fine-tuning or wherever, come from in the first place? The answer will by now be obvious: from intelligence. On materialist principles, intelligence is not real but an epiphenomenon of underlying material processes. But if intelligence is real and has real causal powers, it can do more than merely redistribute information -- it can also create it.

Indeed, that is the defining property of intelligence, its ability to create information, especially information that finds needles in haystacks. This fact should be more obvious and convincing to us than any fact of the natural sciences since (1) we ourselves are intelligent beings who create information all the time through our thoughts and language and (2) the natural sciences themselves are logically downstream from our ability to create information (if we were not information creators, we could not formulate our scientific theories, much less search for those that are empirically adequate, and there would be no science). Materialist philosophy, however, has this backwards, making a materialist science primary and then defining our intelligence out of existence because materialism leaves no room for it. The saner course would be to leave no room for materialism.

I close with a quote from Descartes, who, his substance dualism notwithstanding, rightly understood that intelligence could never be reduced to brute blind matter acting mechanistically. The quote is from his Discourse on Method. As you read it, bear in mind that for the materialist, everything is a machine, be it the materialists themselves, the evolutionary process, or the universe taken as a whole. Everything, for the materialist, is just brute blind matter acting mechanistically. Bear in mind, too, that conservation of information shows this materialist vision to be fundamentally incomplete, unable to account for the information that animates nature. Here is the quote:

Although machines can perform certain things as well as or perhaps better than any of us can do, they infallibly fall short in others, by which means we may discover that they did not act from knowledge, but only from the disposition of their organs. For while reason is a universal instrument which can serve for all contingencies, these organs have need of some special adaptation for every particular action. From this it follows that it is morally impossible that there should be sufficient diversity in any machine to allow it to act in all the events of life in the same way as our reason causes us to act.

The rise of the machines? II

How Likely Is a "Terminator" Future?
Brendan Dixon 

Celebrity scientist Michio Kaku is the latest to throw his support behind the "Terminator is coming" mantra. From the story at CNBC:

The moment that humanity is forced to take the threat of artificial intelligence seriously might be fast approaching, according to futurist and theoretical physicist Michio Kaku.

In an interview with CNBC's "The Future of Us," Kaku drew concern from the earlier-than-expected victory Google's deep learning machine notched this past March, in which it was able to beat a human master of the ancient board game Go. Unlike chess, which features far fewer possible moves, Go allows for more moves than there are atoms in the universe, and thus cannot be mastered by the brute force of computer simulation.

"This machine had to have something different, because you can't calculate every known atom in the universe -- it has learning capabilities," Kaku said. "That's what's novel about this machine, it learns a little bit, but still it has no self awareness ... so we have a long way to go."

But that self awareness might not be far off, according to tech minds like Elon Musk and Stephen Hawking, who have warned it should be avoided for the sake of future human survival.

And while Kaku agreed that accelerating advances in artificial intelligence could present a dilemma for humanity, he was hesitant to predict such a problem would evolve in his lifetime. "By the end of the century this becomes a serious question, but I think it's way too early to run for the hills," he said.

"I think the 'Terminator' idea is a reasonable one -- that is that one day the Internet becomes self-aware and simply says that humans are in the way," he said. "After all, if you meet an ant hill and you're making a 10-lane super highway, you just pave over the ants. It's not that you don't like the ants, it's not that you hate ants, they are just in the way."

Unlike others, Kaku is cautious, suggesting that few if any of us will live long enough to actually see the Terminator arise. Fears of our own creations coming to life are as old as history, from the Golem through Frankenstein's monster to, now, the ascent of sentient computers. The publicized successes of Artificial Intelligence and our deep faith in technology spur this fear's most recent form.

But should they?

Kaku makes the case that something significant took place when DeepMind's AlphaGo beat Lee Sedol, considered one of the strongest Go players in the world, in March. Go is a computationally intractable game; that is, the game is too big for a computer, even one the size of the physical universe, to win through sheer brute force (i.e., by trying every conceivable position). To create a winning machine, DeepMind's developers had to design heuristics capable of taking on ranked players.
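The intractability claim is easy to sanity-check with rough numbers. Using the commonly cited approximations of about 250 legal moves per turn and about 150 turns per game (both figures are rough estimates, not exact counts):

```python
import math

branching = 250    # rough average legal moves per Go turn
game_length = 150  # rough average game length in moves

log10_games = game_length * math.log10(branching)
print(f"~10^{log10_games:.0f} possible move sequences")  # ~10^360

# For comparison, the observable universe holds roughly 10^80 atoms,
# so exhaustively enumerating Go positions is physically out of reach.
```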

Prior game-playing systems built heuristics using known rules and strategies, but, since even the best Go players cannot articulate why they make the moves they do, encoding rules and strategies for Go has led to only moderate success. DeepMind's breakthrough came in creating a Neural Network that "learned" from prior play what good moves and good board positions looked like. It's the ability to learn that Kaku believes puts us on the path, ultimately, to the Terminator.

AlphaGo used two sets of so-called Neural Networks to help it evaluate the board and select a move. A Neural Network learns, through controlled training, by adjusting the strength of the connections between the nodes in the network. Think of it as a grid of points with strings connecting a point to its nearest neighbors. Learning consists of adjusting how much tension each point puts on its neighbors through those strings, pulling the entire grid into a shape corresponding to the pattern the programmers want to detect.

Programmers do not know the correct tension values to properly match a pattern. So, instead, they build into the network a mathematical feedback system that allows each point in the grid to adjust the tension it gives its neighbors as the network succeeds and fails at detecting the desired pattern.
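In code, that feedback loop reduces to something like the following (a minimal single-node sketch of error-driven weight adjustment on a toy task; it is nothing like AlphaGo's actual networks, just the principle in miniature). The weights play the role of the "tension on the strings," and each wrong answer nudges them:

```python
import math, random

# One artificial node learning logical OR. The weights are the
# "tensions"; the error signal is the feedback that adjusts them.
weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias, lr = 0.0, 0.5   # lr: how hard each failure tugs on the weights

def output(x):
    return 1 / (1 + math.exp(-(weights[0]*x[0] + weights[1]*x[1] + bias)))

examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

for _ in range(5000):
    x, target = random.choice(examples)
    error = target - output(x)        # feedback: success or failure?
    weights[0] += lr * error * x[0]   # pull each tension toward
    weights[1] += lr * error * x[1]   # values that would have
    bias += lr * error                # produced the right answer

print([round(output(x), 2) for x, _ in examples])  # approaches [0, 1, 1, 1]
```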

Creating Neural Networks that work is hard; they do not always succeed. Sometimes small changes in the pattern will cause the network to fail. Sometimes the training plateaus or oscillates rather than converging on working tensions. Creating networks that match patterns well enough to win at Go took very clever programming and skill.

"Learning," then, is a loose term. The "learning" a Neural Network undergoes is a very far cry from what it means when we learn. All it means is that, using procedures developed by clever programmers, the system self-adjusts. To leap from self-adjusting programs to Terminator-style computers paving over humans that just happen to get in the way is not grounded in the data. It is a leap of faith worthy of a committed mystic.

The real problem is in what such leaps obscure. AlphaGo-like systems behave in ways that, because they self-adjust, we cannot predict. Because we cannot predict their behavior, we cannot know how, or when, a system will fail.

Making a bad decision in a game of Go is not threatening to humanity. But putting such systems in control where human life or safety is at stake does matter. And we do ourselves no favor worrying about a future that is nothing more than a statement of faith while the real problem lies closer at hand with the encroaching use of so-called artificially intelligent machines controlling critical systems. The final behavior of those systems is best left in the minds and hands of the only intelligent agents we know of: humans.

Rise of the machines?

On I.D. and expectations.

Horns of a Dilemma: Does Intelligent Design Do Too Little -- or Too Much?
Evolution News & Views 

An irony about intelligent design is that it is attacked, so to speak, from the front and from behind. Some, including theistic evolutionists, criticize ID's minimalism -- it declines to name a designer, to describe the act of design (so that you could picture it happening), or to say when or how often the design is instantiated in life, among other things.

So goes the complaint. Let's see here.

When: Different scientific fields tell us different things. Astronomy doesn't tell us when the Earth formed, but geology does. That doesn't make astronomy less of a science; each field answers questions the other cannot. Likewise, ID tells us whether something was designed or whether it arose via material causes; it doesn't tell you when the designer acted. But other fields can. Geology (dating methods), paleontology (fossils), and molecular biology (molecular clock methods) can potentially tell you when the designer acted to implement some design.

How often: As we learn more and more about where we should detect design, and as other fields tell us when that design happened, we can begin to get a handle on "how often" the designer acted. So this question is definitely not off limits to intelligent design and ID can help address it.

Identity of the designer: True, ID doesn't tell you who the designer is. That is because the scientific evidence doesn't tell us. This is a good example of ID respecting the limits of science. Some see it as a weakness of ID. In fact, it's a strength. As William Dembski has said, "This is not a matter of being vague but rather of not pretending to knowledge that we don't have."

We're accustomed to Darwinists saying things they don't know (scientifically) to be true. That doesn't mean we get to say things that we don't know (scientifically) to be true.

In a special irony, many theistic evolutionists tout methodological naturalism, criticizing ID for supposedly bringing God into science. These same individuals then pivot and complain that ID fails to identify the designer as God.

Meanwhile, design advocates are slammed for maximalism, or worse. Much worse. A friend shares with us these choice comments:

Evolutionary biologist Massimo Pigliucci (2002): ID is "bent on literally destroying science as we know it."

Physicist Marshall Berman (2005): "The current Intelligent Design movement poses a threat to all of science and perhaps to secular democracy itself."

Science journalist Robyn Williams (2006): "ID is, in a way, terrorism."

Whoa. So which is it, folks? Does ID do too little -- or too much? And why the hysteria?

ID may be limited, but if it can show that even one feature in living things is designed by an intelligence (no matter when, where, or how), the whole edifice of materialism collapses. That's why Darwinists are terrified. They cannot allow an intelligent foot in the door.


As for our theistic evolutionary friends, well, they've abandoned the principle of non-contradiction. Everything and nothing follows from that.

Saturday 5 November 2016

A simple lifeform? II

Kamikaze cells wage biowarfare and fight viruses with viruses
By Michael Le Page

Giants, self-sacrifice, biological warfare: this story has them all. A voracious marine predator plagued by a giant virus has a defence system we’ve never seen before – it fights back by making its very own virus.

The individuals that make these bioweapons sacrifice themselves for the greater good, saving their fellow predators in the process.

The single-celled predator, Cafeteria roenbergensis, is common in coastal waters around the world, where it snacks on bacteria (the biologists who discovered it in 1988 near the Danish town of Roenbjerg sat discussing their find in the local… yes, you guessed it).

But Cafeteria has a deadly enemy of its own, the giant CroV virus.

Most viruses are little more than a protein shell encapsulating a handful of genes. They depend entirely on the machinery of the cells they infect to make more copies of themselves.

But giant viruses, discovered only in 2003, are more like living cells than normal viruses. They have the machinery to make proteins, which means they are vulnerable to viral attack themselves. For example, maviruses infect CroVs, forcing them to make more maviruses instead of CroVs, as Matthias Fischer, now at the Max Planck Institute in Germany, discovered in 2011.

That, of course, is good news for Cafeteria, because mavirus halts the spread of CroV.

And Cafeteria has evolved to exploit the concept that the enemy of my enemy is my friend. Rather than waiting for maviruses to arrive by chance when CroVs attack, it actually carries the genes that code for mavirus inside its own genome.

These genes are usually dormant, but they get turned on when Cafeteria is invaded by CroV. “It acts as an inducible antiviral defence system,” write Fischer and his colleague Thomas Hackl in a new preprint paper.

The infected Cafeteria cell still dies – but when it breaks apart it releases maviruses rather than CroVs, preventing the spread of the infection. This, then, is altruistic behaviour, which turns out to be surprisingly common among microbes. For instance, some bacteria kill themselves as soon as they are infected by viruses to prevent the infection spreading.

Other microbes form spore-bearing structures, with the cells making the stalk sacrificing themselves to give the spore-forming cells at the top a chance of surviving.

Bioweapons at the ready
Cafeteria may not be the only organism to use living bioweapons to defend itself. A wide range of animals, from sea anemones to crocodiles, harbour genetic elements called Maverick transposons that closely resemble the mavirus genes. It's possible that some of these organisms can also unleash viruses that attack giant viruses.

It is common for viral genes to end up inside the genomes of animals. In fact, our genomes are littered with the mutant remains of viruses and genetic parasites.

Many viruses deliberately insert their genes into the genomes of the animals they attack, so they can lie dormant and emerge when conditions are favourable. In response, most animals have evolved ways of shutting down genes that code for viruses.

It is, however, extremely unusual for an organism to deliberately trigger virus production, as Cafeteria does – but then mavirus is unusual, too, because it targets another virus rather than Cafeteria itself.

What is common is for genes that originally came from viruses to be co-opted for new purposes. Genes of viral origin play a key role during pregnancy, for instance.

And some bacteria have “spearguns” that they use to attack other bacteria. These spearguns evolved from the apparatus that bacteria-attacking viruses use to inject their genes into their victims.


Journal reference: bioRxiv, DOI: 10.1101/068312

On physics' search for a theory of everything.

Should physics even try to converge on a grand unified theory?
Posted by News under Cosmology, Intelligent Design

Manjit Kumar at Physics World, reviewing Peter Watson’s Convergence: The Deepest Idea in the Universe, expresses some caution about that:

Wherever experimental evidence can be coaxed out of nature, it suffices to corroborate or refute a theory and serves as the sole arbiter of validity. But where evidence is sparse or absent, other criteria, including aesthetic ones, have been allowed to come into play – both in formulating a theory and evaluating it. Watson believes that because of this, in some ways “physics has become mathematics”, arguing that we are currently “living in an in-between time, and have no way of knowing whether many of the ideas current in physics will endure and be supported by experiment”.

This, Watson explains, deeply worries the likes of cosmologists Joseph Silk and George Ellis. At the end of 2014, Silk and Ellis argued in a Nature comment piece that some scientists appear to have “explicitly set aside” the need for experimental confirmation of our most ambitious theories, “so long as those theories are sufficiently elegant and explanatory”. They further complain that we are at the end of an era, “breaking with centuries of philosophical tradition” of defining scientific knowledge as empirical.

As Silk and Ellis point out, this situation has come about because particle physicists have struggled to go beyond the Standard Model. Their most prominent attempt has been the theory of supersymmetry, but the problem is that no supersymmetric particles have been found, and Silk and Ellis fear that its advocates will simply “retune” their models “to predict particles at masses beyond the reach of the LHC’s power of detection”.


Put simply, the war on falsifiability advances.

On Darwin's defenders.

Robert Richards and Evolutionary Apologetics
Evolution News & Views 

Evolutionary apologetics is the defense of Darwinian theory against all challenges -- scientific and otherwise. That Darwinism has not coincidentally been put to evil ends, while not in itself evidence of invalid science, would seem indisputable.

Its role in shaping Nazi ideology would also seem clear enough to anyone who has read a little about the subject. Because Hitler's Germany can't be topped for evil, the defense of Darwinism must have a refutation of the Darwin-Hitler connection.

Over at the website This View of Life, promising "Anything and everything from an evolutionary perspective," SUNY Binghamton biologist David Sloan Wilson wraps up a series of essays by various scholars seeking "A New Social Darwinism." Wilson writes:

Truth and reconciliation for Social Darwinism involves acknowledging the misuse of evolutionary theory, but it also involves acknowledging false accusations and the omission of benign uses of evolutionary theory.

From an apologetic perspective, those "false accusations" to be dealt with must include the stain of Hitler, an "invented history." Invented? To show as much, the series features an essay -- "Was Hitler a Darwinian? No! No! No!" -- by University of Chicago historian of science Robert Richards that takes shots at our colleagues Richard Weikart and David Berlinski, the film Expelled, and the "gossamer logic" of the "Intelligent Design crowd."

Plenty of other scholars have recognized Hitler's Darwinism, however crude and derivative. Richards mentions Hannah Arendt, John Gray, and the otherwise "astute historian" Peter Bowler, notably absent from the ranks of the "Intelligent Design crowd."

In any event, Professor Weikart has already dealt with Dr. Richards in a series of posts here:

"'Was Hitler a Darwinian?' Reviewing Robert Richards"

"Ignoring Evidence, Caricaturing Critics: Robert J. Richards's Was Hitler a Darwinian?"

"Is Robert Richards Right to Deny that Hitler Was a Darwinian?"

"Why My Critics Care So Much About the Darwin-Hitler Connection"

Why all the clamor to erase the Darwin-Hitler link? Weikart is characteristically astute:

[W]hy do they care about this at all? If they believe, as many do, that morality is simply "an illusion fobbed off on us by our genes," as evolutionary biologist E.O. Wilson and philosopher Michael Ruse famously put it, then what makes the illusions of some people superior to Hitler's illusions? Why do everything possible -- even denying obvious historical facts -- to obscure the historical linkages between Darwin and Hitler? I have a hunch that at some level they recognize that their evolutionary account of morality is inconsistent with reality.

As to the facts, Richards "misquotes and/or ignores the context of quotations," "ignores mountains of evidence," "caricatures the positions of those he disagrees with," "conflates certain key concepts," "totally ignores many of the most salient points I set forth in my books," and "even creates a new historical 'fact.'"


Quite simply, evolutionary apologetics must have its own historical alternative reality. The defense of Darwin demands it, and so the dish is served.

Michael Behe v. the critics

Irreducible Complexity and the Evolutionary Literature: A Response to Critics

Michael Behe 


Editor's note: In celebration of the 20th anniversary of biochemist Michael Behe's pathbreaking book Darwin's Black Box and the release of the new documentary Revolutionary: Michael Behe and the Mystery of Molecular Machines, we are highlighting some of Behe's "greatest hits." The following was published by Discovery Institute on July 31, 2000. Remember to get your copy of Revolutionary now! See the trailer here.


I. Summary

Although several persons have cited numerous references from the scientific literature purporting to show that the problem of irreducible complexity I pointed out in Darwin's Black Box is being seriously addressed, the references show no such thing. Invariably the cited papers or books either deal with non-irreducibly complex biochemical systems, or do not deal with them in enough detail for critical evaluation. I strongly emphasize, however, that I do not prefer it that way. I would sincerely welcome much more serious, sustained research in the area of irreducible complexity. I fully expect such research would heighten awareness of the difficulties of Darwinian evolution.

II. Web Spinners

The necessary starting point of Darwin's Black Box was the contention that, despite the common assumption that natural selection accounts for adaptive complexity, the origins of many intricate cellular systems have not yet been explained in Darwinian terms. After all, if the systems have already been explained, then there's no need to write. While most scientist-reviewers disagreed (often emphatically) with my proposal of intelligent design, most also admitted to a lack of Darwinian explanations. For example, microbiologist James Shapiro of the University of Chicago declared in National Review that "There are no detailed Darwinian accounts for the evolution of any fundamental biochemical or cellular system, only a variety of wishful speculations." (Shapiro 1996) In Nature University of Chicago evolutionary biologist Jerry Coyne stated, "There is no doubt that the pathways described by Behe are dauntingly complex, and their evolution will be hard to unravel. . . . [W]e may forever be unable to envisage the first proto-pathways." (Coyne 1996)

In a particularly scathing review in Trends in Ecology and Evolution Tom Cavalier-Smith, an evolutionary biologist at the University of British Columbia, nonetheless wrote, "For none of the cases mentioned by Behe is there yet a comprehensive and detailed explanation of the probable steps in the evolution of the observed complexity. The problems have indeed been sorely neglected -- though Behe repeatedly exaggerates this neglect with such hyperboles as 'an eerie and complete silence.'" (Cavalier-Smith 1997) Evolutionary biologist Andrew Pomiankowski agreed in New Scientist, "Pick up any biochemistry textbook, and you will find perhaps two or three references to evolution. Turn to one of these and you will be lucky to find anything better than 'evolution selects the fittest molecules for their biological function.'" (Pomiankowski 1996) In American Scientist Yale molecular biologist Robert Dorit averred, "In a narrow sense, Behe is correct when he argues that we do not yet fully understand the evolution of the flagellar motor or the blood clotting cascade." (Dorit 1997)

A prominent claim I made in Darwin's Black Box is that, not only are irreducibly complex biochemical systems unexplained, there have been very few published attempts even to try to explain them. This contention has been vigorously disputed not so much by scientists in the relevant fields as by Darwinian enthusiasts on the Internet. Several web-savvy fans of natural selection have set up extensive, sophisticated sites that appear to receive a significant amount of notice. They influence college students, reporters, and, sometimes, academic reviewers of my book such as Cal State-Fullerton biochemist Bruce Weber, who lists the addresses of the websites in his review in Biology and Philosophy as "summaries of the current research that Behe either missed or misrepresented" (Weber 1999), and Oxford physical chemist Peter Atkins, who writes:

Dr. Behe claims that science is largely silent on the details of molecular evolution, the emergence of complex biochemical pathways and processes that underlie the more traditional manifestations of evolution at the level of organisms. Tosh! There are hundreds, possibly thousands, of scientific papers that deal with this very subject. For an entry into this important and flourishing field, and an idea of the intense scientific effort that it represents (see the first link above) [sic]. (Atkins 1998)

The link Atkins refers to is a website called "Behe's Empty Box" that has been set up by a man named John Catalano, an admirer of Oxford biologist Richard Dawkins (his larger site is devoted to Dawkins' work, schedule, etc.). The Empty Box site is, I think, actually a valuable resource, containing links to many reviews, comments and other material, both critical and favorable, related to my book. One subsection of the site is entitled "Alive and Published," and contains citations to a large number of papers and books which Catalano believes belie my claim that "There has never been a meeting, or a book, or a paper on details of the evolution of complex biochemical systems." (Behe 1996) (p. 179) The citations were solicited on the web from anyone who had a suggestion, and then compiled by Catalano.

Something, however, seems to be amiss. The assertion here that very many papers have been published clashes with statements of the reviews I quoted earlier which say, for example, that "The problems have indeed been sorely neglected." (Cavalier-Smith 1997) Would reviewers such as Jerry Coyne and Tom Cavalier-Smith -- both antagonistic to my proposal of intelligent design -- be unaware of the "hundreds, possibly thousands, of scientific papers that deal with this very subject"? Both claims -- that the problems have been neglected and that the problems are being actively investigated -- cannot be correct. Either one set of reviewers is wrong, or there is some confusion about which publications to count. Which is it?

In the context of my book it is easy to realize that I meant there has been little work on the details of the evolution of irreducibly complex biochemical systems by Darwinian means. I had clearly noted that of course a large amount of work in many books and journals was done under the general topic of "molecular evolution," but that, overwhelmingly, it was either limited to comparing sequences (which, again, does not concern the mechanism of evolution) or did not propose sufficiently detailed routes to justify a Darwinian conclusion. Yet the Catalano site lists virtually any work on evolution, whether it pertains to irreducible complexity or not. For example it lists semi-popular books such as Patterns in Evolution: The New Molecular View by Roger Lewin, and general textbooks on molecular evolution such as Molecular Evolution by Wen-Hsiung Li.

Such books simply don't address the problems I raise. Molecular Evolution by Wen-Hsiung Li (Li 1997) is a fine textbook which does an admirable job of explicating current knowledge of how genes change with time. That knowledge, however, does not include how specific, irreducibly-complex biochemical systems were built. The text contains chapters on the molecular clock, molecular phylogenetics, and other topics which essentially are studies in comparing gene sequences. As I explained in Darwin's Black Box, comparing sequences is interesting but cannot explain how molecular machines arose. Li's book also contains chapters on the mechanisms (such as gene duplication, domain shuffling, and concerted evolution of multigene families) that are thought to be involved in evolution at the molecular level. Again, however, no specific system is justified in Darwinian terms.

Here is an illustration of the problem. Li spends several pages discussing domain shuffling in the proteins of the blood-clotting cascade (Li 1997). However, Li himself has not done work on understanding how the obstacles to the evolution of the clotting cascade may have been circumvented. Since those investigators who do work in that area have not yet published a detailed Darwinian pathway in the primary literature1, we can conclude that the answer will not be found in a more general text. We can further assume that the processes that text describes (gene duplication, etc.), although very significant, are not by themselves sufficient to understand how clotting, or by extension any complex biochemical system, may have arisen by Darwinian means.

Catalano's site lists other books that I specifically discussed in Darwin's Black Box, where I noted that, while they present mathematical models or brief general descriptions, they do not present detailed biochemical studies of specific irreducibly complex systems. (Gillespie 1991; Selander et al. 1991) There is no explanation on Catalano's website of why he thinks they address the questions I raised. The site also points to papers with intriguing titles, but which are studies in sequence analysis, such as "Molecular evolution of the vertebrate immune system" (Hughes and Yeager 1997) and "Evolution of chordate actin genes: evidence from genomic organization and amino acid sequences." (Kusakabe et al. 1997) As I explained in Darwin's Black Box, sequence studies by themselves can't answer the question of what the mechanism of evolution is. Catalano's compendium also contains citations to papers concerning the evolution of non-irreducibly complex systems, such as hemoglobin and metabolic pathways, which I specifically said may have evolved by natural selection. (Behe 1996) (pp. 150-151; 206-207)

III. Equivocal Terms

Another website that has drawn attention (as evidenced from the inquiries I receive soliciting my reaction to it) is authored by David Ussery (Ussery 1999), associate research professor of biotechnology at The Technical University of Denmark. One of his main goals is to refute my claim concerning the dearth of literature investigating the evolution of irreducibly complex systems. For example, in a section on intracellular vesicular transport he notes that I stated in Darwin's Black Box that a search of a computer database "to see what titles have both evolution and vesicle in them comes up completely empty." (Behe 1996) (p. 114) My search criterion, of having both words in the title, was meant to be a rough way to show that nothing much has been published on the subject. Ussery, however, writes that, on the contrary, a search of the PubMed database using the words evolution and vesicle identifies well over a hundred papers. Confident of his position, he urges his audience, "But, please, don't just take my word for it -- have a look for yourself!" (Ussery 1999)

The problem is that, as I stated in the book, I had restricted my search to the titles of papers, where occurrence of both words would probably mean they concerned the same subject. Ussery's search used the default PubMed setting, which also looks in abstracts.2 By doing so he picked up papers such as "Outbreak of nosocomial diarrhea by Clostridium difficile in a department of internal medicine." (Ramos et al. 1998) This paper discusses the "clinical evolution" (i.e., course of development) of diarrhea in hospitalized patients, who also had "vesicle catheterization." Not only do the words evolution and vesicle in this paper not refer to each other, the paper does not even use the words in the same sense as I did. Since the word evolution has many meanings, and since the word vesicle can mean just a container (like the word "box"), Ussery picked up equivocal meanings.

The paper cited above shows Ussery's misstep in an obvious way. However, there are other papers resulting from an Ussery-style search where, although they do not address the question I raised, the unrelatedness is not so obvious to someone outside the field. An example of a paper that is harder for someone outside the field to evaluate is "Evolution of the trappin multigene family in the Suidae." (Furutani et al. 1998) The authors examine the protein and gene sequences for a group of secretory proteins (the trappin family) which "have undergone rapid evolution" and are similar to "seminal vesicle clotting proteins." The results may be interesting, but the seminal vesicle is a pouch in the male reproductive tract for storing semen -- not at all the same thing as the vesicle in which intracellular transport occurs. And trappins are not involved in intracellular transport.

A second example is "Syntaxin-16, a putative Golgi t-SNARE." (Simonsen et al. 1998) This paper actually does concern a protein involved in intracellular vesicular transport. However, as the abstract states, "Database searches identified putative yeast, plant and nematode homologues of syntaxin-16, indicating that this protein is conserved through evolution." The database searches are sequence comparisons. Once again I reiterate, sequence comparisons by themselves cannot tell us how a complex system might have arisen by Darwinian means.

Instead of listing further examples let me just say that I have not seen a paper using Ussery's search criteria that addresses the Darwinian evolution of intracellular vesicular transport in a detailed manner, as I had originally asserted in my book.
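For readers who want to replay the two searches, PubMed's field tags reproduce the difference directly. Here is a sketch using Biopython's Entrez client; the [Title] tag restricts matching to titles, while leaving it off searches all fields, abstracts included, which is essentially the default behavior behind Ussery's count. The email address is a placeholder that NCBI requires you to fill in.

```python
from Bio import Entrez  # Biopython's client for NCBI's E-utilities

Entrez.email = "you@example.org"  # placeholder; NCBI asks for a contact address

def count_hits(term):
    handle = Entrez.esearch(db="pubmed", term=term)
    return int(Entrez.read(handle)["Count"])

# Title-restricted search (the criterion used in Darwin's Black Box):
print(count_hits("evolution[Title] AND vesicle[Title]"))

# Unrestricted search (Ussery's): the words may appear anywhere,
# including abstracts, which is how the equivocal hits creep in.
print(count_hits("evolution AND vesicle"))
```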

It is impossible for me to individually address the "hundreds, possibly thousands" of papers listed in these websites. But perhaps I don't have to. If competent scientists who are not friendly to the idea of intelligent design nonetheless say that "There are no detailed Darwinian accounts for the evolution of any fundamental biochemical or cellular system, only a variety of wishful speculations," (Shapiro 1996) and that "We may forever be unable to envisage the first proto-pathways" (Coyne 1996), then it is unlikely that much literature exists on these problems. So after considering the contents of the websites, we can reconcile the review of Peter Atkins with those of other reviewers. Yes, there are a lot of papers published on "molecular evolution," as I had clearly acknowledged in Darwin's Black Box. But very few of them concern Darwinian details of irreducibly complex systems, which is exactly the point I was making.

IV. Kenneth Miller

In Finding Darwin's God (Miller 1999) Kenneth Miller is also anxious to show my claims about the literature are not true (or at least are not true now, since the handful of papers he cites in his section "The Sound of Silence" were published after my book appeared). Yet none of the papers he cites deals with irreducibly complex systems.

The first paper Miller discusses concerns two structurally-similar enzymes, both called isocitrate dehydrogenase. The main difference between the two is simply that one uses the organic cofactor NAD while the other uses NADP. The two cofactors are very similar, differing only in the presence or absence of a phosphate group. The authors of the study show that by mutating several residues in either enzyme, they can change the specificity for NAD or NADP. (Dean and Golding 1997) Although the study is very interesting, at the very best it is microevolution of a single protein, not an irreducibly complex system.

The next paper Miller cites concerns "antifreeze" proteins. (Logsdon and Doolittle 1997) Again, these are single proteins that do not interact with other components; they are not irreducibly complex. In fact, they are great examples of what I agree evolution can indeed do -- start with a protein that accidentally binds something (ice nuclei in this case, maybe antibiotics in another case) and select for mutations that improve that property. But they don't shed light on irreducibly complex systems.

Another paper Miller cites concerns the cytochrome c oxidase proton pump (Musser and Chan 1998), which is involved in electron transfer. In humans six proteins take part in the function; in some bacteria fewer proteins are involved. While quite interesting, the mechanism of the system is not known in enough detail to understand what's going on; it remains in large part a black box. Further, the function of electron transfer does not necessarily require multiple protein components, so it is not necessarily irreducibly complex. Finally, the study is not detailed enough to criticize, saying things such as "It makes evolutionary sense that the cytochrome bc1 and cytochrome c oxidase complexes arose from a primitive quinol terminal oxidase complex via a series of beneficial mutations." In order to judge whether natural selection could do the job, we have to know what the "series of beneficial mutations" is. Otherwise it's like saying that a five-part mousetrap arose from a one-part mousetrap by a series of beneficial mutations.3

Finally, Miller discusses a paper that works out a scheme for how the organic-chemical components of the tricarboxylic acid (TCA) cycle, a central metabolic pathway, may have arisen gradually. (Melendez-Hevia et al. 1996) There are several points to make about it. First, the paper deals with the chemical interconversion of organic molecules, not with the enzymes of the pathway or their regulation. As an analogy, suppose someone described how petroleum is refined step by step, beginning with crude oil, passing through intermediate grades, and ending with, say, gasoline. He shows that the chemistry of the process is smooth and continuous, yet says nothing about the actual machinery of the refinery or its regulation, nothing about valves or switches. Clearly that is inadequate to show that petroleum refining developed step by step. Analogously, someone who is seriously interested in showing that a metabolic pathway could evolve by Darwinian means has to deal with the enzymatic machinery and its regulation.

The second and more important point is that, while the paper is very interesting, it doesn't address irreducible complexity. Either Miller hasn't read what I said in my book about metabolic pathways, or he is deliberately ignoring it. I clearly stated in Darwin's Black Box that metabolic pathways are not irreducibly complex (Behe 1996, pp. 141-142, 150-151), because components can be gradually added to a previous pathway. Thus metabolic pathways simply aren't in the same category as the blood clotting cascade or the bacterial flagellum. Although Miller somehow misses the distinction, other scientists do not. In a recent paper, Thornhill and Ussery write that something they call serial-direct-Darwinian-evolution "cannot generate irreducibly complex structures." But they think it may be able to generate a reducible structure, "such as the TCA cycle (Behe, 1996 a, b)." (Thornhill and Ussery 2000) In other words, Thornhill and Ussery acknowledge that the TCA cycle is not irreducibly complex, just as I wrote in my book. Miller seems unable or unwilling to grasp that point.

V. A Plea for More Research

In pointing out that not much research has been done on the Darwinian evolution of irreducibly complex biochemical systems, I should emphasize that I do not prefer it that way. I would sincerely welcome more research (especially experimental research, such as that done by Barry Hall -- see my discussion of Hall's work in the essay on the "acid test" at this website) into the supposed Darwinian origins of the complex systems I described in my book. I fully expect that, as in the field of origin-of-life studies, the more we know, the more difficult the problem will be recognized to be.

References:

Atkins, P. W. (1998). Review of Michael Behe, Darwin's Black Box. http://www.infidels.org/library/modern/peter_atkins/behe.html

Behe, M. J. (1996). Darwin's black box: the biochemical challenge to evolution. (The Free Press: New York.)

Cavalier-Smith, T. (1997). The blind biochemist. Trends in Ecology and Evolution 12, 162-163.

Coyne, J. A. (1996). God in the details. Nature 383, 227-228.

Dean, A. M. and Golding, G. B. (1997). Protein engineering reveals ancient adaptive replacements in isocitrate dehydrogenase. Proc. Natl. Acad. Sci. U.S.A. 94, 3104-3109.

Dorit, R. (1997). Molecular evolution and scientific inquiry, misperceived. American Scientist 85, 474-475.

Furutani, Y., Kato, A., Yasue, H., Alexander, L. J., Beattie, C. W., and Hirose, S. (1998). Evolution of the trappin multigene family in the Suidae. J. Biochem. (Tokyo) 124, 491-502.

Gillespie, J. H. (1991). The causes of molecular evolution. (Oxford University Press: New York.)

Hughes, A. L. and Yeager, M. (1997). Molecular evolution of the vertebrate immune system. Bioessays 19, 777-786.

Kusakabe, T., Araki, I., Satoh, N., and Jeffery, W. R. (1997). Evolution of chordate actin genes: evidence from genomic organization and amino acid sequences. Journal of Molecular Evolution 44, 289-298.

Li, W. H. (1997). Molecular evolution. (Sinauer Associates: Sunderland, Mass.)

Logsdon, J. M., Jr. and Doolittle, W. F. (1997). Origin of antifreeze protein genes: a cool tale in molecular evolution. Proc. Natl. Acad. Sci. U.S.A. 94, 3485-3487.

Melendez-Hevia, E., Waddell, T. G., and Cascante, M. (1996). The puzzle of the Krebs citric acid cycle: assembling the pieces of chemically feasible reactions, and opportunism in the design of metabolic pathways during evolution. Journal of Molecular Evolution 43, 293-303.

Miller, K. R. (1999). Finding Darwin's God: a scientist's search for common ground between God and evolution. (Cliff Street Books: New York.)

Musser, S. M. and Chan, S. I. (1998). Evolution of the cytochrome c oxidase proton pump. Journal of Molecular Evolution 46, 508-520.

Pomiankowski, A. (1996). The God of the tiny gaps. New Scientist, 14 September 1996.

Ramos, A., Gazapo, T., Murillas, J., Portero, J. L., Valle, A., and Martin, F. (1998). Outbreak of nosocomial diarrhea by Clostridium difficile in a department of internal medicine. Enfermedades Infecciosas Y Microbiologia Clinica 16, 66-69.

Selander, R. K., Clark, A. G., and Whittam, T. S. (1991). Evolution at the molecular level. (Sinauer Associates: Sunderland, Mass.)

Shapiro, J. (1996). In the details . . . what? National Review, 16 September 1996, 62-65.

Simonsen, A., Bremnes, B., Ronning, E., Aasland, R., and Stenmark, H. (1998). Syntaxin-16, a putative Golgi t-SNARE. European Journal of Cell Biology 75, 223-231.

Thornhill, R. H. and Ussery, D. W. (2000). A classification of possible routes of Darwinian evolution. Journal of Theoretical Biology 203, 111-116.

Ussery, D. (1999). A biochemist's response to "The Biochemical Challenge to Evolution". http://www.indiana.edu/~ensiweb/behe.rev.html

Weber, B. (1999). Irreducible complexity and the problem of biochemical emergence. Biology & Philosophy 14, 593-605.

Notes:

(1) See my essay on blood clotting at this website.

(2) In a later version of his review (the website has been updated several times, making it a moving target that is hard to pin down precisely), Ussery did note explicitly that one needed to search abstracts as well as titles to come up with the total of 130 papers. He then noted that a total of just four papers have both words in the title. These papers were not picked up in my search either because they were published after my search was completed in 1995 or because they were published before the mid-1980s (which is outside the scope of a CARL search). None of the papers affects the questions discussed in this manuscript.

(3) See my discussion of "mousetrap evolution" on this website.

On the Royal Society's peek under the hood re:Darwinism.

The Road to the Royal Society: The Problems That Matter, the Problems That Don't
Paul Nelson

Starting next Monday, November 7, the Royal Society (RS) will convene a three-day meeting at its London headquarters that has the potential to rival -- for historical significance -- the (in)famous 1980 Field Museum gathering on macroevolution, or the 1966 Wistar symposium on mathematical challenges to the neo-Darwinian interpretation of evolution. Structured to include open-ended roundtable discussions, the RS meeting is premised on the view that current textbook evolutionary theory falls far short of what it needs to explain, and that mechanisms and processes outside its customary purview require careful attention.

Like Sherlock Holmes's dog that did not bark in the night, however, the RS meeting is noteworthy for those speakers who were not invited. We do not mean the obvious heretics or ID bad guys, such as Mike Behe, Doug Axe, or Steve Meyer. Their absence from the program is entirely predictable, if one understands that the paradigm actually controlling the boundaries of admissible scientific dissent is not neo-Darwinian evolution, or a scientific theory at all, but the underlying philosophy of materialism or naturalism.

No, the noteworthy uninvited scientists look on casual inspection to be completely respectable, even highly distinguished. Cambridge University paleontologist Simon Conway Morris, for instance, or University of Zurich evolutionary biologist Andreas Wagner, both of whom have written extensively about how neo-Darwinian theory requires revision, are conspicuously absent from the program. Here is a speculation as to why.

Even One Part Per Billion of Teleology Is One Part Too Much

Let's start with Wagner. Over the past decade, Wagner has challenged the sufficiency of neo-Darwinian theory, mainly on the grounds that random or undirected changes to any complex functional system are far likelier to end up lost in enormous non-functional regions than to land in the very much smaller neighborhoods where novel function or structure occurs. In 2011, Wagner wrote:

...we know few of the principles that explain the ability of living things to innovate through a combination of natural selection and random genetic change. Random change by itself is not sufficient, because it does not necessarily bring forth beneficial phenotypes. For example, random change might not be suitable to improve most man-made, technological systems. Similarly, natural selection alone is not sufficient: As the geneticist Hugo de Vries already noted in 1905, 'natural selection may explain the survival of the fittest, but it cannot explain the arrival of the fittest.'

This criticism of the neo-Darwinian premise of random change should be familiar: one finds the objection featured prominently, for example, in the arguments of the 1966 Wistar participants, not to mention the writings of ID theorists since the early 1980s. Functional complexity and randomness stand fundamentally at odds with each other. If you doubt this, ask yourself if you would like to fly on a passenger jet that had undergone, let's say, one dozen unknown random changes to its flight control system. ("But hey, we're going to take off anyway!" said the demented pilot over the intercom cheerfully, as everyone made for the exits.)
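To make the scale of that problem concrete, here is a minimal toy simulation of my own (it is not from Wagner or the RS program; the alphabet, sequence length, mutation count, and the assumption of a single "functional" target are all illustrative choices, not biology). It applies the jet example's dozen random changes to a sequence and counts how often the result still matches the lone functional sequence in a space of roughly a trillion possibilities:

import random

ALPHABET = "ACGT"
SEQ_LEN = 20        # 4**20 possible sequences, roughly a trillion
N_CHANGES = 12      # the "one dozen unknown random changes" from the jet example
TRIALS = 200_000

random.seed(1)
# Illustrative assumption: exactly one "functional" sequence in the whole space.
functional = [random.choice(ALPHABET) for _ in range(SEQ_LEN)]

def mutate(seq):
    """Return a copy of seq with N_CHANGES random point substitutions
    (a substitution may redraw the same letter, which only improves the odds)."""
    s = list(seq)
    for _ in range(N_CHANGES):
        s[random.randrange(SEQ_LEN)] = random.choice(ALPHABET)
    return s

hits = sum(mutate(functional) == functional for _ in range(TRIALS))
print(f"{hits} of {TRIALS} randomly altered copies remained functional")

On a typical run, zero or one of the 200,000 altered copies lands back on the functional sequence; undirected change overwhelmingly wanders into the non-functional bulk of the space. That is the intuition Wagner starts from, which makes his proposed solution all the more interesting.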

What is probably less familiar is Wagner's solution to the problem of randomness. Probabilistically favored paths, he argues, must exist through sequence and function space to enable evolutionary processes to move from one island of novelty to another within the time available -- and those paths must have been built into the universe from the start. As Wagner writes at the conclusion of his 2014 book, The Arrival of the Fittest, "life's creativity draws from a source that is older than life, and perhaps older than time."

Bzzzzzzz! No Royal Society invitation for you, Andreas. Sounds like Platonism, right? And indeed it is, of a sort, anyway, as Wagner readily acknowledges. But if a universal library of Platonic forms enabled biological evolution to succeed, the materialist premise underlying neo-Darwinism must be wrong. ID skeptic and philosophical materialist Massimo Pigliucci can see where ideas of this coloring might be headed: danger, danger, Will Robinson. (He explains this week in "The Neo-Platonic Argument for Evolution Couldn't Be More Wrong.") Teleology is lurking behind those forms.

Teleology detectors also start buzzing loudly when the ideas of Simon Conway Morris come into view. Over the past 15 years, Conway Morris has contended that the "radical historical contingency" premise of neo-Darwinism -- namely, that the existence of Homo sapiens is the unexpected, unpredictable, or strictly one-off outcome of inherently random events -- is false. Rather, evolution was channeled from the start, and a species very much like Homo sapiens, if not H. sapiens itself, was destined to appear in the universe. You are not an accident of the cosmos.

Bzzzzzzz! Teleology jess don't sit right with us folks. No RS invite for Simon.

Philosophical Materialism and Its Invitation List

All joking aside, no one -- least of all the troublemakers themselves -- really expects ID troublemakers to be invited to speak at major evolution meetings. If that happened with any regularity, or even occasionally, no one would be reading this site, because the intelligent design debate would be (1) pretty much over, (2) never started in the first place, or (3) entirely different in its nature.

But neither Andreas Wagner nor Simon Conway Morris advocates ID; in fact, both are opposed to the idea. Yet even the subtle teleology of their theories is too strong a flavor for the Royal Society. (Let me say I would love to be wrong about this speculation: it would be delightful to learn that Wagner and Conway Morris were invited by the Royal Society to speak and couldn't make it, or declined the invitation.) Their teleology would, in fact, have been too much even for Darwin himself. "If I were convinced that I required such [teleological] additions to the theory of natural selection," Darwin wrote to Lyell in 1859, "I would reject it as rubbish."

But what if it's true -- namely, that teleology, or genuine purpose, is required to explain living things? Then materialism must give way to evidence. That is a problem that matters. Ultimately, the Royal Society, or anyone who wishes truly to understand the universe, must focus on the problems that matter. The ones that don't will take care of themselves.

Decanonising Darwin

So, You Thought Charles Darwin Discovered Natural Selection? Wrong
Jonathan Witt 

After marshaling evidence against the theory of evolution, skeptics sometimes throw Darwin a bone so as not to seem churlish. Hey, we say in essence, natural selection does accomplish things like spreading antibiotic resistance, and Charles Darwin deserves credit for discovering the principle of natural selection even if it isn't the bauplan-building wunderkind he made it out to be.

Yet this gives Darwin too much credit.

Natural Selection Comes to Edinburgh -- Before Darwin

Long before Darwin (or Alfred Russel Wallace), James Hutton, the father of modern geology, propounded the idea of evolution by natural selection. And at least two other men followed on his heels, doing the same well before Darwin articulated the idea.

A retrospective by Paul N. Pearson in the journal Nature reports:

Following the publication of On the Origin of Species in 1859, Charles Darwin learned (and duly acknowledged) that two previous authors had anticipated the theory of evolution by natural selection. The first account to come to light was by Patrick Matthew, who had briefly outlined the mechanism in an appendix to his 1831 book On Naval Timber and Arboriculture. The second was by the physician William Wells, who had speculated on selection and human evolution in 1818.

But some 50 years ago, E. B. Bailey described a still older version of the selection theory from a 1797 manuscript by the geologist James Hutton -- now chiefly famous for his early appreciation of geological time. Unfortunately, this work, entitled the Elements of Agriculture, never appeared in print. Now a more complete, published account has come to light from 1794.

The account appears in a 1794 tome, An Investigation of the Principles of Knowledge. There Hutton wrote the following:

If an organised body is not in the situation and circumstances best adapted to its sustenance and propagation, then, in conceiving an indefinite variety among the individuals of that species, we must be assured, that, on the one hand, those which depart most from the best adapted constitution, will be most liable to perish, while, on the other hand, those organised bodies, which most approach to the best constitution for the present circumstances, will be best adapted to continue, in preserving themselves and multiplying the individuals of their race.

After quoting the passage, Pearson continues:

For example, Hutton describes that in dogs that relied on "nothing but swiftness of foot and quickness of sight" for survival, "the most defective in respect of those necessary qualities, would be the most subject to perish, and that those who employed them in greatest perfection would be best preserved, consequently, would be those who would remain, to preserve themselves, and to continue the race." But if an acute sense of smell was "more necessary to the sustenance of the animal," then "the natural tendency of the race, acting upon the same principle of seminal variation, would be to change the qualities of the animal, and to produce a race of well scented hounds, instead of those who catch their prey by swiftness." The same "principle of variation" must also influence "every species of plant, whether growing in a forest or a meadow."

One might object that this was buried deep in a long book and mostly forgotten. Perhaps, but Hutton was a major scientific figure, and he disseminated his ideas not just by book but also by lecture and conversation.

And as Pearson also notes, Wells and Matthew -- the other two men known to have articulated the idea of evolution by natural selection before Darwin did -- just so happen to have been "educated in Hutton's home town of Edinburgh, a place famous for its scientific clubs and societies."

So here's the lay of the land: Hutton was born, and died, in Edinburgh, attended the University of Edinburgh, and returned to live in the city as an adult, where he was a member of the Royal Society of Edinburgh and a leading figure in the Scottish Enlightenment. This man, the father of modern geology and a fixture of Edinburgh scientific society, propounds a theory of natural selection, and a generation later two other scientists educated in that same scientific community articulate the same idea in their works. (See a post at Genomicron by T. Ryan Gregory for more on Wells's and Matthew's early musings about natural selection.)

Darwin, keep in mind, was also educated at the University of Edinburgh. So four of the earliest articulations of the idea of natural selection are by Edinburgh men, and Darwin is the last of the four.

Darwin Rediscovers

Pearson generously reconstructs this eyebrow-raising coincidence:

It may be more than coincidence that Wells, Matthew and Darwin were all educated in Hutton's home town of Edinburgh.... Studies of Darwin's private notebooks have shown that he came to the selection principle independently of earlier authors, as he always maintained. But it seems possible that a half-forgotten concept from his student days resurfaced afresh in his mind as he struggled to explain the observations of species and varieties compiled on the voyage of the Beagle.

I suspect that's about right. I have a hard time believing that Darwin knowingly and fiendishly stole Hutton's idea and then, even in his notebooks, carefully pretended he'd never heard it before. And I find it hard to believe that the idea of natural selection was not in the air of Edinburgh when Darwin was there as a student, at least in the scientific community. Hutton was too towering a scientific figure, and the idea of natural selection popping up three times in the written works of three separate Edinburgh-trained scientists following in his wake is surely more than coincidence.

In any case, Darwin didn't discover the idea of natural selection. Either he resurfaced the idea on his Beagle voyage without realizing he was remembering rather than discovering it, or he came upon the idea independently after three other Edinburgh men had articulated it.

And No, Darwin Doesn't Get Points for Theatrical Overstatement

But wasn't it Darwin who first realized the full power of natural selection? In his Nature article, Paul Pearson is careful to note that Hutton did not extend the idea in the way Darwin later would. Hutton thought natural selection's effects were limited to generating different races or breeds within a species.

Also, neither William Wells in 1818 nor Patrick Matthew in 1831 applied the idea of natural selection beyond the species level. So the common defense of Darwin at this stage is to remind readers that it's the Victorian gentleman of Down House who gets the credit for discovering the full power and import of natural selection.

But if natural selection actually cannot produce all the biological variety we see around us, if it can only produce variety within species and just a bit beyond the species level (see Michael Behe's The Edge of Evolution), then Darwin was most original precisely where he was wrong.

Darwin did galvanize interest in natural selection, but he did so in much the way those who started the legend of the Seven Cities of Gold encouraged exploration of the American Southwest. In both cases, a tall tale spurred attention and exploration. And in both cases, some worthwhile things were discovered. But the gold? Not so much.