Friday, 5 May 2017

An extrapolation revisited.

The Nylonase Story: When Imagination and Facts Collide
Ann Gauger

Editor’s note: Nylon is a modern synthetic product used in the manufacture, most familiarly, of ladies’ stockings but also a range of other goods, from rope to parachutes to auto tires. Nylonase is a popular evolutionary icon, brandished by theistic evolutionist Dennis Venema among others. In a series of three posts, Discovery Institute biologist Ann Gauger takes a closer look.

A significant problem for the neo-Darwinian story is the origin of new biological information. Clearly, information has increased over the course of life’s history — new life forms appeared, requiring new genes, proteins, and other functional information. The question is — how did it happen? This is the central question concerning the origin of living things.

Stephen Meyer and Douglas Axe have made this strong claim:

[T]he neo-Darwinian mechanism — with its reliance on a random mutational search to generate novel gene sequences — is not an adequate mechanism to produce the information necessary for even a single new protein fold, let alone a novel animal form, in available evolutionary deep time.
Their claim is based on the experimental finding by Doug Axe that functional protein folds are exceedingly rare, on the order of 1 in 10 to the 77th power, meaning that all the creatures of the Earth, searching by random mutation for the age of the Earth, could not find even one medium-size protein fold.
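A back-of-envelope calculation makes the scale of that claim concrete. The three bounds below are deliberately generous round numbers assumed for this sketch; they are not figures from Axe's paper, only the 1 in 10^77 rarity estimate is.

```python
# Rough comparison of search resources vs. target rarity.
# All three bounds below are assumed, generous round numbers.
from math import log10

fold_frequency = 1e-77           # Axe's estimated rarity of functional folds

organisms_alive_at_once = 1e30   # assumption: mostly bacteria
generations_per_year = 1e4       # assumption: fast bacterial turnover
years_of_life = 4e9              # assumption: rough age of life on Earth

total_trials = organisms_alive_at_once * generations_per_year * years_of_life
expected_hits = total_trials * fold_frequency

print(f"total trials  ~ 10^{round(log10(total_trials))}")
print(f"expected hits ~ 10^{round(log10(expected_hits))}")
```

Even with these inflated inputs, the expected number of successful trials is vanishingly small, which is the intuition behind "could not find even one."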

In contrast, Dennis Venema, professor of biology at Trinity Western University, claims in his book Adam and the Genome and in posts at the BioLogos website that getting new information is not hard. In his book, he presents several examples he thinks demonstrate the appearance of new information — the apparent evolution of new protein binding sites, for example. But the best way to reveal Axe and Meyer’s folly, he thinks (and says so in his book and in a post at BioLogos), would be to show that a genuinely “new” protein can evolve.

…[E]ven more convincing… would be an actual example of a functional protein coming into existence from scratch — catching a novel protein forming “in the act” as it were. We know of such an example — the formation of an enzyme that breaks down a man-made chemical.

In the 1970s, scientists made a surprising discovery: a bacterium that can digest nylon, a synthetic chemical not found in nature. These bacteria were living in the wastewater ponds of chemical factories, and they were able to use nylon as their only source of food. Nylon, however, was only about 40 years old at the time — how had these bacteria adapted to this novel chemical in their environment so quickly? Intrigued, the scientists investigated. What they discovered was that the bacteria had an enzyme (which they called “nylonase”) that effectively digested the chemical. This enzyme, interestingly, arose from scratch as an insertion mutation into the coding sequence of another gene. This insertion simultaneously formed a “stop” codon early in the original gene (a codon that tells the ribosome to stop adding amino acids to a protein) and formed a brand new “start” codon in a different reading frame. The new reading frame ran for 392 amino acids before the first “stop” codon, producing a large, novel protein. As in our example above, this new protein was based on different codons due to the frameshift. It was truly “de novo” — a new sequence.
Venema is right. If the nylonase enzyme did evolve from a frameshifted protein, it would genuinely be a demonstration that new proteins are easy to evolve. It would be proof positive that intelligent design advocates are wrong, that it’s not hard to get a new protein from random sequence. But the story bears reexamining. Is the new protein really the product of a frameshift, or did it pre-exist the introduction of nylon into the environment? What exactly do we know about this enzyme? Does the evidence substantiate the claims of Venema and others, or does it lead to other conclusions?

First, some history. In the 1970s Japanese scientists discovered that certain bacteria had developed the ability to degrade the synthetic polymer nylon. Okada et al. identified three enzymes responsible for nylon degradation, and named them EI, EII, and EIII. The genes that encoded them were named nylA, nylB, and nylC. They sequenced the plasmid on which the genes were found, and discovered that there was another gene on the same plasmid that was very similar to nylB; they named it nylB′. (We will focus on the story of nylB and nylB′ because they are the ones relevant to Venema’s story.)

So far all I have given you are the facts. Now here’s the interpretation of those facts. Some claimed that the nylonase enzyme, as it was called, had originated some time after people began making nylon (in the 1930s). That seemed plausible because nylonase was unable to degrade naturally occurring amide bonds — it could degrade only the amide bonds in nylon — and so, it was thought, could not have existed before nylon did. The popular conclusion was that the nylonase activity evolved in response to the presence of nylon in the environment, and thus was only forty years old. And here’s the big interpretive leap: it must not be hard to get new enzymes if a new one can evolve within a period of forty years.

Okada et al. had sequenced the nylB and nylB′ genes. They concluded that the nylonase activity was the result of a gene duplication followed by several mutations to the nylB gene. But at this point Susumu Ohno, an eminent molecular geneticist and evolutionary biologist, noticed something unusual about the nylB gene sequence (Ohno, 1984). Ohno had a theory that DNA with repeats of the right kind had the potential to code for protein in multiple frames, with no interrupting stop codons, and might thus be a source of “new” proteins. (If you are unfamiliar with the terms I just used, I invite you to take a look at my post tomorrow, where I will explain the necessary concepts. For those already familiar, I present some relevant data concerning the rarity of sequences that can be frameshifted.)

Ohno noticed that nylB, the gene for nylonase, might originally have encoded something else if a certain T were removed. The nylonase gene as it exists now has 1,179 bases, which encode a 392-amino-acid protein. Without a particular T embedded in the ATG start codon, though, the sequence would have specified a hypothetical original gene with a longer open reading frame (ORF) of 427 amino acids, in a different frame. Thus, Ohno proposed, a “new” protein with a new function acting on a new substrate was born when a T was inserted between a particular A and G in the DNA, creating a new ATG start codon and shifting the frame to code for a new protein, the protein we now call nylonase.
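The mechanism Ohno proposed is easy to see with a toy example. The DNA string below is invented for demonstration (it is not the real nylB sequence); only the mechanism, one inserted base creating a new ATG start codon and a new reading frame, mirrors his proposal.

```python
# Toy illustration of Ohno's frameshift scenario. The sequence is
# invented; only the mechanism (one inserted base -> new ATG start
# codon -> new reading frame) mirrors the proposal.

STOP_CODONS = {"TAA", "TAG", "TGA"}

def codons(seq, offset=0):
    """Split a DNA string into codons, starting at the given frame offset."""
    return [seq[i:i + 3] for i in range(offset, len(seq) - 2, 3)]

original = "CCAGGTTGGAGATTGAAGCATAA"        # hypothetical pre-insertion gene
mutated = original[:3] + "T" + original[3:]  # insert a T between an A and a G

start = mutated.find("ATG")                  # the insertion creates a start codon
new_frame = codons(mutated, start)
stop_at = next(i for i, c in enumerate(new_frame) if c in STOP_CODONS)

print(codons(original))       # the original reading frame
print(new_frame[:stop_at])    # ['ATG', 'GTT', 'GGA', 'GAT'] -- a frameshifted ORF
```

Because the new frame is offset from the old one, every downstream codon differs, which is why the resulting protein would be a genuinely new sequence.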

Ingenious. According to Ohno, nylonase could be a new enzyme, appearing suddenly with no known precursors via a sudden frameshift. (Note that all of this assumes that new protein folds are easy to get.) Ohno published this hypothesis in the Proceedings of the National Academy of Sciences. It was a hypothesis only, however, as a careful reading of his paper shows. One heading, for example:

The RS-IIA Coding Sequence [nylB] for 6-AHA LOH [nylonase] Embodies an Alternative, Longer Open Reading Frame That Might Have Been the Original Coding Sequence [Emphasis added.]
and the text says:

I suggest that the RS-IIA base sequence [nylB] was originally a coding sequence for an arginine-rich polypeptide chain 427 or so residues long in its length and that the coding sequence for one of the two isozymic forms of 6-AHA LOH [nylonase] arose from its alternative open reading frame. [Emphasis added.]
Ohno presented arguments for why his suggestion was plausible, but did not provide evidence that the “original” gene ever existed or was used (in fact he says it was unlikely to be useful based on its amino acid composition), or that the insertion ever happened. Nonetheless, the frame-shift hypothesis for the origin of nylonase has been widely proclaimed as fact (though, notably, not by Okada et al. who have done most of the work).

If the nylonase story as told above were true, namely that a frameshift mutation resulted in the de novo generation of a new protein fold with a new function, it would indeed constitute a substantial refutation of Meyer and Axe’s claim. If a frameshift mutation can produce a random new open reading frame in real, observable time, and give rise to a new functional enzyme, then it must not be that hard to make new functional protein folds. In other words, functional protein folds must not be rare in sequence space. And therefore Stephen Meyer’s arguments about the difficulty of getting enough new biological information to generate a new fold must be wrong as well. Venema flatly asserts:

If de novo protein-coding genes such as nylonase can come into being from scratch, as it were, then it is demonstrably the case that new protein folds can be formed by evolutionary mechanisms without difficulty….[I]f Meyer had understood de novo gene formation — as we have seen, he mistakenly thought it was an unexplained process — he would have known that new protein folds could indeed be easily developed by evolutionary processes.
Slam dunk, right?

A little caution in accepting this story without hard evidence would be wise. In genetics we are taught that frameshift mutations are extremely disruptive: they scramble the entire downstream coding sequence and typically introduce premature stop codons, yielding truncated nonsense. A biologist’s basic intuition should be that frameshifts are highly unlikely to produce something useful. The only reasons for the widespread acceptance of Ohno’s hypothesis that I can come up with are the unusual character of the sequence itself, Ohno’s reputation as a brilliant scientist (which he was), and wish-fulfillment on the part of some evolutionary biologists.

Fortunately, science marches on, and evidence continues to accumulate. The same group of Japanese scientists continued their study of the nylonase genes. nylB appeared to be the result of a gene duplication of nylB′ that occurred some time ago. EII′ (the enzyme encoded by nylB′) had very little nylonase activity, while EII (the enzyme encoded by nylB) was about 1,000-fold more active. The two enzymes differed in amino acid sequence at 47 positions out of 392. With some painstaking work, the Japanese scientists determined that just two mutations were sufficient to convert EII′ to the EII level of activity.

They then obtained the three-dimensional structure of an EII-EII′ hybrid protein. And with those results everything changed — or should have.

Here’s what Venema takes from the paper, and how he interprets the evidence:

…the three-dimensional structure of the protein has been solved using X-ray crystallography, a method that gives us the precise shape of the protein at high resolution. Nylonase is chock full of protein folds— exactly the sort of folds Meyer claims must be the result of design because evolution could not have produced them even with all the time since the origin of life. [Emphasis added.]
Unfortunately, Venema doesn’t have the story straight. Nylonase has a particular fold, a particular stable three-dimensional shape. Most proteins have a distinct fold — there are several thousand kinds of folds known so far, each with a distinct topology and structure. Folds are typically built from small secondary structures called alpha helices and beta strands, which assemble into the tertiary structure — the fold as a whole. Venema seems unclear about what a protein fold is and about the distinction between secondary and tertiary structure. No structural biologist would describe nylonase, or any protein, as “chock full of protein folds.” Perhaps Venema was referring to the smaller units of secondary structure I mentioned above, the alpha helices or beta strands. But it would appear he doesn’t know what a protein fold is.

Maybe that explains why Venema missed the essential point of the paper describing nylonase’s structure. The crystal structure of the EII-EII′ hybrid (a hybrid was necessary to crystallize the protein) revealed that it is not a new kind of fold, but a member of the beta-lactamase fold family. More specifically, it resembles the carboxylesterases, a subgrouping of that family. In addition, when the scientists checked EII′ and EII, they found that both enzymes had previously undetected carboxylesterase activity. In other words, the EII′ and EII enzymes were carboxylesterases. If it looks like a duck and quacks like a duck, it is a duck.

Thus, EII′ and EII did not have frameshifted new folds. They had pre-existing folds with activity characteristic of their fold type. There was no brand-new protein. No novel protein fold had emerged. And no frameshift mutation was required to produce nylonase.

Where did the nylon-eating ability come from? Carboxylesterases are enzymes with broad substrate specificities; they can carry out a variety of reactions. Their binding pocket is large and can accommodate a lot of different substrates. They are “promiscuous” enzymes, in other words. Furthermore, the carboxylesterase reaction hydrolyzes a chemical bond similar to the one hydrolyzed by nylonase. Tests revealed that both the EII and EII′ enzymes have carboxylesterase and nylonase activity. They can hydrolyze both substrates. In fact it is possible both had carboxylesterase activity and a low level of nylonase activity from the beginning, even before the appearance of nylon.

nylB′ may be the original gene from which nylB came. Apparently there was a gene duplication at some point in the past, and the two genes have accumulated mutations since then — they differ at 47 of 392 amino acid positions. The time of that duplication is unknown, but it was not recent, because it takes time to accumulate that many differences. At least some of those mutations must confer the high level of nylonase activity on EII, the enzyme made by nylB: EII′, the enzyme made by nylB′, has only a low ability to degrade nylon, while EII degrades nylon about 1,000-fold better. So one or more of those 47 amino acid differences must be the cause of the high level of nylonase activity in EII. Through careful work, Kato et al. identified which amino acid changes were responsible for the increased nylonase activity. Just two stepwise mutations present in EII, when introduced into EII′, were sufficient to convert the weak enzyme to full nylonase activity.

From Kato et al. (1991):

Our studies demonstrated that among the 47 amino acids altered between the EII and EII’ proteins, a single amino acid substitution at position 181 was essential for the activity of 6-aminohexanoate-dimer hydrolase [nylonase] and substitution at position 266 enhanced the effect.
So. This is not the story of a highly improbable frameshift producing a new functional enzyme. This is the story of a pre-existing enzyme with a low level of promiscuous nylonase activity, which improved its activity toward nylon by first one, then another selectable mutation. In other words, this is a completely plausible case of gene duplication, mutation, and selection operating on a pre-existing enzyme to improve a pre-existing low-level activity, exactly the kind of event that Meyer and Axe specifically acknowledge as a possibility, given the time and probabilistic resources available. Indeed, the origin of nylonase actually provides a nice example of the optimization of a pre-existing fold’s function, not the innovation or creation of a novel fold.

As the scientists who carried out the structural determination for nylonase themselves note:

Here, we propose that amino acid replacements in the catalytic cleft of a preexisting esterase with the beta-lactamase fold resulted in the evolution of the nylon oligomer hydrolase. [Emphasis added.]
Let’s put to bed the fable that the nylon oligomer hydrolase EII, colloquially known as nylonase, arose by a frameshift mutation, leading to the creation of a new functional protein fold. There is absolutely no need to postulate such a highly improbable event, and no justification for making this extravagant claim. Instead, there is a much more parsimonious explanation — that nylonase arose by a gene duplication event some time in the past, followed by a series of two mutations occurring after the introduction of nylon into the environment, which increased the nylon oligomer hydrolase activity of the nylB gene product to current levels. Could this series of events happen in forty years? Most certainly. Probably in much less time. In fact, it has been reported to happen in the lab under the right selective conditions. And most definitely, the evolution of nylonase does not call for the creation of a novel protein fold, nor did one arise. EII’s fold belongs to the carboxylesterase group within the beta-lactamase fold family. Carboxylesterases serve many functions and have been around much longer than forty years.


Douglas Axe and Stephen Meyer readily admit that this kind of evolutionary adaptation happens easily. A protein that already has a low level of activity for a particular substrate can be mutated to favor that side reaction over its original one, often in just a few steps. There are many cases of this in the literature. What Axe and Meyer do claim is that generating an entirely new protein fold via mutation and selection is implausible in the extreme. Nothing in the nylonase story that Dennis Venema tells shows otherwise.

Why attempting to design the undesignable remains a fool's errand.

Why Evolution Simulations Fail: Author of Evolutionary Informatics Book Explains
Evolution News @DiscoveryCSC

If you search for the phrase “evolution simulation” in Google, you’ll get many hits. Come to think of it, computer evolution simulations are an evolutionary icon. What of them? Do they falsify the claims of intelligent design theory? On a new episode of ID the Future, Ray Bohlin takes up the issue with Dr. Winston Ewert, co-author with William Dembski and Robert Marks II of a new book, An Introduction to Evolutionary Informatics.

Ewert argues that Richard Dawkins’s “Methinks It Is Like a Weasel” simulation doesn’t prove biological evolution and isn’t even very interesting. Ewert says there are some interesting computer evolution simulations, but he explains that they fail to model anything biologically realistic.

Instead they set up a straw man version of intelligent design, and simultaneously sneak teleology in, which kind of defeats the purpose. Download the podcast here, or listen to it here.

Dr. Ewert’s book is getting raves from some impressive scientists, including star mathematician Gregory Chaitin, author of Proving Darwin: Making Biology Mathematical. He calls the book, “An honest attempt to discuss what few people seem to realize is an important problem.”

Speaking of Chaitin, Dr. Bijan Nemati of the Jet Propulsion Laboratory and Caltech says:

With penetrating brilliance, and with a masterful exercise of pedagogy and wit, the authors take on Chaitin’s challenge, that Darwin’s theory should be subjectable to a mathematical assessment and either pass or fail. Surveying over seven decades of development in algorithmics and information theory, they make a compelling case that it fails.
Congratulations, Dr. Ewert, Dr. Marks, and Dr. Dembski! Get your copy now.

Wednesday, 3 May 2017

Wanted: a theory of devolution.

Crime and Punishment, and Darwin's Theory


The 150th anniversary of On the Origin of Species passed into history seven years ago. In the years that followed 1859, the impact of evolutionary thinking seeped across the culture of Europe and America. For years to come, we'll be tracing a series of century-and-a-half anniversaries of the effects of that seepage, and reflections on it as it was happening. This year, among other things, it's the publication of Dostoyevsky's Crime and Punishment (1866).
In The New Criterion, Gary Saul Morson writes on "The disease of theory: 'Crime & Punishment' at 150." By "disease of theory" he means something recognizable from our contemporary culture:
The decade after [Tsar Alexander II] ascended the throne witnessed the birth of the "intelligentsia," a word we get from Russian, where it meant not well-educated people but a group sharing a set of radical beliefs, including atheism, materialism, revolutionism, and some form of socialism. Intelligents (members of the intelligentsia) were expected to identify not as members of a profession or social class but with each other. They expressed disdain for everyday virtues and placed their faith entirely in one or another theory. Lenin, Trotsky, and Stalin were typical intelligents....
The intelligentsia prided itself on ideas discrediting all traditional morality. Utilitarianism suggested that people do, and should do, nothing but maximize pleasure. Darwin's Origin of Species, which took Russia by storm, seemed to reduce people to biological specimens. In 1862 the Russian neurologist Ivan Sechenov published his Reflexes of the Brain, which argued that all so-called free choice is merely "reflex movements in the strict sense of the word." And it was common to quote the physiologist Jacob Moleschott's remark that the mind secretes thought the way the liver secretes bile. These ideas all seemed to converge on revolutionary violence.
The hero of Crime and Punishment, Rodion Raskolnikov, discusses disturbances then in progress, including the radicals' revolutionary proclamations and a series of fires they may have set. But by nature he is no bloodthirsty killer. Quite the contrary, he has an immensely soft heart and is tortured by the sight of human suffering, which he cannot and refuses to get used to. "Man gets used to everything, the scoundrel!" he mutters, but then immediately embraces the opposite position: "And what if I'm wrong . . . what if man is not really a scoundrel . . . then all the rest is prejudice, simply artificial terrors and there are no barriers and it's all as it should be." ... He means that man cannot be a "scoundrel" because that is a moral category, and morality is simply "artificial terrors" imposed by religion and sheer "prejudice." There is only nature, and nature has causes, not moral purposes. It follows that all is as it should be because if moral concepts are illusions then things just are what they are. [Emphasis added.]
More:
The questions this masterpiece poses still haunt us, perhaps even more than when it first appeared. Revolution still attracts. "New atheists" and stale materialists advance arguments that were crude a hundred fifty years ago. Social scientists describe human decisions in absurdly simplistic terms. Our intelligentsia entertains theory after theory elevating them above the ordinary people they would control. Morality is explained away neurologically, sociobiologically, or as mere social convention.
My goodness, since Dostoyevsky documented the toxin of "theories," how little has changed. 
Except that 150 years ago there were still abundant great men in defense of the view opposite to materialism, while our own contemporaries, even the ones with their heart in the right place, seem increasingly diminutive in stature. The difference made in just a couple of decades -- a mere generation, the passage from father to son -- is remarkable. Unthinking surrender to the most prestigious theories, or evading serious confrontation with them, is now the order of the day. What we need is a theory not of evolution but of devolution.

Tuesday, 2 May 2017

Insect navigator from Down Under vs. Darwinism

A Monarch-Like Wonder from Mountains Down Under
Evolution News & Views

There's a little gray moth in Australia that does something extraordinary. Like the Monarch butterfly of North America, it migrates over long distances. Unlike the Monarch, it flies at night. And it doesn't even need to.

Current Biology describes this dull-colored little wonder, called the Bogong moth, as the "nocturnal counterpart of the migratory Monarch butterfly." Its summer home is as amazing as the mountain forests of Mexico where the Monarchs were discovered.

If you ever have the chance of hiking the Australian Alps in summer, you will find an ancient and beautiful mountain range. The grassy, treeless peaks, polished aeons ago by glaciers, are littered with countless granite boulders of all shapes and sizes. If you are not claustrophobic and dare to climb into one of the crevices formed by these rocky ensembles, your breath will be taken away, first by the dense clouds of ultra-fine, silvery dust drawn to your face by swift air currents channelled through the rock chimneys, and then by the sight of the source of the dust: hundreds of thousands of Bogong moths, neatly tiling the cave walls. In fact, there are about 17,000 of them per square meter, but you will only find them by chance if you are very lucky. This is because we only know of a handful of such caves, and the moths are present there only for four months during the height of the Australian summer. [Emphasis added.]

These moths were a source of food for aboriginal people, who found them in the mountain plains each summer. It took scientists more recent study to discover the rest of their "remarkable and interesting" tale: that they migrate a thousand kilometers from southern Queensland to these mountain caves each year. Here's how they outperform the Monarchs as navigators:

All this makes the Bogong moth, in many respects, similar to the iconic North American Monarch butterfly Danaus plexippus, except that it is a night-active species and therefore cannot use the sun for orientation. And unlike the Monarch butterfly, where the full forward and reverse migrations are performed by several generations, individual Bogong moths perform both migrations. If you think of the Monarch butterfly as the King of insect migration, the Bogong moth is certainly insect migration's Dark Lord.

Scientists don't know how they find their way without sunlight. Monarchs are known to use the polarization of light as it changes throughout the day; in fact, our neighbors at the University of Washington believe they have figured out the secret of the Monarchs' internal compass at long last. But Bogongs have only the moon, the stars, and the earth's magnetic field to provide cues. While these might guide them in the basic direction, what leads them specifically to the caves?

The Bogong moth's journey can thus be divided into a long-distance part and a final travel segment that lets them locate their specific target site. As the two parts operate on very different spatial scales, the mechanisms employed, and the information used, are likely not identical. To find their caves, Bogong moths might, for example, use their sense of smell and be attracted to the carcasses of those family members that were not fit enough for last year's return trip.

Let's see if authors Stanley Heinze and Eric Warrant can provide a Darwinian explanation. "Given the lengthy, difficult, and often lethal journey," they write, "there must be substantial selection pressure driving these animals along their migratory cycle," just as evolutionary theory would expect. "Nevertheless, and again similar to the Monarch butterfly, not all populations of Bogong moths are migratory." In fact, they say, there are non-migrating populations of Bogongs at both ends of the route and in other places. This defies evolutionary expectations so clearly that the authors never return to the question of what selection pressures might possibly create this remarkable behavior. They only mention additional examples of insects with mixed populations of migrants and non-migrants, confessing that "the migratory movements of these species are either erratic or poorly understood."

The performance of the little night-flying Bogong moth is enough, by contrast, to generate rhapsodic praise:

Bogong moths pinpoint a tiny mountain cave from over a thousand kilometres away, crossing terrain they have never crossed previously, and locating a place they have never been to before. Moreover, they do all this at night, fuelled by a few drops of nectar and using a brain the size of a grain of rice. Don't even ask an engineer if they could build a robot equivalent! To achieve this remarkable behaviour, the moth brain has to integrate sensory information from multiple sources and compute its current heading relative to an internal compass. It then has to compare that heading to its desired migratory direction and translate any mismatch into compensatory steering commands, while maintaining stable flight in very dim light while buffeted by cold turbulent winds.

On top of all that, the moth has to switch all its computations to the opposite direction come autumn, and reverse all its learned behaviors. "Its simple nervous system and its fixed, reproducible behaviour stand in stark contrast to the complexity of the problem that the Bogong moth must solve."

Here's where intelligent design can make a contribution. Because science can employ "electrophysiology, neuroanatomy, and behavioural analysis" to study this stable population, we can bestow upon these insects a better reputation than accidental products of blind selection pressures. We can, instead, reverse engineer the neural circuits that underlie "nocturnal vision, sensory integration, motor control, action selection and state-dependent changes of behaviour." As a result of design thinking, we might even be able to apply the knowledge gained to our own designed systems.

Seeing in the Dark

A related paper in Current Biology examines the question of how moths see to fly at night. Here's the upshot:

A new study shows that moth vision trades speed and resolution for contrast sensitivity at night. These remarkable neural adaptations take place in the higher-order neurons of the hawkmoth motion vision pathway and allow the insects to see during night flights.

Author Petri Ala-Laurila waxes eloquent about the difficulty of operating in the dark.

Seeing under very dim light poses a formidable challenge for the visual system. In these conditions, visual signals originating in a small number of photoreceptor cells have to be detected against neural noise originating in a much larger number of such cells, as well as in the neural circuitry processing these sparse signals. The randomness of rare photon arrivals makes it even harder to form reliable visual percepts in dim light. Yet many species show remarkable visual capabilities at extremely low light levels.

We are on that list; "dark-adapted humans can detect just a few light quanta absorbed on a small region of the peripheral retina." Nevertheless, hawkmoths are experts at deriving the most from the least, along with cockroaches, dung beetles, toads and Central American sweat bees. "In all of these cases, the striking behavioral performance of animals in dim light exceeds that of individual receptor cells at their visual inputs by orders of magnitude."

The secret, the author explains, is in the processing. Take what you have, pool it, and boost it. Summing the inputs adds clarity over time. "In our own retina, rod photoreceptors used mainly at low light levels have a longer integration time than cone photoreceptors that we use in daytime," Ala-Laurila says. "This is one example of receptor-level temporal summation."
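The payoff of summation is easy to quantify with a toy model. The sketch below uses invented signal and noise levels (nothing measured from moth or human photoreceptors): averaging N independent noisy samples of the same dim signal shrinks the noise by roughly the square root of N, which is why a longer integration time buys sensitivity.

```python
# Minimal sketch of temporal summation: pooling many noisy samples of
# the same dim signal recovers it from the noise. Signal and noise
# levels are invented for illustration.
import random

random.seed(0)

def noisy_sample(signal=1.0, noise_sd=5.0):
    """One receptor reading: a faint true signal buried in Gaussian noise."""
    return signal + random.gauss(0.0, noise_sd)

def summed_estimate(n):
    """Average n samples -- a crude model of a long integration time."""
    return sum(noisy_sample() for _ in range(n)) / n

print(round(summed_estimate(1), 2))        # one sample: dominated by noise
print(round(summed_estimate(100_000), 2))  # long integration: near the true 1.0
```

The same averaging explains the trade-off described next: a longer integration window buys sensitivity at the cost of temporal resolution, since the estimate lags behind any change in the signal.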

There are tradeoffs, however; "Unfortunately, there is no free lunch -- especially not in biology." Pooling and boosting adds noise, lowers resolution, and takes longer to compute. Imagine a moth flying in the dark, darting rapidly to avoid predators. You would think it needs a high-speed visual computer to do what it does. "Balancing sensitivity against acuity and speed is a trade-off problem where the optimal solution depends on light level and motion velocity." Remember back when we talked about optimization as an example of intelligent design science in action?

Ala-Laurila points to a study that quantified the amount of summation going on in the hawkmoth's brain and eyes. How the scientists did that is quite a trick, but they found out that the summation circuitry gives the moth a hundredfold boost in sensitivity, using nonlinear processing. A dim image, therefore, becomes quite bright as the scientists show in a comparison between original and processed images.

Another study we discussed last year showed that the moth's behavior is perfectly tuned to the motions of the flowers they seek at night for nectar. "These two studies together suggest that the neural mechanisms of the moth visual system have been matched perfectly to the requirements of its environment." What luck for Darwinian selection to get mutations in both systems to match up perfectly! Evolution is beautiful.

Similarly, it will be intriguing to understand the mechanisms that control the optimal tuning of spatial and temporal properties across multiple light levels in the moth. Recent studies have unraveled neural circuit mechanisms underlying luminance-dependent changes in the spatial summation of the vertebrate retina. Further mechanistic understanding of evolution as an innovator at visual threshold might even help us to build more sensitive and efficient night vision devices in the future. Aside from these potential future innovations, this study reveals above all some of the key neural secrets underlying the night flight of a moth in the wilderness. This understanding as such is simply beautiful.

Evolutionists at the University of Basel are even claiming that Darwinian evolution is helping moths adapt to city life by making them avoid bright lights. One can always invent a story about how blind processes achieve perfection, but returning to reality, we know design when we see it. Whether the Monarchs shown in Illustra Media's documentary Metamorphosis: The Beauty and Design of Butterflies or the night navigators described here (Bogong moth and hawkmoth), we just need to recognize what it points to.


There is "no free lunch -- especially not in biology." Aimless natural processes are woefully inadequate to deliver precision guided systems. Intelligence, by contrast, provides a feast for understanding.

Why slain myths become undead rather than stay buried.

Who Will Debunk The Debunkers?
By Daniel Engber


In 2012, network scientist and data theorist Samuel Arbesman published a disturbing thesis: What we think of as established knowledge decays over time. According to his book “The Half-Life of Facts,” certain kinds of propositions that may seem bulletproof today will be forgotten by next Tuesday; one’s reality can end up out of date. Take, for example, the story of Popeye and his spinach.

Popeye loved his leafy greens and used them to obtain his super strength, Arbesman’s book explained, because the cartoon’s creators knew that spinach has a lot of iron. Indeed, the character would be a major evangelist for spinach in the 1930s, and it’s said he helped increase the green’s consumption in the U.S. by one-third. But this “fact” about the iron content of spinach was already on the verge of being obsolete, Arbesman said: In 1937, scientists realized that the original measurement of the iron in 100 grams of spinach — 35 milligrams — was off by a factor of 10. That’s because a German chemist named Erich von Wolff had misplaced a decimal point in his notebook back in 1870, and the goof persisted in the literature for more than half a century.

By the time nutritionists caught up with this mistake, the damage had been done. The spinach-iron myth stuck around in spite of new and better knowledge, wrote Arbesman, because “it’s a lot easier to spread the first thing you find, or the fact that sounds correct, than to delve deeply into the literature in search of the correct fact.”

Arbesman was not the first to tell the cautionary tale of the missing decimal point. The same parable of sloppy science, and its dire implications, appeared in a book called “Follies and Fallacies in Medicine,” a classic work of evidence-based skepticism first published in 1989.1 It also appeared in a volume of “Magnificent Mistakes in Mathematics,” a guide to “The Practice of Statistics in the Life Sciences” and an article in an academic journal called “The Consequence of Errors.” And that’s just to name a few.

All these tellings and retellings miss one important fact: The story of the spinach myth is itself apocryphal. It’s true that spinach isn’t really all that useful as a source of iron, and it’s true that people used to think it was. But all the rest is false: No one moved a decimal point in 1870; no mistake in data entry spurred Popeye to devote himself to spinach; no misguided rules of eating were implanted by the sailor strip. The story of the decimal point manages to recapitulate the very error that it means to highlight: a fake fact, but repeated so often (and with such sanctimony) that it takes on the sheen of truth.

In that sense, the story of the lost decimal point represents a special type of viral anecdote or urban legend, one that finds its willing hosts among the doubters, not the credulous. It’s a rumor passed around by skeptics — a myth about myth-busting. Like other Russian dolls of distorted facts, it shows us that, sometimes, the harder that we try to be clear-headed, the deeper we are drawn into the fog.


No one knows this lesson better than Mike Sutton. He must be the world’s leading meta-skeptic: a 56-year-old master sleuth who first identified the myth about the spinach myth in 2010 and has since been working to debunk what he sees as other false debunkings. Sutton, a criminology professor at Nottingham Trent University, started his career of doubting very young: He remembers being told when he was still a boy that all his favorite rock stars on BBC’s “Top of the Pops” were lip-synching and that some weren’t even playing their guitars. Soon he began to wonder at the depths of this deception. Could the members of Led Zeppelin be in on this conspiracy? Was Jimmy Page a lie? Since then, Sutton told me via email, “I have always been concerned with establishing the veracity of what is presented as true, and what is something else.”

As a law student, Sutton was drawn to stories like that of Popeye and the inflated iron count in spinach, which to him demonstrated both the perils of “accepted knowledge” and the importance of maintaining data quality. He was so enamored of the story, in fact, that he meant to put it in an academic paper. But in digging for the story’s source, he began to wonder if it was true. “It drew me in like a problem-solving ferret to a rabbit hole,” he said.

Soon he’d gone through every single Popeye strip ever drawn by its creator, E.C. Segar, and found that certain aspects of the classic story were clearly false. Popeye first ate spinach for his super power in 1931, Sutton found, and in the summer of 1932 the strip offered this iron-free explanation: “Spinach is full of vitamin ‘A,’” Popeye said, “an’ tha’s what makes hoomans strong an’ helty.” Sutton also gathered data on spinach production from the U.S. Department of Agriculture and learned that it was on the rise before Segar’s sailor-man ever started eating it.

What about the fabled decimal point? According to Sutton’s research, a German chemist did overestimate the quantity of iron in spinach, but the mistake arose from faulty methods, not from poor transcription of the data.2 By the 1890s, a different German researcher had concluded that the earlier estimate was many times too high. Subsequent analyses arrived at something closer to the correct, still substantial value — now estimated to be 2.71 milligrams of iron per 100 grams of raw spinach, according to the USDA. By chance, the new figure was indeed about one-tenth of the original, but the difference stemmed not from misplaced punctuation but from the switch to better methodology. In any case, it wasn’t long before Columbia University analytical chemist Henry Clapp Sherman laid out the problems with the original result. By the 1930s, Sutton argues, researchers knew the true amount of iron in spinach, but they also understood that not all of it could be absorbed by the human body.3

The decimal-point story only came about much later. According to Sutton’s research, it seems to have been invented by the nutritionist and self-styled myth-buster Arnold Bender, who floated the idea with some uncertainty in a 1972 lecture. Then in 1981, a doctor named Terence Hamblin wrote up a version of the story without citation for a whimsical, holiday-time column in the British Medical Journal. The Hamblin article, unscholarly and unsourced, would become the ultimate authority for all the citations that followed. (Hamblin graciously acknowledged his mistake after Sutton published his research, as did Arbesman.)

In 2014, a Norwegian anthropologist named Ole Bjorn Rekdal published an examination of how the decimal-point myth had propagated through the academic literature. He found that bad citations were the vector. Instead of looking for its source, those who told the story merely plagiarized a solid-sounding reference: “(Hamblin, BMJ, 1981).” Or they cited someone in between — someone who, in turn, had cited Hamblin. This loose behavior, Rekdal wrote, made the transposed decimal point into something like an “academic urban legend,” its nested sourcing more or less equivalent to the familiar “friend of a friend” of schoolyard mythology.

Emerging from the rabbit hole, Sutton began to puzzle over what he’d found. This wasn’t just any sort of myth, he decided, but something he would term a “supermyth”: A story concocted by respected scholars and then credulously disseminated in order to promote skeptical thinking and “to help us overcome our tendency towards credulous bias.” The convolution of this scenario inspired him to look for more examples. “I’m rather a sucker for such complexity,” he told me.


Complicated and ironic tales of poor citation “help draw attention to a deadly serious, but somewhat boring topic,” Rekdal told me. They’re grabby, and they’re entertaining. But I suspect they’re more than merely that: Perhaps the ironies themselves can help explain the propagation of the errors.

It seems plausible to me, at least, that the tellers of these tales are getting blinkered by their own feelings of superiority — that the mere act of busting myths makes them more susceptible to spreading them. It lowers their defenses, in the same way that the act of remembering sometimes seems to make us more likely to forget. Could it be that the more credulous we become, the more convinced we are of our own debunker bona fides? Does skepticism self-destruct?


Sutton told me over email that he, too, worries that contrarianism can run amok, citing conspiracy theorists and anti-vaxxers as examples of those who “refuse to accept the weight of argument” and suffer the result. He also noted the “paradox” by which a skeptic’s obsessive devotion to his research — and to proving others wrong — can “take a great personal toll.” A person can get lost, he suggested, in the subterranean “Wonderland of myths and fallacies.”

In the last few years, Sutton has himself embarked on another journey to the depths, this one far more treacherous than the ones he’s made before. The stakes were low when he was hunting something trivial, the supermyth of Popeye’s spinach; now Sutton has been digging in more sacred ground: the legacy of the great scientific hero and champion of the skeptics, Charles Darwin. In 2014, after spending a year working 18-hour days, seven days a week, Sutton published his most extensive work to date, a 600-page broadside on a cherished story of discovery. He called it “Nullius in Verba: Darwin’s Greatest Secret.”

Sutton’s allegations are explosive. He claims to have found irrefutable proof that neither Darwin nor Alfred Russel Wallace deserves the credit for the theory of natural selection, but rather that they stole the idea — consciously or not — from a wealthy Scotsman and forest-management expert named Patrick Matthew. “I think both Darwin and Wallace were at the very least sloppy,” he told me. Elsewhere he’s been somewhat less diplomatic: “In my opinion Charles Darwin committed the greatest known science fraud in history by plagiarizing Matthew’s” hypothesis, he told the Telegraph. “Let’s face the painful facts,” Sutton also wrote. “Darwin was a liar. Plain and simple.”

Some context: The Patrick Matthew story isn’t new. Matthew produced a volume in the early 1830s, “On Naval Timber and Arboriculture,” that indeed contained an outline of the famous theory in a slim appendix. In a contemporary review, the noted naturalist John Loudon seemed ill-prepared to accept the forward-thinking theory. He called it a “puzzling” account of the “origin of species and varieties” that may or may not be original. In 1860, several months after publication of “On the Origin of Species,” Matthew would surface to complain that Darwin — now quite famous for what was described as a discovery born of “20 years’ investigation and reflection” — had stolen his ideas.

Darwin, in reply, conceded that “Mr. Matthew has anticipated by many years the explanation which I have offered of the origin of species, under the name of natural selection.” But then he added, “I think that no one will feel surprised that neither I, nor apparently any other naturalist, had heard of Mr. Matthew’s views.”

That statement, suggesting that Matthew’s theory was ignored — and hinting that its importance may not even have been quite understood by Matthew himself — has gone unchallenged, Sutton says. It has, in fact, become a supermyth, cited to explain that even big ideas amount to nothing when they aren’t framed by proper genius.

Sutton thinks that story has it wrong, that natural selection wasn’t an idea in need of a “great man” to propagate it. After all his months of research, Sutton says he found clear evidence that Matthew’s work did not go unread. No fewer than seven naturalists cited the book, including three in what Sutton calls Darwin’s “inner circle.” He also claims to have discovered particular turns of phrase — “Matthewisms” — that recur suspiciously in Darwin’s writing.

In light of these discoveries, Sutton considers the case all but closed. He’s challenged Darwin scholars to debates, picked fights with famous skeptics such as Michael Shermer and Richard Dawkins, and even written letters to the Royal Society, demanding that Matthew be given priority over Darwin.

But if his paper on the spinach myth convinced everyone who read it — even winning an apology from Terence Hamblin, one of the myth’s major sources — the work on Darwin barely registered. Many scholars ignored it altogether. A few, such as Michael Weale of King’s College, simply found it unconvincing. Weale, who has written his own book on Patrick Matthew, argued that Sutton’s evidence was somewhat weak and circumstantial. “There is no ‘smoking gun’ here,” he wrote, pointing out that at one point even Matthew admitted that he’d done little to spread his theory of natural selection. “For more than thirty years,” Matthew wrote in 1862, he “never, either by the press or in private conversation, alluded to the original ideas … knowing that the age was not suited for such.”


When Sutton is faced with the implication that he’s taken his debunking too far — that he’s tipped from skepticism to crankery — he lashes out. “The findings are so enormous that people refuse to take them in,” he told me via email. “The enormity of what has, in actual fact, been newly discovered is too great for people to comprehend. Too big to face. Too great to care to come to terms with — so surely it can’t be true. Only, it’s not a dream. It is true.” In effect, he suggested, he’s been confronted with a classic version of the “Semmelweis reflex,” whereby dangerous, new ideas are rejected out of hand.

Could Sutton be a modern-day version of Ignaz Semmelweis, the Hungarian physician who noticed in the 1840s that doctors were themselves the source of childbed fever in his hospital’s obstetric ward? Semmelweis had reduced disease mortality by a factor of 10 — a fully displaced decimal point — simply by having doctors wash their hands in a solution of chlorinated lime. But according to the famous tale, his innovations were too radical for the time. Ignored and ridiculed for his outlandish thinking, Semmelweis eventually went insane and died in an asylum. Arbesman, author of “The Half-Life of Facts,” has written about the moral of this story too. “Even if we are confronted with facts that should cause us to update our understanding of the way the world works,” he wrote, “we often neglect to do so.”

Of course, there’s always one more twist: Sutton doesn’t believe this story about Semmelweis. That’s another myth, he says — another tall tale, favored by academics, that ironically demonstrates the very point that it pretends to make. Citing the work of Sherwin Nuland, Sutton argues that Semmelweis didn’t go mad from being ostracized, and further that other physicians had already recommended hand-washing in chlorinated lime. The myth of Semmelweis, says Sutton, may have originated in the late 19th century, when a “massive nationally funded Hungarian public relations machine” placed biased articles into the scientific literature. Semmelweis scholar Kay Codell Carter concurs, at least insofar as Semmelweis was not, in fact, ignored by the medical establishment: From 1863 through 1883, he was cited dozens of times, Carter writes, “more frequently than almost anyone else.”

Yet despite all this complicating evidence, scholars still tell the simple version of the Semmelweis story and use it as an example of how other people — never them, of course — tend to reject information that conflicts with their beliefs. That is to say, the scholars reject conflicting information about Semmelweis, evincing the Semmelweis reflex, even as they tell the story of that reflex. It’s a classic supermyth!

And so it goes, a whirligig of irony spinning around and around, down into the depths. Is there any way to escape this endless, maddening recursion? How might a skeptic keep his sanity? I had to know what Sutton thought. “I think the solution is to stay out of rabbit holes,” he told me. Then he added, “Which is not particularly helpful advice.”

Footnotes

Its authors cite the story of the misplaced decimal point as an example of the “Bellman’s Fallacy” — a reference to a character from Lewis Carroll who says, “What I tell you three times is true.” Such mistakes, they wrote, illustrate “the ways in which truth may be obscured, twisted, or mangled beyond recognition, without any overt intention to do it harm.” ^
Another scholar with an interest in the spinach tale has found that in Germany, at least, the link between spinach and iron was being cited as conventional wisdom as early as 1853. This confusion may have been compounded by research that elided differences between dried and fresh spinach, Sutton says. ^
It’s long been suggested that high levels of oxalic acid — which are present in spinach — might serve to block absorption of iron, as they do for calcium, magnesium and zinc. Other studies find that oxalic acid has no effect on iron in the diet, though, and hint that some other chemical in spinach might be getting in the way. ^

How universal is language of life?

Reply To Kenneth Miller On The Genetic Code
Discovery Institute's Center for Science & Culture
Discovery Institute


On Tuesday, September 25, 2001, Professor Kenneth Miller of Brown University issued a press release entitled "A 'Dying Theory' Fails Again," available here: 

http://www.ncseweb.org/resources/articles/3071_km-3.pdf

In this document, Miller claims that the Discovery Institute (DI) tried to "smear" PBS's Evolution series when the DI charged that program with making a false statement about the universality of the genetic code. Miller also claims that the DI failed to tell the public that "the very discoveries they cite provide elegant and unexpected support for Darwin's theories."

These claims are false. Miller's press release, however, provides an excellent teaching opportunity for the DI, not only to show why Miller's claims are false, but also to amplify our original objection. We shall explain why statements such as "the genetic code is universal" not only harm science -- by creating what Charles Darwin called "false facts" -- but also cheat the public, by concealing the real puzzles facing evolutionary theory. We conclude by touching on some of the deeper issues raised by patterns of evidence such as the genetic code.

We begin with the errors and misrepresentations in Miller's press release.

Miller completely misrepresents the significance of a diagram reproduced in his press release from another source (Knight et al. 2001, Figure 2). This is a serious mistake, as Miller rests his case against the DI on his misunderstanding of this diagram.

Miller equates genetic code variants to minor differences in dialects of the same spoken language (e.g., English). This comparison is erroneous and misleading.

Miller claims that the successes of biotechnology prove the universality of the code. This is untrue, and ignores the literature on experiments employing organisms with variant codes.

Let's consider each problem in more detail:

1. Miller completely misrepresents Knight et al.'s composite phylogeny of genetic codes.

In his press release, Miller writes:

"Look closely at the figure from this paper, and you;ll see something remarkable. The variations from the standard code occur in regular patterns that can be traced directly back to the standard code, which sits at the center of the diagram."

This is false. The variant codes do not "occur in regular patterns," but appear independently in unrelated lineages. Knight et al. explain this pattern of convergent (i.e., non-homologous) appearance in the article itself:

"The genetic code varies in a wide range of organisms (FIG. 2 [reproduced in Miller's press release], some of which share no obvious similarities. Sometimes the same change recurs in different lineages: for instance, the UAA and UAG codons have been reassigned from Stop to Gln in some diplomonads, in several species of ciliates and in the green alga Acetabularia acetabulum (reviewed in Ref. 5). Similarly, animal and yeast mitochondria have independently reassigned AUA from Ile to Met." [1] 

In their caption to Figure 2, Knight et al. note explicitly that variant codes have arisen "repeatedly and independently in different taxa." This pattern of convergent variation has generated much discussion in the primary literature. [2] If these are indeed convergent changes, they do not provide evidence of common descent at all, but rather would be misleading similarities that, taken by themselves, generate a false history of the organisms in question.

In short, Miller completely misrepresents the Knight et al. composite phylogeny. There is no "regular pattern" to the variant codes that maps congruently onto phylogenetic trees from other data. Thus, far from providing what Miller calls "unexpected confirmation of the evolution of the code from a single common ancestor," the pattern of variant codes represents a puzzle for a single tree of life. 

2. Variant genetic codes are not analogous to the differences between dialects of the same language.

In his press release, Miller writes:

"As evolutionary biologists were quick to realize, slight differences in the genetic code are similar to differences between the dialects of a single spoken language. The differences in spelling and word meanings between the American, Canadian, and British dialects of English reflect a common origin. Exactly the same is true for the universal language of DNA."

This is--at best--a wildly inaccurate analogy. From context and other clues, English speakers can discern that the words "center" and "centre," or "color" and "colour," refer to the same object. Meaning is preserved by context, and the reader moves along without a hitch.

But a gene sequence from a ciliated protozoan such as Tetrahymena (for instance), with the codons UAA and UAG in its open reading frame (ORF), cannot be interpreted correctly by the translation machinery of other eukaryotes having the so-called "universal" code. In Tetrahymena, UAA and UAG code for glutamine. In the universal code, these are stop codons. Thus the translation machinery of most other eukaryotes, when reading the Tetrahymena gene, would stop at UAA or UAG. Instead of inserting glutamine into the growing polypeptide chain, and continuing to translate the mRNA, release factors would bind to the codons, and the ribosomes would halt protein synthesis. The resulting protein would be truncated in length and very possibly non-functional. Unlike variant spellings of "center," therefore, context cannot preserve meaning. With the codons UAA and UAG (comparing Tetrahymena thermophila and other eukaryotes) no shared context exists.
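The consequence of a reassigned stop codon can be made concrete with a minimal sketch. This is a toy, not a real translation engine: the codon tables below are deliberately tiny (only the handful of codons used), and the sample ORF is invented for illustration. The codon reassignments themselves (UAA/UAG read as glutamine in Tetrahymena, Stop in the standard code) are as described above.

```python
# Tiny illustrative codon tables -- only the codons used below.
STANDARD = {"AUG": "Met", "GGC": "Gly", "CAA": "Gln",
            "UAA": "STOP", "UAG": "STOP"}
# Tetrahymena reassigns UAA and UAG from Stop to glutamine.
TETRAHYMENA = dict(STANDARD, UAA="Gln", UAG="Gln")

def translate(mrna, table):
    """Read codons in frame; a STOP codon halts synthesis (release factor)."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = table[mrna[i:i + 3]]
        if amino_acid == "STOP":
            break  # translation terminates; protein is truncated here
        protein.append(amino_acid)
    return protein

orf = "AUGGGCUAAGGC"  # hypothetical ORF with an internal UAA codon
print(translate(orf, TETRAHYMENA))  # full product: Met-Gly-Gln-Gly
print(translate(orf, STANDARD))     # truncated at UAA: Met-Gly
```

The same mRNA yields a full-length product under the Tetrahymena table but a truncated fragment under the standard one, which is exactly why "context" cannot rescue the meaning the way it does for "color" versus "colour."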

Knight et al. present a much better analogy for code changes:

"Any change in the genetic code alters the meaning of a codon, which, analogous to reassigning a key on a keyboard, would introduce errors into every translated message." [3]

Indeed, for two decades (see below), it was exactly this deeply-embedded feature of the genetic code that led to strong predictions about its necessary universality across all organisms. It was widely thought that any change to the genetic code of an organism would affect all the proteins produced by that organism, leading to deleterious consequences (e.g., truncated or misfolded proteins) or lethality. Once the code evolved in the progenitor of all life, it "froze," and all subsequent organisms would carry that code.

In any case, the differences between genetic codes are not properly analogous to minor differences among dialects of a single language. 

3. Miller's references to biotechnology do not accurately represent the experimental literature on variant genetic codes.

In his press release, Miller writes:

"...the entire biotechnology industry is built upon the universality of the genetic code. Genetically-modified organisms are routinely created in the lab by swapping genes between bacteria, plants, animals, and viruses. If the coded instructions in those genes were truly as different as the critics of evolution would have you believe, none of these manipulations would work."

But some manipulations--namely, those involving organisms with variant codes--do not work, unless the researchers themselves intervene to ensure function. 

Consider, for instance, the release factor from the ciliate Tetrahymena thermophila. Release factors (in eukaryotes, these proteins are abbreviated as "eRF" to distinguish them from prokaryotic release factors) catalyze the separation of completed polypeptide chains (nascent proteins) from the ribosomal machinery. Unlike other eukaryotic release factors, however, that recognize all three stop codons (UAA, UGA, and UAG), the Tetrahymena thermophila release factor recognizes only the UGA codon as "stop."

In 1999, Andrew Karamyshev and colleagues at the University of Tokyo isolated the release factor (Tt-eRF1) from Tetrahymena thermophila. But in order to express and purify the protein, Karamyshev et al. had to manipulate it genetically first. Why? The Tetrahymena thermophila gene for Tt-eRF1 contains 10 codons in its open reading frame that would be interpreted as "stop" by other organisms--whereas Tetrahymena thermophila reads these codons as glutamine:

"To express and purify the recombinant Tt-eRF1 protein under heterologous expression conditions [i.e., in a cell other than Tetrahymena--Karamyshev et al. used yeast cells], 10 UAA/UAG triplets within the coding sequence were changed to the glutamine codon CAA or CAG by site-directed mutagenesis." [4]

Furthermore, Tt-eRF1 would not function when employed in combination with ribosomes (translation machinery) from other species:

"In spite of the overall conservative protein structure of Tt-eRF1 compared with mammalian and yeast eRF1s, the soluble recombinant Tt-eRF1 did not show any polypeptide release activity in vitro using rat or Artemia ribosomes." [5] Thus, when using an organism with a variant code (Tetrahymena thermophila), researchers found that

They needed to modify (i.e., intelligently manipulate) the gene sequences so that they could be expressed by other organisms, and

They discovered that a key component of the genetic code (namely, the release factor that terminates translation) would not function properly with the translation machinery of other organisms.

Experiments to change the identity of transfer RNA (tRNA)--another possible mechanism by which genetic codes might reassign codon "meanings"--have shown that the intermediate steps must be bridged by intelligent (directed) manipulation. In one such experiment, for instance, Margaret Saks, John Abelson, and colleagues at Caltech changed an E. coli arginine tRNA to specify a different amino acid, threonine. They accomplished this, however, only by supplying the bacterial cells (via a plasmid) with another copy of the wild-type threonine tRNA gene. This intelligently-directed intervention bridged the critical transition stage during which the arginine tRNA was being modified by mutations to specify threonine. [6] Indeed, in reporting on an earlier experiment to modify tRNA, Abelson and colleagues noted that "if multiple changes are required to alter the specificity of a tRNA, they cannot be selected but they can be constructed" [7]--constructed, that is, by intelligent design.

We stress here that, in contrast to Miller's blithe dismissal of the difficulties raised for biotechnology by variant genetic codes, experts in the field caution that assuming a "universal" code may lead to serious problems. In a recent article on the topic entitled "Codon reassignment and the evolving genetic code: problems and pitfalls in post-genome analysis," Justin O'Sullivan and colleagues at the University of Kent observe:

"The emerging non-universal nature of the genetic code, coupled with the fact that few genetic codes have been experimentally confirmed, has several serious implications for the post-genome era. The production of biologically active recombinant molecules requires that careful consideration be given to both the expression system and the original host genome. The substitution of amino acids within a protein encoded by a nonstandard genetic code could alter the structure, function or antibody recognition of the final product." [8]

Thus, Miller's statements on biotechnology are highly misleading. Variant codes are not a minor matter easily overcome in experiments using different organisms.

We conclude by considering some of the deeper issues raised by Miller's press release. 

A little history and some basic logic

Not so very long ago, the universality of the genetic code was widely regarded as an important prediction (or confirmation) of the theory of common descent. Consider, for instance, an evolutionary biology textbook by the zoologist Mark Ridley, entitled The Problems of Evolution (Oxford University Press, 1985). In his first chapter, "Is Evolution True?" Ridley argues that common descent predicts a universal genetic code. His formulation of this argument mirrors dozens of similar arguments present in the biological literature from the mid-1960s to the mid-1980s:

"The outstanding example of a universal homology is the genetic code...The universality of the code is easy to understand if every species is descended from a common ancestor. Whatever code was used by the common ancestor would, through evolution, be retained. It would be retained because any change in it would be disastrous. A single change would cause all the proteins of the body, perfected over millions of years, to be built wrongly; no such body could live. It would be like trying to communicate, but having swapped letters around in words; if you change every 'a' for an 'x', for example, and tried talking to people, they would not make much sense of it. Thus we expect the genetic code to be universal if all species have descended from a common ancestor." [9]

Shortly after Ridley's argument was published in The Problems of Evolution, the evolutionary biologist Brian Charlesworth reviewed the book. He cautioned that Ridley was "less sound on the more modern aspects" of evolution, including the genetic code. Ridley's genetic code argument, Charlesworth worried,

"provides an opening for the creationists by asserting that the genetic code is universal, whereas it is now known that slight deviations from the standard code occur in mitochondria and in Mycoplasma." [10]

But how did Ridley create "an opening for the creationists," if the genetic code variants are as insignificant as Kenneth Miller suggests?

Here we should consider a basic feature of the logic of scientific prediction. If a theory, T, strongly predicts a particular outcome, O, but O is not observed, then one has grounds for doubting T. Of course, this logical schema greatly oversimplifies how scientists may actually behave when met with a failed prediction. One can shift or broaden the prediction--"T didn't really predict O, but actually O plus something else"--or one can throw doubt onto some theory other than T, and blame it, rather than T, for the failed prediction.
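The schema in the paragraph above is the classical modus tollens, and the two escape routes can be stated in the same notation. The symbols T and O are from the text; A is introduced here merely to label the "some theory other than T" onto which blame is shifted.

```latex
% Modus tollens: a failed prediction counts against the theory.
\[
  \bigl[(T \rightarrow O) \wedge \neg O\bigr] \;\rightarrow\; \neg T
\]
% Escape route 1: broaden the prediction until it is a tautology,
% which no observation can refute (and none can confirm).
\[
  T \rightarrow (O \vee \neg O)
\]
% Escape route 2: blame an auxiliary assumption A, so that the failed
% prediction refutes only the conjunction, not T itself.
\[
  \bigl[\bigl((T \wedge A) \rightarrow O\bigr) \wedge \neg O\bigr]
  \;\rightarrow\; \neg(T \wedge A)
\]
```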

The problem is that both of these solutions weaken one's case for the theory T. Any theory that predicts an observational outcome and its negation is a theory without much empirical power. "It will rain today and it won't rain today" tells one everything and therefore nothing. If common descent predicts that the genetic code will be universal, except when it is not universal, then common descent does not actually specify any observations about the code.

One might also say that some other theory, linked conceptually to common descent, is responsible for the failed prediction of universality. In this move, the truth of common descent is preserved while another part of our biological knowledge pays the cost. Most biologists working on the evolution of the code have taken this route; Niles Lehman of SUNY-Albany, for instance, writes:

"Once thought universal, the specific relationships between amino acids and codons that are collectively known as the genetic code are now proving to be variable in many taxa. While this realization has been disappointing to some--the genetic code was often hailed as the ultimate evolutionary anchor in that its universality was perhaps the indisputable piece of evidence that all life shared a common ancestor at some point--it has also opened up a rich field of evolutionary analysis by forcing us to consider what sequence of molecular events in a cell could possibly allow for codon reassignment." [11]

Again, however, this move weakens the case for common descent. One preserves the truth of common descent only by cashing in one of the theory's predictions, namely, the universality of the code. "It seems we were wrong, after all, about the genetic code not being able to vary. So let's figure out how variant codes arise."

Well, how do variant codes arise? Kenneth Miller doesn't say, but that is not surprising. No one really knows, although that is not for lack of theories. Here we refer the curious reader to the superb review article by Knight, Freeland, and Landweber (2001), who list several different theories explaining codon change, none of which (they note) is unequivocally supported by the evidence.

Is it possible that the variant codes derived from a single common ancestor? Yes. 

It is also possible, of course, that they did not. Miller assumes that a single origin is the case, but there is a world of difference between assumptions and real knowledge.

These are matters for legitimate debate. What is not a matter for debate are the following facts:

The genetic code is not universal.

If the theory of common descent predicts a universal genetic code, then the theory predicts something that isn't so.

References

1. Robin D. Knight, Stephen J. Freeland, and Laura F. Landweber, "Rewiring the Keyboard: Evolvability of the Genetic Code," Nature Reviews Genetics 2 (2001):49-58; p. 49.

2. Catherine A. Lozupone, Robin D. Knight and Laura F. Landweber, "The molecular basis of nuclear genetic code change in ciliates," Current Biology 11 (2001):65-74; Patrick J. Keeling and W. Ford Doolittle, "Widespread and Ancient Distribution of a Noncanonical Genetic Code in Diplomonads," Molecular Biology and Evolution 14 (1997):895-901; A. Baroin-Tourancheau, N. Tsao, L.A. Klobutcher, R.E. Pearlman, and A. Adoutte, "Genetic code deviations in the ciliates: evidence for multiple and independent events," EMBO Journal 14 (1995):3262-3267.

3. Robin D. Knight, Stephen J. Freeland, and Laura F. Landweber, "Rewiring the Keyboard: Evolvability of the Genetic Code," Nature Reviews Genetics 2 (2001):49-58; p. 49. 

4. Andrew L. Karamyshev, Koichi Ito, and Yoshikazu Nakamura, "Polypeptide release factor eRF1 from Tetrahymena thermophila: cDNA cloning, purification and complex formation with yeast eRF3," FEBS Letters 457 (1999):483-488; p. 485.

5. Ibid., p. 487.

6. Margaret E. Saks, Jeffrey R. Sampson, and John Abelson, "Evolution of a Transfer RNA Gene Through a Point Mutation in the Anticodon," Science 279 (13 March 1998):1665-1670.

7. Jennifer Normanly, Richard C. Ogden, Suzanna J. Horvath & John Abelson, "Changing the identity of a transfer RNA," Nature 321 (15 May 1986):213-219.

8. Justin M. O'Sullivan, J. Bernard Davenport and Mick F. Tuite, "Codon reassignment and the evolving genetic code: problems and pitfalls in post-genome analysis," Trends in Genetics 17 (2001):20-22; p. 21. 

9. Mark Ridley, The Problems of Evolution (Oxford: Oxford University Press, 1985), pp. 10-11.

10. Brian Charlesworth, "Darwinism is alive and well," review of The Problems of Evolution, New Scientist 11 July 1985, p. 58.

11. Niles Lehman, "Please release me, genetic code," Current Biology 11 (2001):R63-R66; p. R63.