
Tuesday 2 May 2017

Insect navigator from Down Under vs. Darwinism

A Monarch-Like Wonder from Mountains Down Under
Evolution News & Views

There's a little gray moth in Australia that does something extraordinary. Like the Monarch butterfly of North America, it migrates over long distances. Unlike the Monarch, it flies at night. And it doesn't even need to.

Current Biology describes this dull-colored little wonder, called the Bogong moth, as the "nocturnal counterpart of the migratory Monarch butterfly." Its summer home is as amazing as the mountain forests of Mexico where the Monarchs were discovered.

If you ever have the chance of hiking the Australian Alps in summer, you will find an ancient and beautiful mountain range. The grassy, treeless peaks, polished aeons ago by glaciers, are littered with countless granite boulders of all shapes and sizes. If you are not claustrophobic and dare to climb into one of the crevices formed by these rocky ensembles, your breath will be taken away, first by the dense clouds of ultra-fine, silvery dust drawn to your face by swift air currents channelled through the rock chimneys, and then by the sight of the source of the dust: hundreds of thousands of Bogong moths, neatly tiling the cave walls. In fact, there are about 17,000 of them per square meter, but you will only find them by chance if you are very lucky. This is because we only know of a handful of such caves, and the moths are present there only for four months during the height of the Australian summer. [Emphasis added.]

These moths were a source of food for aboriginal people, who found them in the mountain plains each summer. It took more recent scientific study to uncover the rest of their "remarkable and interesting" tale: that they migrate a thousand kilometers from southern Queensland to these mountain caves each year. Here's how they outperform the Monarchs as navigators:

All this makes the Bogong moth, in many respects, similar to the iconic North American Monarch butterfly Danaus plexippus, except that it is a night-active species and therefore cannot use the sun for orientation. And unlike the Monarch butterfly, where the full forward and reverse migrations are performed by several generations, individual Bogong moths perform both migrations. If you think of the Monarch butterfly as the King of insect migration, the Bogong moth is certainly insect migration's Dark Lord.

Scientists don't know how they find their way without sunlight. Monarchs are known to use the polarization of light as it changes throughout the day; in fact, our neighbors at the University of Washington believe they have figured out the secret of the Monarchs' internal compass at long last. But Bogongs have only the moon, the stars, and the earth's magnetic field to orient by. While these might guide them in the basic direction, what leads them specifically to the caves?

The Bogong moth's journey can thus be divided into a long-distance part and a final travel segment that lets them locate their specific target site. As the two parts operate on very different spatial scales, the mechanisms employed, and the information used, are likely not identical. To find their caves, Bogong moths might, for example, use their sense of smell and be attracted to the carcasses of those family members that were not fit enough for last year's return trip.

Let's see if authors Stanley Heinze and Eric Warrant can provide a Darwinian explanation. "Given the lengthy, difficult, and often lethal journey, there must be substantial selection pressure driving these animals along their migratory cycle," they write, just as evolutionary theory would expect. "Nevertheless, and again similar to the Monarch butterfly, not all populations of Bogong moths are migratory." In fact, they say, there are non-migrating populations of Bogongs at both ends of the route and in other places. This defies evolutionary expectations so clearly that the authors never return to the question of what selection pressures might possibly create this remarkable behavior. They only mention additional examples of insects with mixed populations of migrants and non-migrants, confessing that "the migratory movements of these species are either erratic or poorly understood."

The performance of the little night-flying Bogong moth is enough, by contrast, to generate rhapsodic praise:

Bogong moths pinpoint a tiny mountain cave from over a thousand kilometres away, crossing terrain they have never crossed previously, and locating a place they have never been to before. Moreover, they do all this at night, fuelled by a few drops of nectar and using a brain the size of a grain of rice. Don't even ask an engineer if they could build a robot equivalent! To achieve this remarkable behaviour, the moth brain has to integrate sensory information from multiple sources and compute its current heading relative to an internal compass. It then has to compare that heading to its desired migratory direction and translate any mismatch into compensatory steering commands, while maintaining stable flight in very dim light while buffeted by cold turbulent winds.

On top of all that, the moth has to switch all its computations to the opposite direction come autumn, and reverse all its learned behaviors. "Its simple nervous system and its fixed, reproducible behaviour stand in stark contrast to the complexity of the problem that the Bogong moth must solve."

Here's where intelligent design can make a contribution. Because science can employ "electrophysiology, neuroanatomy, and behavioural analysis" to study this stable population, we can bestow upon these insects a better reputation than that of accidental products of blind selection pressures. We can, instead, reverse-engineer the "software" in the neural circuits that underlie "nocturnal vision, sensory integration, motor control, action selection and state-dependent changes of behaviour." As a result of design thinking, we might even be able to apply the knowledge gained to our own designed systems.

Seeing in the Dark

A related paper in Current Biology examines the question of how moths see to fly at night. Here's the upshot:

A new study shows that moth vision trades speed and resolution for contrast sensitivity at night. These remarkable neural adaptations take place in the higher-order neurons of the hawkmoth motion vision pathway and allow the insects to see during night flights.

Author Petri Ala-Laurila waxes eloquent about the difficulty of operating in the dark.

Seeing under very dim light poses a formidable challenge for the visual system. In these conditions, visual signals originating in a small number of photoreceptor cells have to be detected against neural noise originating in a much larger number of such cells, as well as in the neural circuitry processing these sparse signals. The randomness of rare photon arrivals makes it even harder to form reliable visual percepts in dim light. Yet many species show remarkable visual capabilities at extremely low light levels.

We are on that list; "dark-adapted humans can detect just a few light quanta absorbed on a small region of the peripheral retina." Nevertheless, hawkmoths are experts at deriving the most from the least, along with cockroaches, dung beetles, toads and Central American sweat bees. "In all of these cases, the striking behavioral performance of animals in dim light exceeds that of individual receptor cells at their visual inputs by orders of magnitude."

The secret, the author explains, is in the processing. Take what you have, pool it, and boost it. Summing the inputs adds clarity over time. "In our own retina, rod photoreceptors used mainly at low light levels have a longer integration time than cone photoreceptors that we use in daytime," Ala-Laurila says. "This is one example of receptor-level temporal summation."

There are tradeoffs, however; "Unfortunately, there is no free lunch -- especially not in biology." Pooling and boosting adds noise, lowers resolution, and takes longer to compute. Imagine a moth flying in the dark, darting rapidly to avoid predators. You would think it needs a high-speed visual computer to do what it does. "Balancing sensitivity against acuity and speed is a trade-off problem where the optimal solution depends on light level and motion velocity." Remember back when we talked about optimization as an example of intelligent design science in action?
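To make the trade-off concrete, here is a minimal sketch in Python (our own toy simulation, not the model used in the hawkmoth study): a dim, photon-limited signal plus neural noise is pooled over longer and longer time windows. Sensitivity, measured as signal-to-noise ratio, rises roughly with the square root of the integration window, while the number of independent samples per unit time, the temporal resolution, falls by the same factor.

```python
import numpy as np

# Toy simulation of temporal summation in dim light (illustrative only).
rng = np.random.default_rng(0)

def noisy_photoreceptor(mean_photons, n_steps, noise_sd=1.0):
    """Photon-limited response: Poisson photon arrivals plus additive neural noise."""
    return rng.poisson(mean_photons, n_steps) + rng.normal(0.0, noise_sd, n_steps)

def temporal_summation(response, window):
    """Pool the response over non-overlapping windows (a longer integration time)."""
    n = (len(response) // window) * window
    return response[:n].reshape(-1, window).sum(axis=1)

response = noisy_photoreceptor(mean_photons=0.2, n_steps=100_000)  # ~0.2 photons per step: very dim

for window in (1, 10, 100):
    pooled = temporal_summation(response, window)
    snr = pooled.mean() / pooled.std()          # empirical signal-to-noise ratio
    print(f"window={window:3d}  samples={len(pooled):6d}  SNR~{snr:.2f}")
```

Longer windows buy sensitivity at the cost of fewer samples per second, which is the same currency the moth's higher-order neurons appear to be trading in.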

Ala-Laurila points to a study that quantified the amount of summation going on in the hawkmoth's brain and eyes. How the scientists did that is quite a trick, but they found out that the summation circuitry gives the moth a hundredfold boost in sensitivity, using nonlinear processing. A dim image, therefore, becomes quite bright as the scientists show in a comparison between original and processed images.

Another study we discussed last year showed that the moth's behavior is perfectly tuned to the motions of the flowers it seeks at night for nectar. "These two studies together suggest that the neural mechanisms of the moth visual system have been matched perfectly to the requirements of its environment." What luck for Darwinian selection to get mutations in both systems to match up perfectly! Evolution is beautiful.

Similarly, it will be intriguing to understand the mechanisms that control the optimal tuning of spatial and temporal properties across multiple light levels in the moth. Recent studies have unraveled neural circuit mechanisms underlying luminance-dependent changes in the spatial summation of the vertebrate retina. Further mechanistic understanding of evolution as an innovator at visual threshold might even help us to build more sensitive and efficient night vision devices in the future. Aside from these potential future innovations, this study reveals above all some of the key neural secrets underlying the night flight of a moth in the wilderness. This understanding as such is simply beautiful.

Evolutionists at the University of Basel are even claiming that Darwinian evolution is helping moths adapt to city life by making them avoid bright lights. One can always invent a story about how blind processes achieve perfection, but returning to reality, we know design when we see it. Whether it is the Monarchs shown in Illustra Media's documentary Metamorphosis: The Beauty and Design of Butterflies or the night navigators described here, the Bogong moth and the hawkmoth, we just need to recognize what such design points to.


There is "no free lunch -- especially not in biology." Aimless natural processes are woefully inadequate to deliver precision guided systems. Intelligence, by contrast, provides a feast for understanding.

Why slain myths become undead rather than stay buried.

Who Will Debunk The Debunkers?
By Daniel Engber


In 2012, network scientist and data theorist Samuel Arbesman published a disturbing thesis: What we think of as established knowledge decays over time. According to his book “The Half-Life of Facts,” certain kinds of propositions that may seem bulletproof today will be forgotten by next Tuesday; one’s reality can end up out of date. Take, for example, the story of Popeye and his spinach.

Popeye loved his leafy greens and used them to obtain his super strength, Arbesman’s book explained, because the cartoon’s creators knew that spinach has a lot of iron. Indeed, the character would be a major evangelist for spinach in the 1930s, and it’s said he helped increase the green’s consumption in the U.S. by one-third. But this “fact” about the iron content of spinach was already on the verge of being obsolete, Arbesman said: In 1937, scientists realized that the original measurement of the iron in 100 grams of spinach — 35 milligrams — was off by a factor of 10. That’s because a German chemist named Erich von Wolff had misplaced a decimal point in his notebook back in 1870, and the goof persisted in the literature for more than half a century.

By the time nutritionists caught up with this mistake, the damage had been done. The spinach-iron myth stuck around in spite of new and better knowledge, wrote Arbesman, because “it’s a lot easier to spread the first thing you find, or the fact that sounds correct, than to delve deeply into the literature in search of the correct fact.”

Arbesman was not the first to tell the cautionary tale of the missing decimal point. The same parable of sloppy science, and its dire implications, appeared in a book called “Follies and Fallacies in Medicine,” a classic work of evidence-based skepticism first published in 1989.1 It also appeared in a volume of “Magnificent Mistakes in Mathematics,” a guide to “The Practice of Statistics in the Life Sciences” and an article in an academic journal called “The Consequence of Errors.” And that’s just to name a few.

All these tellings and retellings miss one important fact: The story of the spinach myth is itself apocryphal. It’s true that spinach isn’t really all that useful as a source of iron, and it’s true that people used to think it was. But all the rest is false: No one moved a decimal point in 1870; no mistake in data entry spurred Popeye to devote himself to spinach; no misguided rules of eating were implanted by the sailor strip. The story of the decimal point manages to recapitulate the very error that it means to highlight: a fake fact, but repeated so often (and with such sanctimony) that it takes on the sheen of truth.

In that sense, the story of the lost decimal point represents a special type of viral anecdote or urban legend, one that finds its willing hosts among the doubters, not the credulous. It’s a rumor passed around by skeptics — a myth about myth-busting. Like other Russian dolls of distorted facts, it shows us that, sometimes, the harder that we try to be clear-headed, the deeper we are drawn into the fog.


No one knows this lesson better than Mike Sutton. He must be the world’s leading meta-skeptic: a 56-year-old master sleuth who first identified the myth about the spinach myth in 2010 and has since been working to debunk what he sees as other false debunkings. Sutton, a criminology professor at Nottingham Trent University, started his career of doubting very young: He remembers being told when he was still a boy that all his favorite rock stars on BBC’s “Top of the Pops” were lip-synching and that some weren’t even playing their guitars. Soon he began to wonder at the depths of this deception. Could the members of Led Zeppelin be in on this conspiracy? Was Jimmy Page a lie? Since then, Sutton told me via email, “I have always been concerned with establishing the veracity of what is presented as true, and what is something else.”

As a law student, Sutton was drawn to stories like that of Popeye and the inflated iron count in spinach, which to him demonstrated both the perils of “accepted knowledge” and the importance of maintaining data quality. He was so enamored of the story, in fact, that he meant to put it in an academic paper. But in digging for the story’s source, he began to wonder if it was true. “It drew me in like a problem-solving ferret to a rabbit hole,” he said.

Soon he’d gone through every single Popeye strip ever drawn by its creator, E.C. Segar, and found that certain aspects of the classic story were clearly false. Popeye first ate spinach for his super power in 1931, Sutton found, and in the summer of 1932 the strip offered this iron-free explanation: “Spinach is full of vitamin ‘A,’” Popeye said, “an’ tha’s what makes hoomans strong an’ helty.” Sutton also gathered data on spinach production from the U.S. Department of Agriculture and learned that it was on the rise before Segar’s sailor-man ever started eating it.

What about the fabled decimal point? According to Sutton’s research, a German chemist did overestimate the quantity of iron in spinach, but the mistake arose from faulty methods, not from poor transcription of the data.2 By the 1890s, a different German researcher had concluded that the earlier estimate was many times too high. Subsequent analyses arrived at something closer to the correct, still substantial value — now estimated to be 2.71 milligrams of iron per 100 grams of raw spinach, according to the USDA. By chance, the new figure was indeed about one-tenth of the original, but the difference stemmed not from misplaced punctuation but from the switch to better methodology. In any case, it wasn’t long before Columbia University analytical chemist Henry Clapp Sherman laid out the problems with the original result. By the 1930s, Sutton argues, researchers knew the true amount of iron in spinach, but they also understood that not all of it could be absorbed by the human body.3

The decimal-point story only came about much later. According to Sutton’s research, it seems to have been invented by the nutritionist and self-styled myth-buster Arnold Bender, who floated the idea with some uncertainty in a 1972 lecture. Then in 1981, a doctor named Terence Hamblin wrote up a version of the story without citation for a whimsical, holiday-time column in the British Medical Journal. The Hamblin article, unscholarly and unsourced, would become the ultimate authority for all the citations that followed. (Hamblin graciously acknowledged his mistake after Sutton published his research, as did Arbesman.)

In 2014, a Norwegian anthropologist named Ole Bjorn Rekdal published an examination of how the decimal-point myth had propagated through the academic literature. He found that bad citations were the vector. Instead of looking for its source, those who told the story merely plagiarized a solid-sounding reference: “(Hamblin, BMJ, 1981).” Or they cited someone in between — someone who, in turn, had cited Hamblin. This loose behavior, Rekdal wrote, made the transposed decimal point into something like an “academic urban legend,” its nested sourcing more or less equivalent to the familiar “friend of a friend” of schoolyard mythology.

Emerging from the rabbit hole, Sutton began to puzzle over what he’d found. This wasn’t just any sort of myth, he decided, but something he would term a “supermyth”: A story concocted by respected scholars and then credulously disseminated in order to promote skeptical thinking and “to help us overcome our tendency towards credulous bias.” The convolution of this scenario inspired him to look for more examples. “I’m rather a sucker for such complexity,” he told me.


Complicated and ironic tales of poor citation “help draw attention to a deadly serious, but somewhat boring topic,” Rekdal told me. They’re grabby, and they’re entertaining. But I suspect they’re more than merely that: Perhaps the ironies themselves can help explain the propagation of the errors.

It seems plausible to me, at least, that the tellers of these tales are getting blinkered by their own feelings of superiority — that the mere act of busting myths makes them more susceptible to spreading them. It lowers their defenses, in the same way that the act of remembering sometimes seems to make us more likely to forget. Could it be that the more credulous we become, the more convinced we are of our own debunker bona fides? Does skepticism self-destruct?


Sutton told me over email that he, too, worries that contrarianism can run amok, citing conspiracy theorists and anti-vaxxers as examples of those who “refuse to accept the weight of argument” and suffer the result. He also noted the “paradox” by which a skeptic’s obsessive devotion to his research — and to proving others wrong — can “take a great personal toll.” A person can get lost, he suggested, in the subterranean “Wonderland of myths and fallacies.”

In the last few years, Sutton has himself embarked on another journey to the depths, this one far more treacherous than the ones he’s made before. The stakes were low when he was hunting something trivial, the supermyth of Popeye’s spinach; now Sutton has been digging in more sacred ground: the legacy of the great scientific hero and champion of the skeptics, Charles Darwin. In 2014, after spending a year working 18-hour days, seven days a week, Sutton published his most extensive work to date, a 600-page broadside on a cherished story of discovery. He called it “Nullius in Verba: Darwin’s Greatest Secret.”

Sutton’s allegations are explosive. He claims to have found irrefutable proof that neither Darwin nor Alfred Russel Wallace deserves the credit for the theory of natural selection, but rather that they stole the idea — consciously or not — from a wealthy Scotsman and forest-management expert named Patrick Matthew. “I think both Darwin and Wallace were at the very least sloppy,” he told me. Elsewhere he’s been somewhat less diplomatic: “In my opinion Charles Darwin committed the greatest known science fraud in history by plagiarizing Matthew’s” hypothesis, he told the Telegraph. “Let’s face the painful facts,” Sutton also wrote. “Darwin was a liar. Plain and simple.”

Some context: The Patrick Matthew story isn’t new. Matthew produced a volume in the early 1830s, “On Naval Timber and Arboriculture,” that indeed contained an outline of the famous theory in a slim appendix. In a contemporary review, the noted naturalist John Loudon seemed ill-prepared to accept the forward-thinking theory. He called it a “puzzling” account of the “origin of species and varieties” that may or may not be original. In 1860, several months after publication of “On the Origin of Species,” Matthew would surface to complain that Darwin — now quite famous for what was described as a discovery born of “20 years’ investigation and reflection” — had stolen his ideas.

Darwin, in reply, conceded that “Mr. Matthew has anticipated by many years the explanation which I have offered of the origin of species, under the name of natural selection.” But then he added, “I think that no one will feel surprised that neither I, nor apparently any other naturalist, had heard of Mr. Matthew’s views.”

That statement, suggesting that Matthew’s theory was ignored — and hinting that its importance may not even have been quite understood by Matthew himself — has gone unchallenged, Sutton says. It has, in fact, become a supermyth, cited to explain that even big ideas amount to nothing when they aren’t framed by proper genius.

Sutton thinks that story has it wrong, that natural selection wasn’t an idea in need of a “great man” to propagate it. After all his months of research, Sutton says he found clear evidence that Matthew’s work did not go unread. No fewer than seven naturalists cited the book, including three in what Sutton calls Darwin’s “inner circle.” He also claims to have discovered particular turns of phrase — “Matthewisms” — that recur suspiciously in Darwin’s writing.

In light of these discoveries, Sutton considers the case all but closed. He’s challenged Darwin scholars to debates, picked fights with famous skeptics such as Michael Shermer and Richard Dawkins, and even written letters to the Royal Society, demanding that Matthew be given priority over Darwin.

But if his paper on the spinach myth convinced everyone who read it — even winning an apology from Terence Hamblin, one of the myth’s major sources — the work on Darwin barely registered. Many scholars ignored it altogether. A few, such as Michael Weale of King’s College, simply found it unconvincing. Weale, who has written his own book on Patrick Matthew, argued that Sutton’s evidence was somewhat weak and circumstantial. “There is no ‘smoking gun’ here,” he wrote, pointing out that at one point even Matthew admitted that he’d done little to spread his theory of natural selection. “For more than thirty years,” Matthew wrote in 1862, he “never, either by the press or in private conversation, alluded to the original ideas … knowing that the age was not suited for such.”


When Sutton is faced with the implication that he’s taken his debunking too far — that he’s tipped from skepticism to crankery — he lashes out. “The findings are so enormous that people refuse to take them in,” he told me via email. “The enormity of what has, in actual fact, been newly discovered is too great for people to comprehend. Too big to face. Too great to care to come to terms with — so surely it can’t be true. Only, it’s not a dream. It is true.” In effect, he suggested, he’s been confronted with a classic version of the “Semmelweis reflex,” whereby dangerous, new ideas are rejected out of hand.

Could Sutton be a modern-day version of Ignaz Semmelweis, the Hungarian physician who noticed in the 1840s that doctors were themselves the source of childbed fever in his hospital’s obstetric ward? Semmelweis had reduced disease mortality by a factor of 10 — a fully displaced decimal point — simply by having doctors wash their hands in a solution of chlorinated lime. But according to the famous tale, his innovations were too radical for the time. Ignored and ridiculed for his outlandish thinking, Semmelweis eventually went insane and died in an asylum. Arbesman, author of “The Half-Life of Facts,” has written about the moral of this story too. “Even if we are confronted with facts that should cause us to update our understanding of the way the world works,” he wrote, “we often neglect to do so.”

Of course, there’s always one more twist: Sutton doesn’t believe this story about Semmelweis. That’s another myth, he says — another tall tale, favored by academics, that ironically demonstrates the very point that it pretends to make. Citing the work of Sherwin Nuland, Sutton argues that Semmelweis didn’t go mad from being ostracized, and further that other physicians had already recommended hand-washing in chlorinated lime. The myth of Semmelweis, says Sutton, may have originated in the late 19th century, when a “massive nationally funded Hungarian public relations machine” placed biased articles into the scientific literature. Semmelweis scholar Kay Codell Carter concurs, at least insofar as Semmelweis was not, in fact, ignored by the medical establishment: From 1863 through 1883, he was cited dozens of times, Carter writes, “more frequently than almost anyone else.”

Yet despite all this complicating evidence, scholars still tell the simple version of the Semmelweis story and use it as an example of how other people — never them, of course — tend to reject information that conflicts with their beliefs. That is to say, the scholars reject conflicting information about Semmelweis, evincing the Semmelweis reflex, even as they tell the story of that reflex. It’s a classic supermyth!

And so it goes, a whirligig of irony spinning around and around, down into the depths. Is there any way to escape this endless, maddening recursion? How might a skeptic keep his sanity? I had to know what Sutton thought. “I think the solution is to stay out of rabbit holes,” he told me. Then he added, “Which is not particularly helpful advice.”

Footnotes

1. Its authors cite the story of the misplaced decimal point as an example of the “Bellman’s Fallacy” — a reference to a character from Lewis Carroll who says, “What I tell you three times is true.” Such mistakes, they wrote, illustrate “the ways in which truth may be obscured, twisted, or mangled beyond recognition, without any overt intention to do it harm.”
2. Another scholar with an interest in the spinach tale has found that in Germany, at least, the link between spinach and iron was being cited as conventional wisdom as early as 1853. This confusion may have been compounded by research that elided differences between dried and fresh spinach, Sutton says.
3. It’s long been suggested that high levels of oxalic acid — which are present in spinach — might serve to block absorption of iron, as they do for calcium, magnesium and zinc. Other studies find that oxalic acid has no effect on iron in the diet, though, and hint that some other chemical in spinach might be getting in the way.

How universal is the language of life?

Reply To Kenneth Miller On The Genetic Code
Discovery Institute's Center for Science & Culture
Discovery Institute


On Tuesday, September 25, 2001, Professor Kenneth Miller of Brown University issued a press release entitled "A 'Dying Theory' Fails Again," available here: 

http://www.ncseweb.org/resources/articles/3071_km-3.pdf

In this document, Miller claims that the Discovery Institute (DI) tried to "smear" PBS's Evolution series when the DI charged that program with making a false statement about the universality of the genetic code. Miller also claims that the DI failed to tell the public that "the very discoveries they cite provide elegant and unexpected support for Darwin's theories."

These claims are false. Miller's press release, however, provides an excellent teaching opportunity for the DI, not only to show why Miller's claims are false, but also to amplify our original objection. We shall explain why statements such as "the genetic code is universal" not only harm science -- by creating what Charles Darwin called "false facts" -- but also cheat the public, by concealing the real puzzles facing evolutionary theory. We conclude by touching on some of the deeper issues raised by patterns of evidence such as the genetic code.

We begin with the errors and misrepresentations in Miller's press release.

Miller completely misrepresents the significance of a diagram reproduced in his press release from another source (Knight et al. 2001, Figure 2). This is a serious mistake, as Miller rests his case against the DI on his misunderstanding of this diagram.

Miller equates genetic code variants to minor differences in dialects of the same spoken language (e.g., English). This comparison is erroneous and misleading.

Miller claims that the successes of biotechnology prove the universality of the code. This is untrue, and ignores the literature on experiments employing organisms with variant codes.

Let's consider each problem in more detail:

1. Miller completely misrepresents Knight et al.'s composite phylogeny of genetic codes.

In his press release, Miller writes:

"Look closely at the figure from this paper, and you;ll see something remarkable. The variations from the standard code occur in regular patterns that can be traced directly back to the standard code, which sits at the center of the diagram."

This is false. The variant codes do not "occur in regular patterns," but appear independently in unrelated lineages. Knight et al. explain this pattern of convergent (i.e., non-homologous) appearance in the article itself:

"The genetic code varies in a wide range of organisms (FIG. 2 [reproduced in Miller's press release], some of which share no obvious similarities. Sometimes the same change recurs in different lineages: for instance, the UAA and UAG codons have been reassigned from Stop to Gln in some diplomonads, in several species of ciliates and in the green alga Acetabularia acetabulum (reviewed in Ref. 5). Similarly, animal and yeast mitochondria have independently reassigned AUA from Ile to Met." [1] 

In their caption to Figure 2, Knight et al. note explicitly that variant codes have arisen "repeatedly and independently in different taxa." This pattern of convergent variation has generated much discussion in the primary literature. [2] If these are indeed convergent changes, they do not provide evidence of common descent at all, but rather would be misleading similarities that, taken by themselves, generate a false history of the organisms in question.

In short, Miller completely misrepresents the Knight et al. composite phylogeny. There is no "regular pattern" to the variant codes that maps congruently onto phylogenetic trees from other data. Thus, far from providing what Miller calls "unexpected confirmation of the evolution of the code from a single common ancestor," the pattern of variant codes represents a puzzle for a single tree of life. 

2. Variant genetic codes are not analogous to the differences between dialects of the same language.

In his press release, Miller writes:

"As evolutionary biologists were quick to realize, slight differences in the genetic code are similar to differences between the dialects of a single spoken language. The differences in spelling and word meanings between the American, Canadian, and British dialects of English reflect a common origin. Exactly the same is true for the universal language of DNA."

This is--at best--a wildly inaccurate analogy. From context and other clues, English speakers can discern that the words "center" and "centre," or "color" and "colour," refer to the same object. Meaning is preserved by context, and the reader moves along without a hitch.

But a gene sequence from a ciliated protozoan such as Tetrahymena (for instance), with the codons UAA and UAG in its open reading frame (ORF), cannot be interpreted correctly by the translation machinery of other eukaryotes having the so-called "universal" code. In Tetrahymena, UAA and UAG code for glutamine. In the universal code, these are stop codons. Thus the translation machinery of most other eukaryotes, when reading the Tetrahymena gene, would stop at UAA or UAG. Instead of inserting glutamine into the growing polypeptide chain and continuing to translate the mRNA, release factors would bind to the codons, and the ribosomes would halt protein synthesis. The resulting protein would be truncated in length and very possibly non-functional. Unlike variant spellings of "center," therefore, context cannot preserve meaning. With the codons UAA and UAG (comparing Tetrahymena thermophila and other eukaryotes) no shared context exists.
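To see concretely why context cannot rescue the message, here is a hypothetical Python illustration (the codon table is truncated and the sequence is invented; neither comes from the papers discussed here): the same reading frame is translated under the standard code, where UAA and UAG mean "stop," and under a Tetrahymena-like code, where they mean glutamine.

```python
# Toy codon tables: "*" marks a stop codon. Only the codons used below are listed.
STANDARD = {"AUG": "M", "UUU": "F", "GGC": "G", "CAA": "Q",
            "UAA": "*", "UAG": "*", "UGA": "*"}
TETRAHYMENA_LIKE = dict(STANDARD, UAA="Q", UAG="Q")   # only UGA remains a stop

def translate(mrna, code):
    """Read codons from the start of the frame until a stop codon or the end."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = code[mrna[i:i + 3]]
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

# Invented ORF with internal UAA and UAG codons, ending in a genuine UGA stop.
orf = "AUG" + "UUU" + "UAA" + "GGC" + "UAG" + "CAA" + "UGA"

print(translate(orf, TETRAHYMENA_LIKE))  # MFQGQQ -- full-length product
print(translate(orf, STANDARD))          # MF     -- truncated at the first internal UAA
```

Under the variant code the message yields a full-length peptide; under the standard code, translation halts at the first internal UAA, which is exactly the truncation described above.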

Knight et al. present a much better analogy for code changes:

"Any change in the genetic code alters the meaning of a codon, which, analogous to reassigning a key on a keyboard, would introduce errors into every translated message." [3]

Indeed, for two decades (see below), it was exactly this deeply-embedded feature of the genetic code that led to strong predictions about its necessary universality across all organisms. It was widely thought that any change to the genetic code of an organism would affect all the proteins produced by that organism, leading to deleterious consequences (e.g., truncated or misfolded proteins) or lethality. Once the code evolved in the progenitor of all life, it "froze," and all subsequent organisms would carry that code.

In any case, the differences between genetic codes are not properly analogous to minor differences among dialects of a single language. 

3. Miller's references to biotechnology do not accurately represent the experimental literature on variant genetic codes.

In his press release, Miller writes:

"...the entire biotechnology industry is built upon the universality of the genetic code. Genetically-modified organisms are routinely created in the lab by swapping genes between bacteria, plants, animals, and viruses. If the coded instructions in those genes were truly as different as the critics of evolution would have you believe, none of these manipulations would work."

But some manipulations--namely, those involving organisms with variant codes--do not work, unless the researchers themselves intervene to ensure function. 

Consider, for instance, the release factor from the ciliate Tetrahymena thermophila. Release factors (in eukaryotes, these proteins are abbreviated as "eRF" to distinguish them from prokaryotic release factors) catalyze the separation of completed polypeptide chains (nascent proteins) from the ribosomal machinery. Unlike other eukaryotic release factors, however, which recognize all three stop codons (UAA, UGA, and UAG), the Tetrahymena thermophila release factor recognizes only the UGA codon as "stop."

In 1999, Andrew Karamyshev and colleagues at the University of Tokyo isolated the release factor (Tt-eRF1) from Tetrahymena thermophila. But in order to express and purify the protein, Karamyshev et al. had to manipulate it genetically first. Why? The Tetrahymena thermophila gene for Tt-eRF1 contains 10 codons in its open reading frame that would be interpreted as "stop" by other organisms--whereas Tetrahymena thermophila reads these codons as glutamine:

"To express and purify the recombinant Tt-eRF1 protein under heterologous expression conditions [i.e., in a cell other than Tetrahymena--Karamyshev et al. used yeast cells], 10 UAA/UAG triplets within the coding sequence were changed to the glutamine codon CAA or CAG by site-directed mutagenesis." [4]

Furthermore, Tt-eRF1 would not function when employed in combination with ribosomes (translation machinery) from other species:

"In spite of the overall conservative protein structure of Tt-eRF1 compared with mammalian and yeast eRF1s, the soluble recombinant Tt-eRF1 did not show any polypeptide release activity in vitro using rat or Artemia ribosomes." [5] Thus, when using an organism with a variant code (Tetrahymena thermophila), researchers found that

They needed to modify (i.e., intelligently manipulate) the gene sequences so that they could be expressed by other organisms, and

They discovered that a key component of the genetic code (namely, the release factor that terminates translation) would not function properly with the translation machinery of other organisms.

Experiments to change the identity of transfer RNA (tRNA)--another possible mechanism by which genetic codes might reassign codon "meanings"--have shown that the intermediate steps must be bridged by intelligent (directed) manipulation. In one such experiment, for instance, Margaret Saks, John Abelson, and colleagues at Caltech changed an E. coli arginine tRNA to specify a different amino acid, threonine. They accomplished this, however, only by supplying the bacterial cells (via a plasmid) with another copy of the wild-type threonine tRNA gene. This intelligently-directed intervention bridged the critical transition stage during which the arginine tRNA was being modified by mutations to specify threonine. [6] Indeed, in reporting on an earlier experiment to modify tRNA, Abelson and colleagues noted that "if multiple changes are required to alter the specificity of a tRNA, they cannot be selected but they can be constructed" [7]--constructed, that is, by intelligent design.

We stress here that, in contrast to Miller's blithe dismissal of the difficulties raised for biotechnology by variant genetic codes, experts in the field caution that assuming a "universal" code may lead to serious problems. In a recent article on the topic entitled "Codon reassignment and the evolving genetic code: problems and pitfalls in post-genome analysis," Justin O'Sullivan and colleagues at the University of Kent observe:

"The emerging non-universal nature of the genetic code, coupled with the fact that few genetic codes have been experimentally confirmed, has several serious implications for the post-genome era. The production of biologically active recombinant molecules requires that careful consideration be given to both the expression system and the original host genome. The substitution of amino acids within a protein encoded by a nonstandard genetic code could alter the structure, function or antibody recognition of the final product." [8]

Thus, Miller's statements on biotechnology are highly misleading. Variant codes are not a minor matter easily overcome in experiments using different organisms.

We conclude by considering some of the deeper issues raised by Miller's press release. 

A little history and some basic logic

Not so very long ago, the universality of the genetic code was widely regarded as an important prediction (or confirmation) of the theory of common descent. Consider, for instance, an evolutionary biology textbook by the zoologist Mark Ridley, entitled The Problems of Evolution (Oxford University Press, 1985). In his first chapter, "Is Evolution True?" Ridley argues that common descent predicts a universal genetic code. His formulation of this argument mirrors dozens of similar arguments present in the biological literature from the mid-1960s to the mid-1980s:

"The outstanding example of a universal homology is the genetic code...The universality of the code is easy to understand if every species is descended from a common ancestor. Whatever code was used by the common ancestor would, through evolution, be retained. It would be retained because any change in it would be disastrous. A single change would cause all the proteins of the body, perfected over millions of years, to be built wrongly; no such body could live. It would be like trying to communicate, but having swapped letters around in words; if you change every 'a' for an 'x', for example, and tried talking to people, they would not make much sense of it. Thus we expect the genetic code to be universal if all species have descended from a common ancestor." [9]

Shortly after Ridley's argument was published in The Problems of Evolution, the evolutionary biologist Brian Charlesworth reviewed the book. He cautioned that Ridley was "less sound on the more modern aspects" of evolution, including the genetic code. Ridley's genetic code argument, Charlesworth worried,

"provides an opening for the creationists by asserting that the genetic code is universal, whereas it is now known that slight deviations from the standard code occur in mitochondria and in Mycoplasma." [10]

But how did Ridley create "an opening for the creationists," if the genetic code variants are as insignificant as Kenneth Miller suggests?

Here we should consider a basic feature of the logic of scientific prediction. If a theory, T, strongly predicts a particular outcome, O, but O is not observed, then one has grounds for doubting T. Of course, this logical schema greatly oversimplifies how scientists may actually behave when met with a failed prediction. One can shift or broaden the prediction--"T didn't really predict O, but actually O plus something else"--or one can throw doubt onto some theory other than T, and blame it, rather than T, for the failed prediction.

The problem is that both of these solutions weaken one's case for the theory T. Any theory that predicts an observational outcome and its negation is a theory without much empirical power. "It will rain today and it won't rain today" tells one everything and therefore nothing. If common descent predicts that the genetic code will be universal, except when it is not universal, then common descent does not actually specify any observations about the code.

One might also say that some other theory, linked conceptually to common descent, is responsible for the failed prediction of universality. In this move, the truth of common descent is preserved while another part of our biological knowledge pays the cost. Most biologists working on the evolution of the code have taken this route; Niles Lehman of SUNY-Albany, for instance, writes:

"Once thought universal, the specific relationships between amino acids and codons that are collectively known as the genetic code are now proving to be variable in many taxa. While this realization has been disappointing to some--the genetic code was often hailed as the ultimate evolutionary anchor in that is universality was perhaps the indisputable piece of evidence that all life shared a common ancestor at some point--it has also opened up a rich field of evolutionary analysis by forcing us to consider what sequence of molecular events in a cell could possibly allow for codon reassignment." [11] 

Again, however, this move weakens the case for common descent. One preserves the truth of common descent only by cashing in one of the theory's predictions, namely, the universality of the code. "It seems we were wrong, after all, about the genetic code not being able to vary. So let's figure out how variant codes arise."

Well, how do variant codes arise? Kenneth Miller doesn't say, but that is not surprising. No one really knows, although that is not for a lack of theories. Here we refer the curious reader to the superb review article by Knight, Freeland, and Landweber (2001), who list several different theories explaining codon change, none of which (they note) is unequivocally supported by the evidence.

Is it possible that the variant codes derived from a single common ancestor? Yes. 

It is also possible, of course, that they did not. Miller assumes that a single origin is the case, but there is a world of difference between assumptions and real knowledge.

These are matters for legitimate debate. What is not a matter for debate are the following facts:

The genetic code is not universal.

If the theory of common descent predicts a universal genetic code, then the theory predicts something that isn't so.

References

1. Robin D. Knight, Stephen J. Freeland, and Laura F. Landweber, "Rewiring the Keyboard: Evolvability of the Genetic Code," Nature Reviews Genetics 2 (2001):49-58; p. 49.

2. Catherine A. Lozupone, Robin D. Knight and Laura F. Landweber, "The molecular basis of nuclear genetic code change in ciliates," Current Biology 11 (2001):65-74; Patrick J. Keeling and W. Ford Doolittle, "Widespread and Ancient Distribution of a Noncanonical Genetic Code in Diplomonads," Molecular Biology and Evolution 14 (1997):895-901; A. Baroin-Tourancheau, N. Tsao, L.A. Klobutcher, R.E. Pearlman, and A. Adoutte, "Genetic code deviations in the ciliates: evidence for multiple and independent events," EMBO Journal 14 (1995):3262-3267. 

3. Robin D. Knight, Stephen J. Freeland, and Laura F. Landweber, "Rewiring the Keyboard: Evolvability of the Genetic Code," Nature Reviews Genetics 2 (2001):49-58; p. 49. 

4. Andrew L. Karamyshev, Koichi Ito, and Yoshikazu Nakamura, "Polypeptide release factor eRF1 from Tetrahymena thermophila: cDNA cloning, purification and complex formation with yeast eRF3," FEBS Letters 457 (1999):483-488; p. 485. 

5. Ibid., p. 487.

6. Margaret E. Saks, Jeffrey R. Sampson, and John Abelson, "Evolution of a Transfer RNA Gene Through a Point Mutation in the Anticodon," Science 279 (13 March 1998):1665-1670.

7. Jennifer Normanly, Richard C. Ogden, Suzanna J. Horvath & John Abelson, "Changing the identity of a transfer RNA," Nature 321 (15 May 1986):213-219. 

8. Justin M. O'Sullivan, J. Bernard Davenport and Mick F. Tuite, "Codon reassignment and the evolving genetic code: problems and pitfalls in post-genome analysis," Trends in Genetics 17 (2001):20-22; p. 21. 

9. Mark Ridley, The Problems of Evolution (Oxford: Oxford University Press, 1985), pp. 10-11.

10. Brian Charlesworth, "Darwinism is alive and well," review of The Problems of Evolution, New Scientist 11 July 1985, p. 58. 


11. Niles Lehman, "Please release me, genetic code," Current Biology 11 (2001):R63-R66; p. R63.