Saturday 6 May 2017

On Peace: The Watchtower Society's Commentary.

PEACE:

Sha·lohmʹ, the Hebrew word rendered “peace,” refers to the state of being free from war or disturbance (Jg 4:17; 1Sa 7:14; 1Ki 4:24; 2Ch 15:5; Job 21:9; Ec 3:8); it can convey the idea of health, safety, soundness (Ge 37:14, ftn), welfare (Ge 41:16), friendship (Ps 41:9), and entirety or completeness (Jer 13:19). The Greek word for peace (ei·reʹne) has taken on the same broad connotations as the Hebrew word sha·lohmʹ and may express the ideas of well-being, salvation, and concord, in addition to the absence of conflict. It occurs in the farewell exclamation “go in peace,” which somewhat corresponds to the expression ‘may it go well with you.’—Mr 5:34; Lu 7:50; 8:48; Jas 2:16; compare 1Sa 1:17; 20:42; 25:35; 29:7; 2Sa 15:9; 2Ki 5:19.

Since “peace” is not always the exact equivalent for the original-language words, the context must be taken into consideration to determine what is meant. For example, to be ‘sent away in peace’ could signify being sent away amicably, with no fear of interference from the one granting permission to leave. (Ge 26:29; 44:17; Ex 4:18) To ‘return in peace,’ as from battle, meant returning unharmed or victoriously. (Ge 28:21; Jos 10:21; Jg 8:9; 11:31; 2Ch 18:26, 27; 19:1) ‘Asking concerning the peace’ of a person meant inquiring as to how he was getting along. (Ge 29:6, ftn; 43:27, ftn) ‘Working for the peace’ of someone denoted working for that one’s welfare. (De 23:6) For a person to die in peace could mean his dying a tranquil death after having enjoyed a full life or the realization of a cherished hope. (Compare Ge 15:15; Lu 2:29; 1Ki 2:6.) The prophecy concerning Josiah’s ‘being gathered to his own graveyard in peace’ indicated that he would die before the foretold calamity upon Jerusalem. (2Ki 22:20; 2Ch 34:28; compare 2Ki 20:19.) At Isaiah 57:1, 2 the righteous one is depicted as entering into peace at death, thereby escaping calamity.

Acquiring Peace. Jehovah is the God of peace (1Co 14:33; 2Co 13:11; 1Th 5:23; Heb 13:20) and the Source of peace (Nu 6:26; 1Ch 22:9; Ps 4:8; 29:11; 147:14; Isa 45:7; Ro 15:33; 16:20), it being a fruit of his spirit. (Ga 5:22) For this reason true peace can be had only by those who are at peace with God. Serious transgressions put a strain on a person’s relationship with God and cause the individual to be disturbed. The psalmist said: “There is no peace in my bones on account of my sin.” (Ps 38:3) Those who desire to seek and pursue peace must therefore “turn away from what is bad, and do what is good.” (Ps 34:14) Without righteousness, there can be no peace. (Ps 72:3; 85:10; Isa 32:17) That is why the wicked cannot have peace. (Isa 48:22; 57:21; compare Isa 59:2-8.) On the other hand, peace is the possession of those who are fully devoted to Jehovah, love his law (Ps 119:165), and heed his commandments.—Isa 48:18.

When Christ Jesus was on earth, neither the natural Jews nor the non-Jews were at peace with Jehovah God. Having transgressed God’s law, the Jews had come under the curse of the Law. (Ga 3:12, 13) As for the non-Jews outside God’s covenant, they “had no hope and were without God in the world.” (Eph 2:12) However, by means of Christ Jesus both peoples were given the opportunity to come into a peaceful relationship with God. Pointing forward to this was the angelic announcement made to shepherds at Jesus’ birth: “Upon earth peace among men of goodwill.”—Lu 2:14.

The peaceful message proclaimed by Jesus and his followers appealed to ‘friends of peace,’ that is, to persons desiring to be reconciled to God. (Mt 10:13; Lu 10:5, 6; Ac 10:36) At the same time this message caused divisions in households, as some accepted it while others rejected it. (Mt 10:34; Lu 12:51) The majority of the Jews rejected the message and thus failed to discern “the things having to do with peace,” evidently including repentance and acceptance of Jesus as the Messiah. (Compare Lu 1:79; 3:3-6; Joh 1:29-34.) Their failure resulted in the destruction of Jerusalem by the Roman armies in 70 C.E.—Lu 19:42-44.

However, even the Jews who did accept “the good news of peace” were sinners and needed to have their transgressions atoned for so as to enjoy peace with Jehovah God. Jesus’ death as a ransom sacrifice cared for this need. As had been foretold: “The chastisement meant for our peace was upon him, and because of his wounds there has been a healing for us.” (Isa 53:5) Jesus’ sacrificial death on the torture stake also provided the basis for canceling the Mosaic Law, which divided the Jews from the non-Jews. Therefore, upon becoming Christians, both peoples could be at peace with God and with one another. The apostle Paul wrote: “[Jesus] is our peace, he who made the two parties one and destroyed the wall in between that fenced them off. By means of his flesh he abolished the enmity, the Law of commandments consisting in decrees, that he might create the two peoples in union with himself into one new man and make peace; and that he might fully reconcile both peoples in one body to God through the torture stake, because he had killed off the enmity by means of himself. And he came and declared the good news of peace to you, the ones far off, and peace to those near, because through him we, both peoples, have the approach to the Father by one spirit.”—Eph 2:14-18; compare Ro 2:10, 11; Col 1:20-23.

“The peace of God,” that is, the calmness and tranquillity that result from a Christian’s precious relationship with Jehovah God, guards his heart and mental powers from becoming anxious about his needs. He has the assurance that Jehovah God provides for his servants and answers their prayers. This puts his heart and mind at rest. (Php 4:6, 7) Similarly, the peace that Jesus Christ gave to his disciples, based on their faith in him as God’s Son, served to calm their hearts and minds. Although Jesus told them that the time was coming when he would no longer be with them personally, they had no reason to be concerned or to give way to fear. He was not leaving them without help but promised to send them the holy spirit.—Joh 14:26, 27; 16:33; compare Col 3:15.

The peace that Christians enjoyed was not to be taken for granted. They were to be “peaceable”; that is, they were to be peacemakers, going out of their way to establish and to maintain peace. (1Th 5:13) To preserve peace among themselves, they had to exercise care so as not to stumble fellow believers. (Ro 14:13-23) In the Sermon on the Mount, Jesus stated: “Happy are the peaceable [literally, peacemakers], since they will be called ‘sons of God.’” (Mt 5:9, ftn; compare Jas 3:18.) Christians were counseled to pursue peace and to do their utmost to be found at peace with God. (2Ti 2:22; Heb 12:14; 1Pe 3:11; 2Pe 3:14) Therefore, they had to fight against the desires of the flesh, as these would cause them to be at enmity with God. (Ro 8:6-8) The fact that remaining in a peaceful relationship with God was necessary for divine approval lends much weight to the oft-repeated prayerful expression ‘may you have peace.’—Ro 1:7; 1Co 1:3; 2Co 1:2; Ga 1:3; 6:16; Eph 1:2; 6:23; Php 1:2.

Christians also wanted others to enjoy peace. Therefore, “shod with the equipment of the good news of peace,” they carried on their spiritual warfare. (Eph 6:15) Even within the congregation they waged warfare in overturning reasonings that were out of harmony with the knowledge of God, so that these reasonings did not damage their relationship with God. (2Co 10:4, 5) However, it was not a verbal fight or quarrel, not even when correcting those who had deviated from the truth. With reference to handling cases of those who had departed from a right course, the apostle Paul counseled Timothy: “A slave of the Lord does not need to fight, but needs to be gentle toward all, qualified to teach, keeping himself restrained under evil, instructing with mildness those not favorably disposed; as perhaps God may give them repentance leading to an accurate knowledge of truth, and they may come back to their proper senses out from the snare of the Devil, seeing that they have been caught alive by him for the will of that one.”—2Ti 2:24-26.

Peaceful Rule. The Son of God, as the one to have ‘the princely rule upon his shoulder,’ is called the “Prince of Peace.” (Isa 9:6, 7) It is, therefore, noteworthy that Christ Jesus, while on earth, showed that his servants should not arm themselves for physical warfare, when saying to Peter: “Return your sword to its place, for all those who take the sword will perish by the sword.” (Mt 26:52) Figuratively speaking, those who became Christians “beat their swords into plowshares and their spears into pruning shears.” They learned war no more. (Isa 2:4) This and God’s past activities, especially in connection with Israel during Solomon’s reign, point to the peace that will prevail during Jesus’ rule as King. Regarding Solomon’s reign, the Bible reports: “Peace itself became his in every region of his, all around. And Judah and Israel continued to dwell in security, everyone under his own vine and under his own fig tree, from Dan to Beer-sheba, all the days of Solomon.” (1Ki 4:24, 25; 1Ch 22:9) As is evident from other scriptures (compare Ps 72:7, 8; Mic 4:4; Zec 9:9, 10; Mt 21:4, 5), this served as a pattern of what would take place under the rule of Christ Jesus, the One greater than Solomon, whose name comes from a root meaning “peace.”—Mt 12:42.

Peace Between Man and Animals. Jehovah God promised to the Israelites, if obedient: “I will put peace in the land, and you will indeed lie down, with no one making you tremble; and I will make the injurious wild beast cease out of the land.” (Le 26:6) This meant that the wild animals would stay within the confines of their habitat and not bring harm to the Israelites and their domestic animals. On the other hand, if the Israelites proved to be disobedient, Jehovah would allow their land to be invaded and devastated by foreign armies. As this would result in reducing the population, wild animals would multiply, penetrate formerly inhabited areas, and do injury to the survivors and their domestic animals.—Compare Ex 23:29; Le 26:22; 2Ki 17:5, 6, 24-26.

The peace promised to the Israelites in connection with the wild animals differed from that enjoyed by the first man and woman in the garden of Eden, for Adam and Eve enjoyed full dominion over the animal creation. (Ge 1:28) By contrast, in prophecy, like dominion is attributed only to Christ Jesus. (Ps 8:4-8; Heb 2:5-9) Therefore, it is under the government of Jesus Christ, “a twig out of the stump of Jesse,” or God’s “servant David,” that peace will again prevail between men and the animals. (Isa 11:1, 6-9; 65:25; Eze 34:23-25) These last cited texts have a figurative application, for it is obvious that the peace between animals, such as the wolf and the lamb, there described did not find literal fulfillment in ancient Israel. It was thus foretold that persons of harmful, beastlike disposition would cease their vicious ways and live in peace with their more docile neighbors. However, the prophetic use of the animals figuratively to portray the peaceful conditions to prevail among God’s people implies that there will also be peace among literal animals under the rule of Christ Jesus, even as there evidently was in Eden.

Friday 5 May 2017

An Extrapolation revisited II

The Nylonase Story: How Unusual Is That?
Ann Gauger

Editor’s note: Nylon is a modern synthetic product used in the manufacturing, most familiarly, of ladies’ stockings but also a range of other goods, from rope to parachutes to auto tires. Nylonase is a popular evolutionary icon, brandished by theistic evolutionist Dennis Venema among others. In a series of three posts, of which this is the second, Discovery Institute biologist Ann Gauger takes a closer look.

In an article yesterday, “The Nylonase Story: When Imagination and Facts Collide,” I described how some biologists claim that the enzyme nylonase demonstrates that it is easy to get new functional proteins. It has been proposed that nylonase is the result of a frameshift mutation that produced an entirely new coding sequence from an alternate reading frame. I showed why such a claim is false. Now I will explain what that means and something about the unusual properties of the nylB gene that caught molecular geneticist and evolutionary biologist Susumu Ohno’s attention.

What are alternate reading frames? To answer that question, I first need to provide some background information. I will begin by defining some terms I used in yesterday’s post. DNA is composed of two anti-parallel strands of nucleotides. The order of the nucleotides in each strand is what specifies the information the DNA carries. The two strands, called the sense and antisense strands, run in opposite directions. Even though their sequences are complementary, with A always paired with T, and C with G, each strand carries different potential information.

ATG GCA TGC ACC GGC ATT AG → sense
TAC CGT ACG TGG CCG TAA TC ← antisense

Before the information in DNA can be used, it must be copied into what we call messenger RNA. The sequence of one strand of DNA, usually the sense strand, is copied using the same base complementarity: G pairs with C, and A with U (U is used in place of T in RNA). We call that copying transcription. The message that has been transcribed from the DNA into that sequence of RNA is now ready to be translated into protein.
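To make the pairing and copying rules concrete, here is a minimal Python sketch (an editorial illustration, not part of the article; the function names are my own). It derives the antisense strand by base pairing and “transcribes” the sense strand into messenger RNA:

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def antisense(sense):
    # Pair each base; the resulting strand is read in the opposite direction.
    return "".join(COMPLEMENT[base] for base in sense)

def transcribe(sense):
    # Copy the sense strand into messenger RNA, with U in place of T.
    return sense.replace("T", "U")

sense = "ATGGCATGCACCGGCATTAG"   # the example sequence shown above
print(antisense(sense))          # TACCGTACGTGGCCGTAATC, running antiparallel
print(transcribe(sense))         # AUGGCAUGCACCGGCAUUAG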


Notice the language of information shot through these processes. The names for these processes were given by men fully committed to a naturalistic worldview, men such as Francis Crick and Sydney Brenner. Indeed, they were materialists one and all. Yet they saw the parallels between these processes and the human manipulation of text (language) or code (another form of language). The genetic code is the framework that determines the relationship between groups of nucleotides (codons) and the amino acids they specify. The code specifies how to translate the messenger RNA that has been copied or transcribed from the DNA, so that it can be translated into a new language, the language of proteins. Below is an illustration of the standard genetic code (source here, used with permission):




Notice that the information in DNA is read in groups of three nucleotides (each group is called a codon), and each codon specifies a particular amino acid. Sometimes more than one codon can specify the same amino acid. For example, in the top left corner, the table shows that UUU and UUC both specify the amino acid phenylalanine.

The nature of the code is such that it matters where the first codon begins — the first codon to be read establishes the codon groupings going forward. In the table above, the “start” codon is AUG (it also specifies the amino acid methionine). The sequence of codons is “read” by a cellular machine called the ribosome, which starts reading the RNA message at AUG and then proceeds three nucleotides at a time to translate the message into amino acids. In the sequence below, for example, the first codon to be read would be AUG, and that codon determines the frame in which all the other codons are read.

AUG GCA UGC ACC GGC AUU AGU
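As an aside, the ribosome’s reading rule just described can be mimicked in a few lines of Python (my own toy example, not from the article; only the handful of codons needed here are included in the table):

CODE = {
    "AUG": "Met", "GCA": "Ala", "UGC": "Cys", "ACC": "Thr",
    "GGC": "Gly", "AUU": "Ile", "AGU": "Ser",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    start = mrna.find("AUG")                  # the start codon fixes the frame
    protein = []
    for i in range(start, len(mrna) - 2, 3):
        amino = CODE.get(mrna[i:i + 3], "?")  # "?" = codon left out of this toy table
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

print(translate("AUGGCAUGCACCGGCAUUAGU"))
# ['Met', 'Ala', 'Cys', 'Thr', 'Gly', 'Ile', 'Ser']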

Now here’s where it gets interesting. Potentially, DNA can be grouped into different codons, or frames, depending on where the ribosome starts reading. See below for an illustration. For example, the sequence could be read with the groupings shown in frame one (ATG GCA etc.) or frame two (TGG CAT etc., if a proper ATG exists somewhere upstream), leading ultimately to a different amino acid sequence for each. In fact, there are six possible ways to group the DNA into codons — three frames on the sense strand going left to right (labeled 1-3), and three frames on the antisense strand (labeled 4-6), going right to left. Below I have laid out the six possible frames for the sequence we began with, but with the alternate frames staggered, and the alternate codons separated by spaces. Notice the sequence stays the same — the only thing that changes from frame to frame is how the nucleotides are grouped. It’s the same sequence, but it could be read and translated differently in each frame. This is because each codon specifies a particular amino acid. Thus, each frame results in a completely different string of amino acids.

frame 1  ATG GCA TGC ACC GGC ATT AG
frame 2   TGG CAT GCA CCG GCA TTA G
frame 3    GGC ATG CAC CGG CAT TAG

frame 4  TAC CGT ACG TGG CCG TAA TC
frame 5   ACC GTA CGT GGC CGT AAT C
frame 6    CCG TAC GTG GCC GTA ATC

The codons TAA, TAG, and TGA are stop codons — they specify where the gene ends and protein translation stops. (For extra credit, can you find any ATG or stop codons in the above frames? They are there in both the forward and reverse direction. For more extra credit, can you use the code table to translate different frames, and demonstrate that each frame encodes a different protein?)
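Readers who want to check their extra-credit answers can enumerate the six frames programmatically. In this sketch (again my own, not from the article), frames 4-6 are generated from the reverse complement, so the codon groupings match the staggered layout above only up to the choice of offset:

STOPS = {"TAA", "TAG", "TGA"}
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def codons(seq, offset):
    return [seq[i:i + 3] for i in range(offset, len(seq) - 2, 3)]

def six_frames(sense):
    # Frames 4-6 read the antisense strand in its own 5'-to-3' direction.
    anti = "".join(COMPLEMENT[b] for b in reversed(sense))
    frames = {}
    for offset in range(3):
        frames["frame %d" % (offset + 1)] = codons(sense, offset)
        frames["frame %d" % (offset + 4)] = codons(anti, offset)
    return frames

for name, frame in six_frames("ATGGCATGCACCGGCATTAG").items():
    print(name, frame,
          "| ATGs:", frame.count("ATG"),
          "| stops:", [c for c in frame if c in STOPS])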

So when Venema and others say that nylonase arose by a frameshift mutation that produced a novel protein 392 amino acids long, they are claiming that a completely new coding sequence with frame-shifted codons could generate a functional protein. How likely is that? Not very, given the rarity of functional proteins in sequence space (see my first post). And, as I have already shown in my first post, such an unlikely hypothesis is unnecessary. The nylB gene appears to be the product of a simple gene duplication followed by two stepwise mutations to increase nylonase activity.

There is something special about the nylonase gene’s sequence, though: something very odd. nylB has multiple large, overlapping (alternate) open frames that lack stop codons.

How hard is it to get a gene with multiple reading frames?

Let me explain. Roughly one in twenty codons are stop codons. A random DNA sequence will have stop codons about every sixty bases, and may or may not have a start codon. Usually the alternate frames of DNA sequences are interrupted by stop codons. Only the frame that actually specifies the correct gene will have no stop codons at all over a significant length. This system is actually very ingenious. The one frame that needs to be read and translated is identified by an ATG. The other frames will usually lack an ATG and/or will have several stop codons that interrupt their translation, thus preventing the cell from wasting energy on nonsense transcripts.
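The “one in twenty” figure is easy to verify: 3 of the 64 possible codons are stops, so in random DNA with equal base frequencies a stop should turn up about every 64/3 codons, or roughly every 64 bases. A quick empirical check (an editorial sketch in Python, not from the article):

import random

random.seed(1)
seq = "".join(random.choice("ACGT") for _ in range(300_000))
codon_list = [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]
n_stops = sum(c in {"TAA", "TAG", "TGA"} for c in codon_list)

print(n_stops / len(codon_list))      # ~0.047, i.e. roughly 1 codon in 21
print(3 * len(codon_list) / n_stops)  # mean spacing between stops, ~64 bases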

According to the nylonase story, as told by Ohno and Venema and numerous others, a new ATG start codon was formed just after the original ATG by the insertion of a T between an A and a G; this shifted the reading frame for that sequence to the one specified by the new ATG, producing a completely different coding sequence and thus a new protein. Let us grant that scenario for the sake of argument. Normally such a shift would produce a new coding sequence that would be interrupted by stop codons, so the newly frameshifted protein would be truncated. Thus the only reason this frameshift hypothesis for nylonase is even remotely possible is that the sequence coding for nylonase is most unusual, and contains not one, not two, but three open frames. Although frameshift mutations are ordinarily considered to be quite disruptive, at least in this case the putative brand new protein sequence would not terminate early due to stop codons.

My point? The first step to getting a new functional protein of any length from a frameshift is to avoid stop codons. The odds of a random coding sequence having an open alternate frame, without stops, are poor. As a consequence, if a protein does have an open frame in addition to its coding sequence, it’s worth paying attention to. And it so happens that nylonase does have more than one open frame. The DNA sequence above illustrates the six frames, numbered 1 through 6. Using that convention, frames 1 and 3 are read from the sense strand; both have no stop codons over the length of the gene in the sense direction. Frame 4, on the antisense strand, has no stop codons either. Frame 1 is the coding frame that specifies the nylonase protein, otherwise known as the open reading frame (ORF). It is defined by the presence of both a start and stop codon. The other two frames have no start codons or stop codons, so I’ll call them non-stop frames (NSFs). They are frames 3 and 4.

The probability of a DNA sequence with an ORF on the sense strand and 2 NSFs is very small. Exactly how small are the chances of avoiding a stop codon in three out of six frames? We set out to determine that by performing a numerical simulation, using pseudorandom numbers to generate sequences at various levels of GC content. (By “we” I mean that my husband, Patrick Achey, who is an actuary, did the programming work, while I determined the parameters.) We chose to vary the GC content because sequences with a higher GC content have fewer stop codons. Remember, a stop codon always has an A and a T (TAA, TAG, and TGA are the stop codons), so having a sequence with a lower percentage of AT content will reduce the frequency of stop codons. Conversely, higher GC content makes the chances of avoiding stop codons and getting longer ORFs much greater, thus also increasing the chances of NSFs. The genomes of bacteria vary in their GC content, from less than 20 percent to as much as 75 percent, though the reason why is not known. One species of Flavobacterium has a genome with about 32 percent GC and 2,400 genes — the precise values vary with the strain. The plasmid on which nylB resides is very different. It has 65 percent GC content. The gene encoding nylonase has an even higher 70 percent GC content, which is near the observed bacterial maximum of 75 percent.

We chose to use a target ORF size of 900 nucleotides (or 300 amino acids) because it is an average size for a functional protein. Nylonase is 392 amino acids long; the small domain of beta lactamase, the enzyme my colleague Doug Axe studied, is about 150 amino acids long. The median length for an E. coli protein is 278 amino acids; for humans, the median length is 375.
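The article does not reproduce the actual program (written, per the text above, by Patrick Achey), so what follows is a stripped-down re-creation of the idea in Python (my own sketch; the real code and its ORF bookkeeping surely differ). It draws random 900-base sequences at a chosen GC content, asks how often frame 1 is free of stop codons over the whole length, and, when it is, how many of the other five frames are also stop-free:

import random

STOPS = {"TAA", "TAG", "TGA"}
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def random_seq(n, gc):
    # A and T each get probability (1 - gc) / 2; G and C each get gc / 2.
    weights = [(1 - gc) / 2, gc / 2, gc / 2, (1 - gc) / 2]  # A, C, G, T
    return "".join(random.choices("ACGT", weights=weights, k=n))

def stop_free(seq, offset):
    return all(seq[i:i + 3] not in STOPS
               for i in range(offset, len(seq) - 2, 3))

def frame_status(sense):
    anti = "".join(COMPLEMENT[b] for b in reversed(sense))
    # Index 0 is frame 1; the rest are the five alternate frames.
    return [stop_free(strand, off) for strand in (sense, anti) for off in range(3)]

random.seed(0)
trials, n_open, n_two_nsfs = 100_000, 0, 0   # the authors ran millions of trials
for _ in range(trials):
    frames = frame_status(random_seq(900, gc=0.70))
    if frames[0]:                    # frame 1 open over all 900 bases
        n_open += 1
        if sum(frames[1:]) >= 2:     # at least two additional stop-free frames
            n_two_nsfs += 1

print(n_open / trials)   # ~0.003 at 70 percent GC, as reported below
print(n_two_nsfs)        # usually 0 in a run this small; 2-NSF sequences are rare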

As expected, the simulation showed that the higher the GC content, the greater the likelihood that ORFs that are 900+ nucleotides long exist. At 50 percent GC, the average ORF length we obtained was about 60 nucleotides; most ORFs terminate well before 900 nucleotides. Indeed, in our simulation only two out of a million random sequences made it to 900 nucleotides before encountering a stop codon. As a result, we could not determine the rarity of NSFs at 50 percent GC — we would probably have to run the simulation for more than a billion trials to get any significant number of NSFs at all.

Sequences at 60 percent GC gave 57 ORFs at least 900 nucleotides long out of a million trials, while sequences at 65 percent GC produced 404 out of a million, one of which also had an NSF.

NSFs were much more probable for sequences that were 70 percent GC, like nylB. In our simulation 3,021 out of a million trials were ORFs at least 900 nucleotides long. That’s a frequency of 0.3 percent. Of those 3,021 ORFs, 86 had 1 NSF, and none had 2 NSFs. We had to run 10 million trials at 70 percent GC to see any ORFs with 2 NSFs. From those 10 million randomly generated sequences, we obtained 28,603 ORFs; 903 had 1 NSF and only 9 had 2 NSFs.
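These simulation figures can be cross-checked analytically (my own back-of-envelope arithmetic, not from the article). At GC fraction g, each of A and T has probability (1 - g)/2 and each of G and C has probability g/2; that fixes the chance that a random codon is a stop, and raising its complement to the 300th power gives the chance that 300 consecutive codons avoid stops:

def p_stop(g):
    at = (1 - g) / 2                   # P(A) = P(T)
    gc = g / 2                         # P(G) = P(C)
    return at**3 + 2 * at * at * gc    # P(TAA) + P(TAG) + P(TGA)

for g in (0.50, 0.60, 0.65, 0.70):
    p_open = (1 - p_stop(g)) ** 300    # 900 nucleotides = 300 codons
    print("GC %.0f%%: open-frame probability ~ %.1e" % (100 * g, p_open))

# GC 50%: ~5.6e-07   (cf. ~2 per million in the simulation)
# GC 60%: ~5.8e-05   (cf. 57 per million)
# GC 65%: ~4.6e-04   (cf. 404 per million)
# GC 70%: ~3.0e-03   (cf. 3,021 per million)

The agreement with the reported counts is close; the small remaining differences plausibly reflect the simulation's fuller ORF definition (a start codon plus a terminating stop).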

Interestingly, at 80 percent GC we got a few sequences with 4 NSFs; but I don’t know of any bacterium with a GC content that high.

Our simulation shows that multiple NSFs are very rare. The probability that an ORF 900 nucleotides long with 70 percent GC content will have two NSFs is 9 out of 28,603, or 0.0003. If these figures are recast to include the total number of trials required to get an ORF of that length and GC content and with 2 NSFs, the probability would be 9 out of 10,000,000 trials.

A sequence like nylB is very rare. In fact, I suspect that for all cases where overlapping genes exist, in other words where alternate frames from the same sequence have the potential to code for different proteins, unusual sequences will necessarily be found. Likely they will be high in GC content. Could such rare sequences be accidental? I think that if we compare the expected number of alternate or overlapping NSFs per ORF with the actual number, we will find that there are more of these alternate open reading frames than would be predicted by chance.

From another study of overlapping genes:

Thus, bacterial genomes contain a larger number of long shadow ORFs [ORFs on alternate frames] than expected based on statistical analysis. Random mutational drift would have eliminated the signal long ago, if no selection pressures were stabilizing shadow ORFs. Deviations between the statistical model and bacterial genomes directly call for a functional explanation, since selection is the only force known to stabilize the depletion of stop codons. Most shadow genes have escaped discovery, as they are dismissed as false positives in most genome annotation programs. This is in sharp contrast to many embedded overlapping genes that have been discovered in bacteriophages. Since phages reside in a long term evolutionary equilibrium with the bacterial host genome, we suggest that overlooked shadow genes also exist in bacterial genomes.
Indeed, a study of the pOAD2 plasmid from which nylB came indicates that there are potentially many overlapping genes on that plasmid. nylB′, for example, a homologous gene on the same plasmid that differs by 47 amino acids from nylB, also has 2 NSFs. These unusual and unexpected features of DNA have consequences for how we think about the origin of information in DNA sequences, as I shall discuss in the next post.

An extrapolation revisited.

The Nylonase Story: When Imagination and Facts Collide
Ann Gauger

Editor’s note: Nylon is a modern synthetic product used in the manufacturing, most familiarly, of ladies’ stockings but also a range of other goods, from rope to parachutes to auto tires. Nylonase is a popular evolutionary icon, brandished by theistic evolutionist Dennis Venema among others. In a series of three posts, Discovery Institute biologist Ann Gauger takes a closer look.

A significant problem for the neo-Darwinian story is the origin of new biological information. Clearly, information has increased over the course of life’s history — new life forms appeared, requiring new genes, proteins, and other functional information. The question is — how did it happen? This is the central question concerning the origin of living things.

Stephen Meyer and Douglas Axe have made this strong claim:

[T]he neo-Darwinian mechanism — with its reliance on a random mutational search to generate novel gene sequences — is not an adequate mechanism to produce the information necessary for even a single new protein fold, let alone a novel animal form, in available evolutionary deep time.
Their claim is based on the experimental finding by Doug Axe that functional protein folds are exceedingly rare, on the order of 1 in 10 to the 77th power, meaning that all the creatures of the Earth, searching by random mutation for the entire age of the Earth, could not find even one medium-size protein fold.

In contrast, Dennis Venema, professor of biology at Trinity Western University, claims in his book Adam and the Genome and in posts at the BioLogos website that getting new information is not hard. In his book, he presents several examples he thinks demonstrate the appearance of new information — the apparent evolution of new protein binding sites, for example. But the best way to reveal Axe and Meyer’s folly, he thinks (and he says so in his book and in a post at BioLogos), would be to show that a genuinely “new” protein can evolve.

…[E]ven more convincing… would be an actual example of a functional protein coming into existence from scratch — catching a novel protein forming “in the act” as it were. We know of such an example — the formation of an enzyme that breaks down a man-made chemical.

In the 1970s, scientists made a surprising discovery: a bacterium that can digest nylon, a synthetic chemical not found in nature. These bacteria were living in the wastewater ponds of chemical factories, and they were able to use nylon as their only source of food. Nylon, however, was only about 40 years old at the time — how had these bacteria adapted to this novel chemical in their environment so quickly? Intrigued, the scientists investigated. What they discovered was that the bacteria had an enzyme (which they called “nylonase”) that effectively digested the chemical. This enzyme, interestingly, arose from scratch as an insertion mutation into the coding sequence of another gene. This insertion simultaneously formed a “stop” codon early in the original gene (a codon that tells the ribosome to stop adding amino acids to a protein) and formed a brand new “start” codon in a different reading frame. The new reading frame ran for 392 amino acids before the first “stop” codon, producing a large, novel protein. As in our example above, this new protein was based on different codons due to the frameshift. It was truly “de novo” — a new sequence.
Venema is right. If the nylonase enzyme did evolve from a frameshifted protein, it would genuinely be a demonstration that new proteins are easy to evolve. It would be proof positive that intelligent design advocates are wrong, that it’s not hard to get a new protein from random sequence. But the story bears reexamining. Is the new protein really the product of a frameshift, or did it pre-exist the introduction of nylon into the environment? What exactly do we know about this enzyme? Does the evidence substantiate the claims of Venema and others, or does it lead to other conclusions?

First, some history. In the 1970s Japanese scientists discovered that certain bacteria had developed the ability to degrade the synthetic polymer nylon. Okada et al. identified three enzymes responsible for nylon degradation, and named them EI, EII, and EIII. The genes that encoded them were named nylA, nylB, and nylC. They sequenced the plasmid on which the genes were found, and discovered that there was another gene on the same plasmid that was very similar to nylB; they named it nylB′. (We will focus on the story of nylB and nylB′ because they are the ones relevant to Venema’s story.)

So far all I have given you are the facts. Now here’s the interpretation of these facts. Some claimed that the nylonase enzyme, as it was called, had originated some time after people began making nylon (in the 1930s). That seemed plausible because nylonase was unable to degrade naturally occurring amide bonds — it could degrade only the amide bonds in nylon — and so had not existed previously, it was thought. The popular conclusion was that the nylonase activity evolved in response to the presence of nylon in the environment, and thus was only forty years old. And here’s the big interpretive leap: it must not be hard to get new enzymes if a new one can evolve within a period of forty years.

Okada et al. had sequenced the genes encoding nylB and nylB′. They concluded that the nylonase activity was the result of a gene duplication followed by several mutations to the nylB gene. But at this point Susumu Ohno, an eminent molecular geneticist and evolutionary biologist, noticed something unusual about the nylB gene sequence (Ohno, 1984). Ohno had a theory that DNA with repeats of the right kind had the potential to code for protein in multiple frames, with no interrupting stop codons, and might thus be a source for “new” proteins. (If you are unfamiliar with the terms I just used, I invite you to take a look at my post tomorrow, where I will explain the necessary concepts. For those already familiar, I present some relevant data concerning the rarity of sequences that can be frameshifted.)

Ohno noticed that nylB, the gene for nylonase, might originally have encoded something else if a certain T was removed. The nylonase gene as it exists now has 1179 bases, which encode a 392 amino acid protein. Without a particular T embedded in the ATG start codon, though, the sequence would have specified a hypothetical original gene with a longer open reading frame (ORF) of 427 amino acids, in a different frame. Thus, Ohno proposed a “new” protein with a new function acting on a new substrate was born when a T inserted in between a particular A and G in the DNA, making a new ATG start codon and shifting the frame to code for a new protein, the protein we now call nylonase.
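Ohno's proposed event is easy to mimic with a toy sequence (invented purely for illustration; this is not the real nylB sequence). Inserting a T between an adjacent A and G manufactures a new ATG, and everything downstream is then read in a shifted frame:

original = "CCAGCGCCGAAACGA"   # invented pre-insertion sequence, no ATG anywhere
i = original.find("AG")        # insert a T between this A and G
mutated = original[:i + 1] + "T" + original[i + 1:]
print(mutated)                 # CCATGCGCCGAAACGA, with a brand-new ATG

start = mutated.find("ATG")
print([mutated[j:j + 3] for j in range(start, len(mutated) - 2, 3)])
# ['ATG', 'CGC', 'CGA', 'AAC'], a reading frame that did not exist before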

Ingenious. According to Ohno, nylonase could be a new enzyme, appearing suddenly with no known precursors via a sudden frameshift. (Note that all of this assumes that new protein folds are easy to get.) Ohno published this hypothesis in the Proceedings of the National Academy of Sciences. It was a hypothesis only, however, as a careful reading of his paper shows. One heading, for example:

R-IIA Coding Sequence [nylB] for 6-AHA LOH [nylonase] Embodies an Alternative, Longer Open Reading Frame That Might Have Been the Original Coding Sequence [Emphasis added.]
and the text says:

I suggest that the RS-IIA base sequence [nylB] was originally a coding sequence for an arginine-rich polypeptide chain 427 or so residues long in its length and that the coding sequence for one of the two isozymic forms of 6-ALA LOH [nylonase] arose from its alternative open reading frame. [Emphasis added.]
Ohno presented arguments for why his suggestion was plausible, but did not provide evidence that the “original” gene ever existed or was used (in fact he says it was unlikely to be useful based on its amino acid composition), or that the insertion ever happened. Nonetheless, the frame-shift hypothesis for the origin of nylonase has been widely proclaimed as fact (though, notably, not by Okada et al. who have done most of the work).

If the nylonase story as told above were true, namely that a frameshift mutation resulted in the de novo generation of a new protein fold with a new function, it would indeed constitute a substantial refutation of Meyer and Axe’s claim. If a frame-shift mutation can produce a random new open reading frame in real, observable time, and give rise to a new functional enzyme, then it must not be that hard to make new functional protein folds. In other words, functional protein folds must not be rare in sequence space. And therefore Stephen Meyer’s arguments about the difficulty of getting enough new biological information to generate a new fold must be wrong as well. Venema flatly asserts:

If de novo protein-coding genes such as nylonase can come into being from scratch, as it were, then it is demonstrably the case that new protein folds can be formed by evolutionary mechanisms without difficulty….[I]f Meyer had understood de novo gene formation — as we have seen, he mistakenly thought it was an unexplained process — he would have known that new protein folds could indeed be easily developed by evolutionary processes.
Slam dunk, right?

A little caution in accepting this story without hard evidence would be wise. In genetics we are taught that frameshift mutations are extremely disruptive, completely changing the coding sequence downstream of the mutation and typically reducing it to truncated nonsense. A biologist’s basic intuition should be that frameshifts are highly unlikely to produce something useful. The only reasons for the widespread acceptance of Ohno’s hypothesis that I can come up with are the unusual character of the sequence itself, Ohno’s reputation as a brilliant scientist (which he was), and wish-fulfillment on the part of some evolutionary biologists.

Fortunately, science marches on, and evidence continues to accumulate. The same group of Japanese scientists continued their study of the nylonase genes. nylB appeared to be the result of a gene duplication of nylB′ that occurred some time ago. EII′ (the enzyme encoded by nylB′) had very little nylonase activity, while EII (the enzyme encoded by nylB) was about 1,000-fold higher in activity. The two enzymes differed in amino acid sequence at 47 positions out of 392. With some painstaking work, the Japanese determined that just two mutations were sufficient to convert EII′ to the EII level of activity.

They then obtained the three-dimensional structure of an EII-EII′ hybrid protein. And with those results everything changed — or should have.

Here’s what Venema takes from the paper, and how he interprets the evidence:

…the three-dimensional structure of the protein has been solved using X-ray crystallography, a method that gives us the precise shape of the protein at high resolution. Nylonase is chock full of protein folds— exactly the sort of folds Meyer claims must be the result of design because evolution could not have produced them even with all the time since the origin of life. [Emphasis added.]
Unfortunately, Venema doesn’t have the story straight. Nylonase has a particular fold, a particular three-dimensional, stable shape. Most proteins have a distinct fold — there are several thousand kinds of folds known so far, each with a distinct topology and structure. Folds are typically made up of small secondary structures called alpha helices and beta strands, which help to assemble the tertiary structure — the fold as a whole. Venema seems unclear about what a protein fold is, and the distinction between secondary and tertiary structures. Nylonase is not “chock full of folds.” No structural biologist would describe nylonase as “chock full of protein folds.” Indeed, no protein is “chock full of folds.” Perhaps Venema was referring to the smaller units of secondary structure I mentioned above, the alpha helices or beta strands. But it would appear he doesn’t know what a protein fold is.

Maybe that explains why Venema missed the essential point of the paper describing nylonase’s structure. The crystal structure of EII-EII′ (a nylonase hybrid necessary to be able to crystallize the protein) revealed that it is not a new kind of fold, but a member of the beta-lactamase fold family. More specifically, it resembles carboxylesterases, a subgrouping of that family. In addition, when the scientists checked EII′ and EII, they found that both enzymes had previously undetected carboxylesterase activity. In other words, the EII′ and EII enzymes were carboxylesterases. If it looks like a duck and quacks like a duck, it is a duck.

Thus, EII′ and EII did not have frameshifted new folds. They had pre-existing folds with activity characteristic of their fold type. There was no brand-new protein. No novel protein fold had emerged. And no frameshift mutation was required to produce nylonase.

Where did the nylon-eating ability come from? Carboxylesterases are enzymes with broad substrate specificities; they can carry out a variety of reactions. Their binding pocket is large and can accommodate a lot of different substrates. They are “promiscuous” enzymes, in other words. Furthermore, the carboxylesterase reaction hydrolyzes a chemical bond similar to the one hydrolyzed by nylonase. Tests revealed that both the EII and EII′ enzymes have carboxylesterase and nylonase activity. They can hydrolyze both substrates. In fact it is possible both had carboxylesterase activity and a low level of nylonase activity from the beginning, even before the appearance of nylon.

nylB′ may be the original gene from which nylB came. Apparently there was a gene duplication at some point in the past. The two genes appear to have acquired mutations since then — they differ by 47 amino acids out of 392. The time of that duplication is unknown, but not recent, because it takes time to accumulate that many mutations. However, at least some of those mutations must confer a high level of nylonase activity on EII, the enzyme made by nylB. The enzyme EII′ made by nylB′ has only a low ability to degrade nylon, while EII degrades nylon 1,000-fold better. So one or more of those 47 amino acid differences must be the cause of the high level of nylonase activity in EII. Through careful work, the Japanese workers Kato et al. identified which amino acid changes were responsible for the increased nylonase activity. Just two stepwise mutations present in EII, when introduced into EII′, could convert the weak enzyme EII′ to full nylonase activity.

From Kato et al. (1991):

Our studies demonstrated that among the 47 amino acids altered between the EII and EII’ proteins, a single amino acid substitution at position 181 was essential for the activity of 6-aminohexanoate-dimer hydrolase [nylonase] and substitution at position 266 enhanced the effect.
So. This is not the story of a highly improbable frame-shift producing a new functional enzyme. This is the story of a pre-existing enzyme with a low level of promiscuous nylonase activity, which improved its activity toward nylon by first one, then another selectable mutation. In other words this is a completely plausible case of gene duplication, mutation, and selection operating on a pre-existing enzyme to improve a pre-existing low-level activity, exactly the kind of event that Meyer and Axe specifically acknowledge as a possibility, given the time and probabilistic resources available. Indeed, the origin of nylonase actually provides a nice example of the optimization of a pre-existing fold’s function, not the innovation or creation of a novel fold.

As the scientists who carried out the structural determination for nylonase themselves note:

Here, we propose that amino acid replacements in the catalytic cleft of a preexisting esterase with the beta-lactamase fold resulted in the evolution of the nylon oligomer hydrolase. [Emphasis added.]
Let’s put to bed the fable that the nylon oligomer hydrolase EII, colloquially known as nylonase, arose by a frame-shift mutation, leading to the creation of a new functional protein fold. There is absolutely no need to postulate such a highly improbable event, and no justification for making this extravagant claim. Instead, there is a much more parsimonious explanation — that nylonase arose by a gene duplication event some time in the past, followed by a series of two mutations occurring after the introduction of nylon into the environment, which increased the nylon oligomer hydrolase activity of the nylB gene product to current levels. Could this series of events happen in forty years? Most certainly. Probably in much less time. In fact, it has been reported to happen in the lab under the right selective conditions. And most definitely, the evolution of nylonase does not call for the creation of a novel protein fold, nor did one arise. EII’s fold is part of the carboxylesterase fold family. Carboxylesterases serve many functions and have been around much longer than forty years.


Douglas Axe and Stephen Meyer readily admit that this kind of evolutionary adaptation happens easily. A protein that already has a low level of activity for a particular substrate can be mutated to favor that side reaction over its original one, often in just a few steps. There are many cases of this in the literature. What Axe and Meyer do claim is that generating an entirely new protein fold via mutation and selection is implausible in the extreme. Nothing in the nylonase story that Dennis Venema tells shows otherwise.

Why attempting to design the undesignable remains a fool's errand.

Why Evolution Simulations Fail: Author of Evolutionary Informatics Book Explains
Evolution News @DiscoveryCSC

If you search for the phrase “evolution simulation” in Google, you’ll get many hits. Come to think of it, computer evolution simulations are an evolutionary icon. What of them? Do they falsify the claims of intelligent design theory? On a new episode of ID the Future, Ray Bohlin takes up the issue with Dr. Winston Ewert, co-author with William Dembski and Robert Marks II of a new book, An Introduction to Evolutionary Informatics.

Ewert argues that Richard Dawkins’s “Methinks It Is Like a Weasel” simulation doesn’t prove biological evolution and isn’t even very interesting. Ewert says there are some interesting computer evolution simulations, but he explains that they fail to model anything biologically realistic.

Instead they set up a straw man version of intelligent design, and simultaneously sneak teleology in, which kind of defeats the purpose. Download the podcast here, or listen to it here.

Dr. Ewert’s book is getting raves from some impressive scientists, including star mathematician Gregory Chaitin, author of Proving Darwin: Making Biology Mathematical. He calls the book “An honest attempt to discuss what few people seem to realize is an important problem.”

Speaking of Chaitin, Dr. Bijan Nemati of the Jet Propulsion Laboratory and Caltech says:

With penetrating brilliance, and with a masterful exercise of pedagogy and wit, the authors take on Chaitin’s challenge, that Darwin’s theory should be subjectable to a mathematical assessment and either pass or fail. Surveying over seven decades of development in algorithmics and information theory, they make a compelling case that it fails.
Congratulations, Dr. Ewert, Dr. Marks, and Dr. Dembski! Get your copy now.

Wednesday 3 May 2017

Wanted: a theory of devolution.

Crime and Punishment, and Darwin's Theory


On the Origin of Species passed into history a hundred and fifty-seven years ago. In the years that followed 1859, the impact of evolutionary thinking seeped across the culture of Europe and America. For years to come, we'll be tracing a series of century-and-a-half anniversaries of the effects of that seepage, and reflections on it as it was happening. This year, among other things, it's the publication of Dostoyevsky's Crime and Punishment (1866).
In The New Criterion, Gary Saul Morson writes on "The disease of theory: 'Crime & Punishment' at 150." By "disease of theory" he means something recognizable from our contemporary culture:
The decade after [Tsar Alexander II] ascended the throne witnessed the birth of the "intelligentsia," a word we get from Russian, where it meant not well-educated people but a group sharing a set of radical beliefs, including atheism, materialism, revolutionism, and some form of socialism. Intelligents (members of the intelligentsia) were expected to identify not as members of a profession or social class but with each other. They expressed disdain for everyday virtues and placed their faith entirely in one or another theory. Lenin, Trotsky, and Stalin were typical intelligents....
The intelligentsia prided itself on ideas discrediting all traditional morality. Utilitarianism suggested that people do, and should do, nothing but maximize pleasure. Darwin's Origin of Species, which took Russia by storm, seemed to reduce people to biological specimens. In 1862 the Russian neurologist Ivan Sechenov published his Reflexes of the Brain, which argued that all so-called free choice is merely "reflex movements in the strict sense of the word." And it was common to quote the physiologist Jacob Moleschott's remark that the mind secretes thought the way the liver secretes bile. These ideas all seemed to converge on revolutionary violence.
The hero of Crime and Punishment, Rodion Raskolnikov, discusses disturbances then in progress, including the radicals' revolutionary proclamations and a series of fires they may have set. But by nature he is no bloodthirsty killer. Quite the contrary, he has an immensely soft heart and is tortured by the sight of human suffering, which he cannot and refuses to get used to. "Man gets used to everything, the scoundrel!" he mutters, but then immediately embraces the opposite position: "And what if I'm wrong . . . what if man is not really a scoundrel . . . then all the rest is prejudice, simply artificial terrors and there are no barriers and it's all as it should be."...He means that man cannot be a "scoundrel" because that is a moral category, and morality is simply "artificial terrors" imposed by religion and sheer "prejudice." There is only nature, and nature has causes, not moral purposes. It follows that all is as it should be because if moral concepts are illusions then things just are what they are. [Emphasis added.]
More:
The questions this masterpiece poses still haunt us, perhaps even more than when it first appeared. Revolution still attracts. "New atheists" and stale materialists advance arguments that were crude a hundred fifty years ago. Social scientists describe human decisions in absurdly simplistic terms. Our intelligentsia entertains theory after theory elevating them above the ordinary people they would control. Morality is explained away neurologically, sociobiologically, or as mere social convention.
My goodness, since Dostoyevsky documented the toxin of "theories," how little has changed. 
Except that 150 years ago there were still abundant great men in defense of the view opposite to materialism, while our own contemporaries, even the ones with their heart in the right place, seem increasingly diminutive in stature. The difference made in just a couple of decades -- a mere generation, the passage from father to son -- is remarkable. Unthinking surrender to the most prestigious theories, or evading serious confrontation with them, is now the order of the day. What we need is a theory not of evolution but of devolution.

Tuesday 2 May 2017

Insect navigator from down under Vs. Darwinism

A Monarch-Like Wonder from Mountains Down Under
Evolution News & Views

There's a little gray moth in Australia that does something extraordinary. Like the Monarch butterfly of North America, it migrates over long distances. Unlike the Monarch, it flies at night. And it doesn't even need to.

Current Biology describes this dull-colored little wonder, called the Bogong moth, as the "nocturnal counterpart of the migratory Monarch butterfly." Its summer home is as amazing as the mountain forests of Mexico where the Monarchs were discovered.

If you ever have the chance of hiking the Australian Alps in summer, you will find an ancient and beautiful mountain range. The grassy, treeless peaks, polished aeons ago by glaciers, are littered with countless granite boulders of all shapes and sizes. If you are not claustrophobic and dare to climb into one of the crevices formed by these rocky ensembles, your breath will be taken away, first by the dense clouds of ultra-fine, silvery dust drawn to your face by swift air currents channelled through the rock chimneys, and then by the sight of the source of the dust: hundreds of thousands of Bogong moths, neatly tiling the cave walls. In fact, there are about 17,000 of them per square meter, but you will only find them by chance if you are very lucky. This is because we only know of a handful of such caves, and the moths are present there only for four months during the height of the Australian summer. [Emphasis added.]

These moths were a source of food for aboriginal people, who found them in the mountain plains each summer. It took scientists more recent study to discover the rest of their "remarkable and interesting" tale: that they migrate a thousand kilometers from southern Queensland to these mountain caves each year. Here's how they outperform the Monarchs as navigators:

All this makes the Bogong moth, in many respects, similar to the iconic North American Monarch butterfly Danaus plexippus, except that it is a night-active species and therefore cannot use the sun for orientation. And unlike the Monarch butterfly, where the full forward and reverse migrations are performed by several generations, individual Bogong moths perform both migrations. If you think of the Monarch butterfly as the King of insect migration, the Bogong moth is certainly insect migration's Dark Lord.

Scientists don't know how they find their way without sunlight. Monarchs are known to use the polarization of light as it changes throughout the day; in fact, our neighbors at the University of Washington believe they have figured out the secret of the Monarchs' internal compass at long last. But Bogongs have only the moon, the stars, and the earth's magnetic field to provide cues. While these might guide them in the basic direction, what leads them specifically to the caves?

The Bogong moth's journey can thus be divided into a long-distance part and a final travel segment that lets them locate their specific target site. As the two parts operate on very different spatial scales, the mechanisms employed, and the information used, are likely not identical. To find their caves, Bogong moths might, for example, use their sense of smell and be attracted to the carcasses of those family members that were not fit enough for last year's return trip.

Let's see if authors Stanley Heinze and Eric Warrant can provide a Darwinian explanation. "Given the lengthy, difficult, and often lethal journey, there must be substantial selection pressure driving these animals along their migratory cycle," evolutionary theory would expect. "Nevertheless, and again similar to the Monarch butterfly, not all populations of Bogong moths are migratory." In fact, they say, there are non-migrating populations of Bogongs at both ends of the route and in other places. This defies evolutionary expectations so clearly that the authors never return to the question of what selection pressures might possibly create this remarkable behavior. They only mention additional examples of insects with mixed populations of migrants and non-migrants, confessing that "the migratory movements of these species are either erratic or poorly understood."

The performance of the little night-flying Bogong moth is enough, by contrast, to generate rhapsodic praise:

Bogong moths pinpoint a tiny mountain cave from over a thousand kilometres away, crossing terrain they have never crossed previously, and locating a place they have never been to before. Moreover, they do all this at night, fuelled by a few drops of nectar and using a brain the size of a grain of rice. Don't even ask an engineer if they could build a robot equivalent! To achieve this remarkable behaviour, the moth brain has to integrate sensory information from multiple sources and compute its current heading relative to an internal compass. It then has to compare that heading to its desired migratory direction and translate any mismatch into compensatory steering commands, while maintaining stable flight in very dim light while buffeted by cold turbulent winds.

On top of all that, the moth has to switch all its computations to the opposite direction come autumn, and reverse all its learned behaviors. "Its simple nervous system and its fixed, reproducible behaviour stand in stark contrast to the complexity of the problem that the Bogong moth must solve."

Here's where intelligent design can make a contribution. Because science can employ "electrophysiology, neuroanatomy, and behavioural analysis" to study this stable population, we can bestow upon these insects a better reputation than accidental products of blind selection pressures. We can, instead, reverse engineer the software of the neural circuits that underlie "nocturnal vision, sensory integration, motor control, action selection and state-dependent changes of behaviour." As a result of design thinking, we might even be able to apply the knowledge gained to our own designed systems.

Seeing in the Dark

A related paper in Current Biology examines the question of how moths see to fly at night. Here's the upshot:

A new study shows that moth vision trades speed and resolution for contrast sensitivity at night. These remarkable neural adaptations take place in the higher-order neurons of the hawkmoth motion vision pathway and allow the insects to see during night flights.

Author Petri Ala-Laurila waxes eloquent about the difficulty of operating in the dark.

Seeing under very dim light poses a formidable challenge for the visual system. In these conditions, visual signals originating in a small number of photoreceptor cells have to be detected against neural noise originating in a much larger number of such cells, as well as in the neural circuitry processing these sparse signals. The randomness of rare photon arrivals makes it even harder to form reliable visual percepts in dim light. Yet many species show remarkable visual capabilities at extremely low light levels.

We are on that list; "dark-adapted humans can detect just a few light quanta absorbed on a small region of the peripheral retina." Nevertheless, hawkmoths are experts at deriving the most from the least, along with cockroaches, dung beetles, toads and Central American sweat bees. "In all of these cases, the striking behavioral performance of animals in dim light exceeds that of individual receptor cells at their visual inputs by orders of magnitude."

The secret, the author explains, is in the processing. Take what you have, pool it, and boost it. Summing the inputs adds clarity over time. "In our own retina, rod photoreceptors used mainly at low light levels have a longer integration time than cone photoreceptors that we use in daytime," Ala-Laurila says. "This is one example of receptor-level temporal summation."
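To see why summing helps, consider a crude simulation. Photon arrivals are random (Poisson distributed), so when counts are summed over a longer window, the mean grows linearly while the spread grows only as its square root, and the signal-to-noise ratio improves roughly as the square root of the integration time. This is a back-of-the-envelope sketch; the rates below are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)
SIGNAL_RATE = 2.0  # mean scene photons per millisecond (invented)
NOISE_RATE = 1.5   # mean noise events per millisecond (invented)

def snr_after_summation(window_ms, trials=10_000):
    # Empirical signal-to-noise ratio of counts summed over one window.
    signal = rng.poisson(SIGNAL_RATE * window_ms, trials)
    noise = rng.poisson(NOISE_RATE * window_ms, trials)
    excess = signal - noise
    return excess.mean() / excess.std()

for window in (1, 10, 100):
    print(window, "ms window: SNR ~", round(snr_after_summation(window), 2))

Running this shows the signal-to-noise ratio climbing roughly tenfold as the window stretches from 1 ms to 100 ms, which is the payoff of temporal summation.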

There are tradeoffs, however; "Unfortunately, there is no free lunch -- especially not in biology." Pooling and boosting add noise, lower resolution, and take longer to compute. Imagine a moth flying in the dark, darting rapidly to avoid predators. You would think it needs a high-speed visual computer to do what it does. "Balancing sensitivity against acuity and speed is a trade-off problem where the optimal solution depends on light level and motion velocity." Remember back when we talked about optimization as an example of intelligent design science in action?

Ala-Laurila points to a study that quantified the amount of summation going on in the hawkmoth's brain and eyes. How the scientists measured that is quite a trick, but they found that the summation circuitry, using nonlinear processing, gives the moth a hundredfold boost in sensitivity. A dim image therefore becomes quite bright, as the scientists show in a comparison between original and processed images.
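The study's neural model is more sophisticated than this, and the real processing is nonlinear, but the basic principle of pooling in space and time fits in a few lines of Python. In this illustrative sketch (the block sizes are our own choice), summing over 3 x 3 pixel blocks and 11 consecutive frames multiplies the collected photons 99-fold, in the ballpark of the hundredfold boost reported, at the cost of resolution and speed.

import numpy as np

def spatiotemporal_pool(frames, s=3, t=11):
    # Sum a (time, height, width) photon-count stack over s-by-s pixel
    # blocks and runs of t frames; pooling factor = s * s * t (here 99).
    T, H, W = frames.shape
    T2, H2, W2 = (T // t) * t, (H // s) * s, (W // s) * s
    x = frames[:T2, :H2, :W2].reshape(T2 // t, t, H2 // s, s, W2 // s, s)
    return x.sum(axis=(1, 3, 5))

# A dim "movie": on average one photon per twenty pixels per frame.
rng = np.random.default_rng(1)
dim = rng.poisson(0.05, size=(22, 90, 90))
bright = spatiotemporal_pool(dim)
print(dim.mean(), bright.mean())  # mean count per cell rises about 99-fold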

Another study we discussed last year showed that the moth's behavior is perfectly tuned to the motions of the flowers it seeks at night for nectar. "These two studies together suggest that the neural mechanisms of the moth visual system have been matched perfectly to the requirements of its environment." What luck for Darwinian selection to get mutations in both systems to match up perfectly! Evolution is beautiful.

Similarly, it will be intriguing to understand the mechanisms that control the optimal tuning of spatial and temporal properties across multiple light levels in the moth. Recent studies have unraveled neural circuit mechanisms underlying luminance-dependent changes in the spatial summation of the vertebrate retina. Further mechanistic understanding of evolution as an innovator at visual threshold might even help us to build more sensitive and efficient night vision devices in the future. Aside from these potential future innovations, this study reveals above all some of the key neural secrets underlying the night flight of a moth in the wilderness. This understanding as such is simply beautiful.

Evolutionists at the University of Basel are even claiming that Darwinian evolution is helping moths adapt to city life by making them avoid bright lights. One can always invent a story about how blind processes achieve perfection, but returning to reality, we know design when we see it. Whether it's the Monarchs shown in Illustra Media's documentary Metamorphosis: The Beauty and Design of Butterflies or the night navigators described here (the Bogong moth and the hawkmoth), we just need to recognize what the design points to.


There is "no free lunch -- especially not in biology." Aimless natural processes are woefully inadequate to deliver precision guided systems. Intelligence, by contrast, provides a feast for understanding.

Why slain myths become undead rather than stay buried.

Who Will Debunk The Debunkers?
By Daniel Engber


In 2012, network scientist and data theorist Samuel Arbesman published a disturbing thesis: What we think of as established knowledge decays over time. According to his book “The Half-Life of Facts,” certain kinds of propositions that may seem bulletproof today will be forgotten by next Tuesday; one’s reality can end up out of date. Take, for example, the story of Popeye and his spinach.

Popeye loved his leafy greens and used them to obtain his super strength, Arbesman’s book explained, because the cartoon’s creators knew that spinach has a lot of iron. Indeed, the character would be a major evangelist for spinach in the 1930s, and it’s said he helped increase the green’s consumption in the U.S. by one-third. But this “fact” about the iron content of spinach was already on the verge of being obsolete, Arbesman said: In 1937, scientists realized that the original measurement of the iron in 100 grams of spinach — 35 milligrams — was off by a factor of 10. That’s because a German chemist named Erich von Wolff had misplaced a decimal point in his notebook back in 1870, and the goof persisted in the literature for more than half a century.

By the time nutritionists caught up with this mistake, the damage had been done. The spinach-iron myth stuck around in spite of new and better knowledge, wrote Arbesman, because “it’s a lot easier to spread the first thing you find, or the fact that sounds correct, than to delve deeply into the literature in search of the correct fact.”

Arbesman was not the first to tell the cautionary tale of the missing decimal point. The same parable of sloppy science, and its dire implications, appeared in a book called “Follies and Fallacies in Medicine,” a classic work of evidence-based skepticism first published in 1989.1 It also appeared in a volume called “Magnificent Mistakes in Mathematics,” a guide to “The Practice of Statistics in the Life Sciences” and an article in an academic journal called “The Consequence of Errors.” And that’s just to name a few.

All these tellings and retellings miss one important fact: The story of the spinach myth is itself apocryphal. It’s true that spinach isn’t really all that useful as a source of iron, and it’s true that people used to think it was. But all the rest is false: No one moved a decimal point in 1870; no mistake in data entry spurred Popeye to devote himself to spinach; no misguided rules of eating were implanted by the sailor strip. The story of the decimal point manages to recapitulate the very error that it means to highlight: a fake fact, but repeated so often (and with such sanctimony) that it takes on the sheen of truth.

In that sense, the story of the lost decimal point represents a special type of viral anecdote or urban legend, one that finds its willing hosts among the doubters, not the credulous. It’s a rumor passed around by skeptics — a myth about myth-busting. Like other Russian dolls of distorted facts, it shows us that, sometimes, the harder that we try to be clear-headed, the deeper we are drawn into the fog.


No one knows this lesson better than Mike Sutton. He must be the world’s leading meta-skeptic: a 56-year-old master sleuth who first identified the myth about the spinach myth in 2010 and has since been working to debunk what he sees as other false debunkings. Sutton, a criminology professor at Nottingham Trent University, started his career of doubting very young: He remembers being told when he was still a boy that all his favorite rock stars on BBC’s “Top of the Pops” were lip-synching and that some weren’t even playing their guitars. Soon he began to wonder at the depths of this deception. Could the members of Led Zeppelin be in on this conspiracy? Was Jimmy Page a lie? Since then, Sutton told me via email, “I have always been concerned with establishing the veracity of what is presented as true, and what is something else.”

As a law student, Sutton was drawn to stories like that of Popeye and the inflated iron count in spinach, which to him demonstrated both the perils of “accepted knowledge” and the importance of maintaining data quality. He was so enamored of the story, in fact, that he meant to put it in an academic paper. But in digging for the story’s source, he began to wonder if it was true. “It drew me in like a problem-solving ferret to a rabbit hole,” he said.

Soon he’d gone through every single Popeye strip ever drawn by its creator, E.C. Segar, and found that certain aspects of the classic story were clearly false. Popeye first ate spinach for his super power in 1931, Sutton found, and in the summer of 1932 the strip offered this iron-free explanation: “Spinach is full of vitamin ‘A,’” Popeye said, “an’ tha’s what makes hoomans strong an’ helty.” Sutton also gathered data on spinach production from the U.S. Department of Agriculture and learned that it was on the rise before Segar’s sailor-man ever started eating it.

What about the fabled decimal point? According to Sutton’s research, a German chemist did overestimate the quantity of iron in spinach, but the mistake arose from faulty methods, not from poor transcription of the data.2 By the 1890s, a different German researcher had concluded that the earlier estimate was many times too high. Subsequent analyses arrived at something closer to the correct, still substantial value — now estimated to be 2.71 milligrams of iron per 100 grams of raw spinach, according to the USDA. By chance, the new figure was indeed about one-tenth of the original, but the difference stemmed not from misplaced punctuation but from the switch to better methodology. In any case, it wasn’t long before Columbia University analytical chemist Henry Clapp Sherman laid out the problems with the original result. By the 1930s, Sutton argues, researchers knew the true amount of iron in spinach, but they also understood that not all of it could be absorbed by the human body.3

The decimal-point story only came about much later. According to Sutton’s research, it seems to have been invented by the nutritionist and self-styled myth-buster Arnold Bender, who floated the idea with some uncertainty in a 1972 lecture. Then in 1981, a doctor named Terence Hamblin wrote up a version of the story without citation for a whimsical, holiday-time column in the British Medical Journal. The Hamblin article, unscholarly and unsourced, would become the ultimate authority for all the citations that followed. (Hamblin graciously acknowledged his mistake after Sutton published his research, as did Arbesman.)

In 2014, a Norwegian anthropologist named Ole Bjorn Rekdal published an examination of how the decimal-point myth had propagated through the academic literature. He found that bad citations were the vector. Instead of looking for its source, those who told the story merely plagiarized a solid-sounding reference: “(Hamblin, BMJ, 1981).” Or they cited someone in between — someone who, in turn, had cited Hamblin. This loose behavior, Rekdal wrote, made the transposed decimal point into something like an “academic urban legend,” its nested sourcing more or less equivalent to the familiar “friend of a friend” of schoolyard mythology.

Emerging from the rabbit hole, Sutton began to puzzle over what he’d found. This wasn’t just any sort of myth, he decided, but something he would term a “supermyth”: A story concocted by respected scholars and then credulously disseminated in order to promote skeptical thinking and “to help us overcome our tendency towards credulous bias.” The convolution of this scenario inspired him to look for more examples. “I’m rather a sucker for such complexity,” he told me.


Complicated and ironic tales of poor citation “help draw attention to a deadly serious, but somewhat boring topic,” Rekdal told me. They’re grabby, and they’re entertaining. But I suspect they’re more than merely that: Perhaps the ironies themselves can help explain the propagation of the errors.

It seems plausible to me, at least, that the tellers of these tales are getting blinkered by their own feelings of superiority — that the mere act of busting myths makes them more susceptible to spreading them. It lowers their defenses, in the same way that the act of remembering sometimes seems to make us more likely to forget. Could it be that the more credulous we become, the more convinced we are of our own debunker bona fides? Does skepticism self-destruct?


Sutton told me over email that he, too, worries that contrarianism can run amok, citing conspiracy theorists and anti-vaxxers as examples of those who “refuse to accept the weight of argument” and suffer the result. He also noted the “paradox” by which a skeptic’s obsessive devotion to his research — and to proving others wrong — can “take a great personal toll.” A person can get lost, he suggested, in the subterranean “Wonderland of myths and fallacies.”

In the last few years, Sutton has himself embarked on another journey to the depths, this one far more treacherous than the ones he’s made before. The stakes were low when he was hunting something trivial, the supermyth of Popeye’s spinach; now Sutton has been digging in more sacred ground: the legacy of the great scientific hero and champion of the skeptics, Charles Darwin. In 2014, after spending a year working 18-hour days, seven days a week, Sutton published his most extensive work to date, a 600-page broadside on a cherished story of discovery. He called it “Nullius in Verba: Darwin’s Greatest Secret.”

Sutton’s allegations are explosive. He claims to have found irrefutable proof that neither Darwin nor Alfred Russel Wallace deserves the credit for the theory of natural selection, but rather that they stole the idea — consciously or not — from a wealthy Scotsman and forest-management expert named Patrick Matthew. “I think both Darwin and Wallace were at the very least sloppy,” he told me. Elsewhere he’s been somewhat less diplomatic: “In my opinion Charles Darwin committed the greatest known science fraud in history by plagiarizing Matthew’s” hypothesis, he told the Telegraph. “Let’s face the painful facts,” Sutton also wrote. “Darwin was a liar. Plain and simple.”

Some context: The Patrick Matthew story isn’t new. Matthew produced a volume in the early 1830s, “On Naval Timber and Arboriculture,” that indeed contained an outline of the famous theory in a slim appendix. In a contemporary review, the noted naturalist John Loudon seemed ill-prepared to accept the forward-thinking theory. He called it a “puzzling” account of the “origin of species and varieties” that may or may not be original. In 1860, several months after publication of “On the Origin of Species,” Matthew would surface to complain that Darwin — now quite famous for what was described as a discovery born of “20 years’ investigation and reflection” — had stolen his ideas.

Darwin, in reply, conceded that “Mr. Matthew has anticipated by many years the explanation which I have offered of the origin of species, under the name of natural selection.” But then he added, “I think that no one will feel surprised that neither I, nor apparently any other naturalist, had heard of Mr. Matthew’s views.”

That statement, suggesting that Matthew’s theory was ignored — and hinting that its importance may not even have been quite understood by Matthew himself — has gone unchallenged, Sutton says. It has, in fact, become a supermyth, cited to explain that even big ideas amount to nothing when they aren’t framed by proper genius.

Sutton thinks that story has it wrong, that natural selection wasn’t an idea in need of a “great man” to propagate it. After all his months of research, Sutton says he found clear evidence that Matthew’s work did not go unread. No fewer than seven naturalists cited the book, including three in what Sutton calls Darwin’s “inner circle.” He also claims to have discovered particular turns of phrase — “Matthewisms” — that recur suspiciously in Darwin’s writing.

In light of these discoveries, Sutton considers the case all but closed. He’s challenged Darwin scholars to debates, picked fights with famous skeptics such as Michael Shermer and Richard Dawkins, and even written letters to the Royal Society, demanding that Matthew be given priority over Darwin.

But if his paper on the spinach myth convinced everyone who read it — even winning an apology from Terence Hamblin, one of the myth’s major sources — the work on Darwin barely registered. Many scholars ignored it altogether. A few, such as Michael Weale of King’s College, simply found it unconvincing. Weale, who has written his own book on Patrick Matthew, argued that Sutton’s evidence was somewhat weak and circumstantial. “There is no ‘smoking gun’ here,” he wrote, pointing out that at one point even Matthew admitted that he’d done little to spread his theory of natural selection. “For more than thirty years,” Matthew wrote in 1862, he “never, either by the press or in private conversation, alluded to the original ideas … knowing that the age was not suited for such.”


When Sutton is faced with the implication that he’s taken his debunking too far — that he’s tipped from skepticism to crankery — he lashes out. “The findings are so enormous that people refuse to take them in,” he told me via email. “The enormity of what has, in actual fact, been newly discovered is too great for people to comprehend. Too big to face. Too great to care to come to terms with — so surely it can’t be true. Only, it’s not a dream. It is true.” In effect, he suggested, he’s been confronted with a classic version of the “Semmelweis reflex,” whereby dangerous, new ideas are rejected out of hand.

Could Sutton be a modern-day version of Ignaz Semmelweis, the Hungarian physician who noticed in the 1840s that doctors were themselves the source of childbed fever in his hospital’s obstetric ward? Semmelweis had reduced disease mortality by a factor of 10 — a fully displaced decimal point — simply by having doctors wash their hands in a solution of chlorinated lime. But according to the famous tale, his innovations were too radical for the time. Ignored and ridiculed for his outlandish thinking, Semmelweis eventually went insane and died in an asylum. Arbesman, author of “The Half-Life of Facts,” has written about the moral of this story too. “Even if we are confronted with facts that should cause us to update our understanding of the way the world works,” he wrote, “we often neglect to do so.”

Of course, there’s always one more twist: Sutton doesn’t believe this story about Semmelweis. That’s another myth, he says — another tall tale, favored by academics, that ironically demonstrates the very point that it pretends to make. Citing the work of Sherwin Nuland, Sutton argues that Semmelweis didn’t go mad from being ostracized, and further that other physicians had already recommended hand-washing in chlorinated lime. The myth of Semmelweis, says Sutton, may have originated in the late 19th century, when a “massive nationally funded Hungarian public relations machine” placed biased articles into the scientific literature. Semmelweis scholar Kay Codell Carter concurs, at least insofar as Semmelweis was not, in fact, ignored by the medical establishment: From 1863 through 1883, he was cited dozens of times, Carter writes, “more frequently than almost anyone else.”

Yet despite all this complicating evidence, scholars still tell the simple version of the Semmelweis story and use it as an example of how other people — never them, of course — tend to reject information that conflicts with their beliefs. That is to say, the scholars reject conflicting information about Semmelweis, evincing the Semmelweis reflex, even as they tell the story of that reflex. It’s a classic supermyth!

And so it goes, a whirligig of irony spinning around and around, down into the depths. Is there any way to escape this endless, maddening recursion? How might a skeptic keep his sanity? I had to know what Sutton thought. “I think the solution is to stay out of rabbit holes,” he told me. Then he added, “Which is not particularly helpful advice.”

Footnotes

1. Its authors cite the story of the misplaced decimal point as an example of the “Bellman’s Fallacy” — a reference to a character from Lewis Carroll who says, “What I tell you three times is true.” Such mistakes, they wrote, illustrate “the ways in which truth may be obscured, twisted, or mangled beyond recognition, without any overt intention to do it harm.”
2. Another scholar with an interest in the spinach tale has found that in Germany, at least, the link between spinach and iron was being cited as conventional wisdom as early as 1853. This confusion may have been compounded by research that elided differences between dried and fresh spinach, Sutton says.
3. It’s long been suggested that high levels of oxalic acid — which are present in spinach — might serve to block absorption of iron, as they do for calcium, magnesium and zinc. Other studies find that oxalic acid has no effect on iron in the diet, though, and hint that some other chemical in spinach might be getting in the way.