
Sunday 17 December 2017

On early Coptic translations re: John 1:1

Translating John 1:1: The Coptic Evidence

(Solomon Landers, September 2006)
The Coptic translation of John 1:1
1a. ϩⲛ ⲧⲉϩⲟⲩⲉⲓⲧⲉ ⲛⲉϥϣⲟⲟⲡ ⲛϭⲓ ⲡϣⲁϫⲉ
1b. ⲁⲩⲱ ⲡϣⲁϫⲉ ⲛⲉϥϣⲟⲟⲡ ⲛⲛⲁϩⲣⲙ ⲡⲛⲟⲩⲧⲉ
1c. ⲁⲩⲱ ⲛⲉⲩⲛⲟⲩⲧⲉ ⲡⲉ ⲡϣⲁϫⲉ
It is becoming well-known that the primary Coptic translations of John 1:1c – the
Sahidic, the proto-Bohairic, and the Bohairic – do not render it “the Word was
God,” as is common in many English versions, but “the Word was a god,” found
notably in the New World Translation.
The significance of this is remarkable. First, the Coptic versions precede the New
World Translation by some 1,700 years, and are part of the corpus of ancient textual
witnesses to the Gospel of John. Second, the Coptic versions were produced at a
time when the Koine Greek of the Christian Greek Scriptures was still a living
language whose finer nuances could be understood by the Coptic translators, so
much so that many Greek words are left untranslated in the Coptic texts. Third,
the Coptic versions do not show the influence of later interpretations of Christology
fostered by the church councils of the 4th and 5th centuries CE.
The Greek text of John 1:1c says, καὶ θεὸς ἦν ὁ λόγος, an anarthrous construction that can be literally rendered as, “and a god was the Word.” Likewise, the Sahidic Coptic text of John 1:1c reads, ⲁⲩⲱ ⲛⲉⲩⲛⲟⲩⲧⲉ ⲡⲉ ⲡϣⲁϫⲉ, an indefinite construction that literally says “and a god was the Word.”
Coptic grammarians agree that this is what the Coptic says literally. But the
theological presuppositions of certain grammarians do not allow them to be
satisfied with that reading. Just as they attempt to do with the Greek text of John
1:1c, certain Evangelical scholars seek to modify the clear impact of “a god was the
Word.”
But whereas the Greek text allows for some ambiguity in an anarthrous
construction, the Coptic text does not allow for the same ambiguity in an indefinite
construction. Unlike Koine Greek, Coptic has not only the definite article, but the
indefinite article also. Or, a Coptic noun may stand without the article, in the “zero
article” construction. Thus, in Coptic we may find: ⲡⲛⲟⲩⲧⲉ, “the god,” ⲟⲩⲛⲟⲩⲧⲉ, “a god,” or ⲛⲟⲩⲧⲉ, “god.”
The Sahidic Coptic indefinite article is used to mark “a non-specific individual or specimen of a class: a morpheme marking an element as a non-specific individual or specimen of a class (‘a man,’ ‘other gods,’ etc.).” – Coptic Grammatical Chrestomathy (Orientalia Lovaniensia Analecta, 1988), A. Shisha-Halevy, p. 268
Given these clear choices, it cannot but be highly relevant to their understanding of
the meaning of John 1:1c that the Coptic translators of the Greek text chose to
employ the Coptic indefinite article in their translation of it.
Were the Coptic translators looking at John 1:1c qualitatively, as has been
suggested by some scholars in their analysis of the Greek text? That is not likely,
since the Coptic text does not use the abstract prefixes before the count noun for
god, ⲛⲟⲩⲧⲉ. They were specifically calling the Word “a god,” and only in the
sense that a god is also “divine” can a translation in the order of “the Word was
divine” be glossed from the Coptic text. Whereas “the Word was divine” can be a
legitimate English paraphrase of the Coptic text, it is not the literal reading.
The Coptic evidence is significant given the fact that Bible scholars have roundly
chastised the New World Translation for its supposedly “innovative” rendering, “the
Word was a god” at John 1:1c. But this very way of understanding the Greek text
of John 1:1c now proves to be, not new, but ancient, the same translation of it as
given at a time when people still spoke the Greek that John used in composing his
Gospel.
But what about John 1:18, where the Coptic text has the definite article before
ⲛⲟⲩⲧⲉ with reference to the only[-begotten] Son: ⲡⲛⲟⲩⲧⲉ ⲡϣⲏⲣⲉ ⲛⲟⲩⲱⲧ?
Certain Evangelical scholars have asked, ‘Is it reasonable that the Coptic
translators understood the Word to be “a god” at John 1:1 and then refer to him as
“the god,” or “God,” at John 1:18?’
That is a logical question, but the logic is backwards. Since John 1:1 is the
introduction of the Gospel, the more logical question is ‘Is it reasonable that the
Coptic translators understood the Word to be God at John 1:18 after referring to
him as “a god” at John 1:1c?’
No. Although the Coptic translators use the definite article at John 1:18 in
identifying the Word, this use is demonstrative and anaphoric, referring back to the
individual, “the one who” is previously identified as “a god” in the introduction.

Thus, John 1:18 identifies the Word specifically not as “God,” but as “the god”
previously mentioned who was “with” (“in the presence of,” Coptic: ⲛⲛⲁϩⲣⲙ)
God. This god, who has an intimate association with his Father, is contrasted with
his Father, the God no one has ever seen.
A modern translation of the Coptic of John 1:18 is “No one has ever seen God at all.
The god who is the only Son in the bosom of his Father is the one who has explained
him,” as found at http://copticjohn.com.
Being closer in time to the original writings of the apostle John, and crafted at a
time when Koine Greek was still spoken and well-understood, the Coptic evidence
weighs heavily in the direction of those who see in the Gospels a Jesus who is not
God, but the Son of God, a divine being who is “the image of the invisible God,” but
not that Invisible God himself. This one is the Representative of his Father, who declared the Good News of salvation to mankind, and sanctified his Father’s Name.

Using design to disprove design?

The Origin of Life: Dangers of Taking Research Claims at Face Value
Brian Miller

In an article here yesterday, I wrote about philosopher Vincent Torley’s critique of my posts related to the origin of life, and I corrected his errors on thermodynamics. Today, I will correct the errors related to the state of origins research. As a general overview, origin-of-life research falls into two categories. The first is experiments that attempt to accurately model the conditions on the early Earth. The classic example is the Stanley Miller experiment, which started with a combination of reducing gases (methane, ammonia, and hydrogen) then believed to have existed, to which the researchers applied electrical discharges. The resulting reactions produced several amino acids, a result heralded as a major breakthrough.

Unfortunately, scientists later recognized that the early atmosphere was likely not so reducing. Instead, it contained a different combination of gases, including carbon dioxide. All subsequent experiments conducted with more realistic starting ingredients failed to produce the building blocks of life (amino acids, carbohydrates, nucleotides, and lipids) in significant quantities. An additional challenge for all such experiments, including Miller’s, was that they produced other byproducts that would have caused deleterious cross-reactions. Such conditions would have prevented any subsequent stages leading to life. All roads led to dead ends.

The consistent failure of realistic experiments led to a second class of experiments, which do not attempt to model actual conditions on the early Earth. Instead, they follow what is termed prebiotic synthesis. Origins expert Robert Shapiro outlined the typical process used for RNA in his analysis of origin-of-life research. Such experiments involve a long series of highly orchestrated steps, which include purifying desired products, removing unwanted byproducts, changing physical and chemical conditions, adding unrealistically high concentrations of assisting substances, and other interventions to ensure that the target molecules are achieved.

Attempting to relate such research to actual events on the early Earth leads to an almost comical series of dozens of highly improbable events. Various proposed origins scenarios over the years have involved meteorite showers, volcanoes, poisonous gas, and other phenomena coupled to the precise transportation of lucky molecules through a series of subsequent environments while always passing through the perfect intermediate conditions. Torley actually describes just such a fanciful scenario proposed by Sutherland. As an amusing side note, a friend reviewed origins research, and she was not sure if she was reading about scientific theories or the synopsis of the next Michael Bay natural disaster movie. Ironically, such synthesis experiments actually bolster the design argument by demonstrating that the origin of the building blocks of life and their subsequent assembly require substantial intelligent direction.

My previous article described how two of the major obstacles to the origin of life are overcoming the free energy barriers and producing the fantastically improbable configurations of atoms associated with life. The synthetic experiments bypass these challenges through intelligent intervention. As an illustration, the origin of complex molecules such as RNA and lipids must start with high free-energy solutions of reactants. However, the abundance of such sets of molecules under natural conditions drops exponentially with their free energy. Researchers overcome this challenge by starting with highly concentrated solutions of the ideal combination of pure chemicals. Highly concentrating the chemicals artificially  increases their effective free energies, so reactions are driven in the desired direction.
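
The concentration effect described above follows from the standard thermodynamic relation ΔG = ΔG° + RT ln Q. Here is a minimal sketch of that arithmetic in Python; the standard free energy value and the concentrations are made-up numbers chosen only to illustrate the direction of the effect, not figures from any cited experiment.

# Illustrative only: how concentration shifts the free energy of a reaction.
# dG0 and the concentrations below are assumed values, not data from any study.
import math

R = 8.314          # gas constant, J/(mol*K)
T = 298.15         # temperature, K
dG0 = 20000.0      # assumed standard free energy change, J/mol (uphill)

def reaction_free_energy(dG0, reactant_conc, product_conc):
    """dG = dG0 + RT*ln(Q), with Q = [product]/[reactant] for a 1:1 reaction."""
    Q = product_conc / reactant_conc
    return dG0 + R * T * math.log(Q)

# Dilute "natural" reactant concentration: the reaction stays strongly uphill (dG > 0).
print(reaction_free_energy(dG0, reactant_conc=1e-6, product_conc=1e-3))
# Artificially concentrated reactant: the same reaction is now driven forward (dG < 0).
print(reaction_free_energy(dG0, reactant_conc=10.0, product_conc=1e-3))

Raising the reactant concentration by seven orders of magnitude flips the sign of ΔG in this toy case, which is the sense in which concentrating pure chemicals artificially increases their effective free energies.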

In reality, many of the proposed starting molecules for origins theories would have quickly reacted on the early Earth with other molecules in the environment, preventing substantial buildup (see The Mystery of Life’s Origin, Ch. 4). This challenge also holds true for the origination of any autocatalytic system of reactions, which is another essential component of life’s origin. The dilemma is similar to that of an entrepreneur who wishes to start a business to generate a profit, but starting it requires a million dollars of initial investment. Unfortunately, the entrepreneur is destitute and has no credit for borrowing the needed capital. As a result, he has no way to take even the first step.

The configurational challenge relates to the fact that vast numbers of chemical reactions could take place on the early Earth. However, life’s origin requires that only specific ones proceed while other, far more likely ones are blocked. This hurdle relates both to the origin of the building blocks and to that of cellular metabolism. In addition, in large molecules the atoms can take on numerous configurations, and the right ones are exceptionally unlikely. Shapiro described how the atoms in RNA could form hundreds of thousands to millions of other stable organic molecules. Researchers overcome this challenge by forcing the atoms to achieve the desired arrangements through tightly controlling the reaction steps. Such constraining of outcomes parallels the role of information in constraining messages in information theory. And the relationship between information and precise causal control in biology was made explicit in the talk by Paul Griffiths at the Royal Society meeting on New Trends in Evolutionary Biology.
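
To make the scale of the configurational problem concrete, here is a small bit of counting in Python. The isomer count and the chain length below are assumptions chosen only to mirror the ranges mentioned in the text (Shapiro’s “hundreds of thousands to millions” of alternative stable molecules), not measured values.

# Illustrative arithmetic: information needed to single out one configuration.
import math

stable_isomers = 1_000_000                  # assumed number of alternative stable molecules
bits_to_specify = math.log2(stable_isomers) # bits needed to pick out exactly one of them
print(f"Specifying 1 of {stable_isomers:,} isomers takes about {bits_to_specify:.1f} bits")

# For comparison, a 100-residue chain drawn from the 20 common amino acids:
sequences = 20 ** 100
print(f"Possible 100-residue sequences: about 10^{math.log10(sequences):.0f}")

In information-theoretic terms, each constraint the researchers impose on which reactions may proceed supplies some of those bits; the sketch only shows how quickly the required information grows with the number of alternatives.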

To summarize, researchers have shown how the origin of life might proceed through intelligent design, not blind processes. Shapiro illustrates this point beautifully in analyzing the experiments of John Sutherland, but his comments relate to all such experiments.

Reviewing Sutherland’s proposed route, Shapiro noted that it resembled a golfer who, having played an 18-hole course, claimed that he had shown that the golf ball could have, through some combination of wind, rain, heating, cooling, dehydration, and ultraviolet irradiation, played itself around the course without the golfer’s presence.

In Torley’s article he references several prebiotic synthesis experiments, but he fails to appreciate their irrelevance to the origins problem for the reasons outlined above. For instance, he describes how Sutherland and other researchers used ultraviolet light to help promote reactions leading to life. What Torley missed was that these experiments used a very specific wavelength of light (e.g., 240 nanometers) at the ideal intensity for the optimal amount of time to drive the desired reactions. If the experiments had used light mimicking that from the sun hitting the early Earth, they would have failed, since other wavelengths would have destroyed the target molecules. The difference between the use of light in the experiments and the actual sun parallels the difference between the fire from a blowtorch used by a skilled craftsman and an open fire burning down a building.

Torley also describes how different researchers were able to drive key reactions even when they contained contaminants. For instance, Sutherland included a phosphate at the beginning of his experiments designed to create nucleotides. Similarly, Jack Szostak’s group created vesicles (containers) out of two fatty acids, which could house an RNA enzyme (ribozyme), and he added Mg2+, which under other conditions would have prevented the vesicles from forming. However, the relevance of these experiments was greatly exaggerated.

The use of such terms as “contaminant” and “messy” is highly misleading. Phosphate is an essential component of the target nucleotide molecules, and Mg2+ was essential for activating the ribozymes. They were able to include these molecules because the experiments were meticulously designed to ensure they would produce the desired outcomes. If molecules were added which would have been abundant on the early Earth (true contaminants), the experiments would have failed. As an analogy, the researchers resemble car owners boasting about how their car engines could function even in the presence of such “contaminants” as gasoline and motor oil. However, if sand and glue were added, the engines would have fared far less well.

Torley mentions one additional class of studies which use simulations to attempt to address origin-of-life challenges. Specifically, he references Nigel Goldenfeld’s research to solve the homochirality problem  — many building blocks of life can come in either a right-handed or a left-handed form, but life requires only one handedness (homochiral). The results from simulation experiments are generally treated with great caution since they can be designed to model any imaginable conditions and to proceed according to any desired rules.

As a case in point, Goldenfeld’s study is based on an abstract mathematical model and numerical simulations that center on an achiral molecule (one whose mirror image is the same as itself) interacting with the right- and left-handed versions (enantiomers) of a chiral molecule to yield another copy of the latter. For instance, the “autocatalytic” reaction could start with one left-handed amino acid and end with two left-handed amino acids. The simulation set the dynamics of the reactions to eventually lead to a pure mixture of one enantiomer.
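
Goldenfeld’s own equations are not reproduced in Torley’s article, so the following is only a hedged illustration of the general kind of scheme being described: a Frank-type toy model in which each enantiomer autocatalytically copies itself from an achiral feedstock and the two enantiomers destroy one another on contact. Every rate constant and starting value is an arbitrary assumption, including the small initial excess that the model then amplifies.

# A generic Frank-type toy model of chiral amplification (NOT Goldenfeld's actual
# model; all rates and initial values are arbitrary assumptions for illustration).
# L and D are the two enantiomers; A is an achiral feedstock consumed as it is
# autocatalytically converted, and the L*D term removes both enantiomers equally.
k_auto, k_antag = 1.0, 1.0
A, L, D = 1.0, 0.0101, 0.0100        # a 1 percent excess of L is put in by hand
dt, steps = 0.01, 50_000             # integrate to t = 500 with forward Euler

for _ in range(steps):
    dA = -k_auto * A * (L + D) * dt
    dL = (k_auto * A * L - k_antag * L * D) * dt
    dD = (k_auto * A * D - k_antag * L * D) * dt
    A, L, D = A + dA, L + dL, D + dD

ee = (L - D) / (L + D)               # enantiomeric excess
print(f"L = {L:.4f}, D = {D:.6f}, enantiomeric excess = {ee:.1%}")

The point of the sketch is that the symmetry breaking is built into the model’s rules: the hand-picked initial excess and the mutual-antagonism term do the work, which is the criticism developed below.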


The main challenge with these results is that the underlying model is completely unrealistic. No chiral building block of life (e.g. right-handed ribose) has been shown to interact with any substance to self-replicate. On the contrary, in all realistic environments mixtures with a bias of one enantiomer tend toward mixtures of equal percentages of both left-handed and right-handed versions. Goldenfeld “solved” the homochirality problem by creating an artificial world that eliminated all real-world obstacles. All simulations that purport to be breakthroughs in origins problems follow this same pattern. Conditions are created that remove the numerous practical challenges, and the underlying models are biased toward achieving the desired results.

The skilled trades; still the smart choice. III

Saturday 16 December 2017

On Christ's loyal and wise steward: The Watchtower Society's commentary.

FAITHFUL AND DISCREET SLAVE

When answering the apostles’ question concerning his future presence and the conclusion of the existing system of things, Jesus Christ included a parable, or illustration, dealing with a “faithful and discreet slave.” The faithful slave’s master appointed him over his domestics, or household servants, to provide them their food. If approved at his master’s coming (evidently from some trip), the slave would be rewarded by being placed over all the master’s belongings.—Mt 24:3, 45-51.

In the parallel illustration at Luke 12:42-48, the slave is called a steward, that is, a house manager or administrator, one placed over servants, though he is himself a servant. Such a position was often filled in ancient times by a faithful slave. (Compare Ge 24:2; also the case of Joseph at Ge 39:1-6.) In Jesus’ illustration the steward is first assigned only to the supervision and timely dispensation of the food supplies to the master’s body of attendants, or servants, and later, because of his faithful and discreet handling of this ministry, his assignment is widened out to embrace supervision of all the master’s holdings. Regarding the identification of the “master” (Gr., kyʹri·os, also rendered “lord”), Jesus had already shown that he himself occupied such a position toward his disciples, and they addressed him as such on occasion. (Mt 10:24, 25; 18:21; 24:42; Joh 13:6, 13) The question remains concerning the application of the figure of the faithful and discreet slave, or steward, and what his dispensing food to the domestics represents.

“Slave” is in the singular. This, however, does not require that the “slave” prefigure only one particular person who would be so privileged. The Scriptures contain examples of the use of a singular noun to refer to a collective group, such as when Jehovah addressed the collective group of the Israelite nation and told them: “You are my witnesses [plural], . . . even my servant [singular] whom I have chosen.” (Isa 43:10) The “antichrist” is shown to be a collective group made up of individual antichrists. (1Jo 2:18; 2Jo 7) Similarly, the “slave” is composite. It was to be appointed in the time of the end as a channel to give out spiritual “food at the proper time.” (Mt 24:3, 45; Lu 12:42) In the first century, Jesus set a pattern for how spiritual food would be dispensed in the Christian congregation. Just as he had distributed literal food to the crowds through the hands of a few disciples, spiritual food was to be provided through the hands of a few. (Mt 14:19; Mk 6:41; Lu 9:16) Jesus trained the apostles for the role they would have after Pentecost 33 C.E. as a channel in dispensing spiritual food. They were later joined by other elders to serve as a governing body in order to settle issues and to direct the preaching and teaching of the Kingdom good news. (Ac 2:42; 8:14; 15:1, 2, 6-29) After the death of the apostles, a great apostasy set in. But in the time of the end—in keeping with the pattern he set in the first century of feeding many through the hands of a few—Jesus selected a small group of spirit-anointed men to serve as “the faithful and discreet slave,” to prepare and dispense spiritual food during his presence.


The domestics are all those who belong to the Christian congregation, both the anointed and the “other sheep,” who are fed spiritual food. (Joh 10:16) This includes the individual members making up “the faithful and discreet slave,” since they too are recipients of the food dispensed. Those who make up the faithful slave will receive expanded responsibility if they are found faithful at the master’s promised coming. When they receive their heavenly reward and become corulers with Christ, he will appoint them over “all his belongings.” Along with the rest of the 144,000, they will share Christ’s vast heavenly authority.—Mt 24:46, 47; Lu 12:43, 44.

The left is right?: Pros and cons.

A clash of Titans. LXV

File under "Well said" LVII

That it is better 100 guilty Persons should escape than that one innocent Person should suffer, is a Maxim that has been long and generally approved.
Benjamin Franklin.

It's official, Wikipedia has become the Borg.

Wikipedia Co-Founder Blasts “Appallingly Biased” Wikipedia Entry on Intelligent Design
David Klinghoffer | @d_klinghoffer

When it comes to intelligent design, Wikipedia and its axe-grinding editors are ridiculously biased and unfair. And guess what? Even Wikipedia co-founder  Larry Sanger  agrees. He wrote as much last week on the Talk page for the Wiki article on ID,  under the heading, “My $0.02 on the issue of bias”:

As the originator of and the first person to elaborate Wikipedia’s neutrality policy, and as an agnostic who believes intelligent design to be completely wrong, I just have to say that this article is appallingly biased. It simply cannot be defended as neutral. If you want to understand why, read this. I’m not here to argue the point, as I completely despair of persuading Wikipedians of the error of their ways. I’m just officially registering my protest. —Larry Sanger (talk)  05:30, 8 December 2017 (UTC)

A philosophy PhD, Dr. Sanger worked with Jimmy Wales to found Wikipedia in 2001. He is a self-described “zealot for neutrality,” and reasonably concludes that Wikipedia’s content on intelligent design is anything but neutral. This is the man who came up with the name “Wikipedia.” He further introduces himself on his Talk page:

I’m no longer associated with Wikipedia, which I co-founded. (I named it, crafted much of the policy that now guides the project, and led the project for its first year. As Jimmy Wales declared on March 25, 2002, a week before I resigned, I was “the final arbiter of what the consensus is” on Wikipedia.)

A thoughtful reader discovered Sanger’s candid comment after he (the reader) sought to edit the entry on ID. He says he corrected the absurdly biased opening sentence, only to find his edits almost instantly reversed, “within one minute.” The first sentence of the entry reads:

Intelligent design (ID) is a religious argument for the existence of God, presented by its proponents as “an evidence-based scientific theory about life’s origins”,[1][2] though it has been found to be pseudoscience.[3][4][5]

This matters for an obvious reason: countless people curious about ID receive their introduction to the subject via a Web search that starts, thanks to Google, with a visit to the Wikipedia article. Many will stop right there. Many science reporters and others in the media — heck, many professional scientists — seem to have informed themselves on the topic by going no further than Wikipedia. You don’t have to be a neutrality “zealot” to understand that evidence of design in nature (not the “existence of God”) poses a question of huge, urgent interest, that serious scientific (not religious, or pseudoscientific) arguments are made for ID, and that it does a terrible disservice to public awareness to so grossly mislead readers. (And not only readers. Don’t forget anyone who uses Amazon’s Alexa.) That is the case even if ID is ultimately wrong, or “completely wrong,” as Sanger puts it.

In a long and carefully argued essay, “Why Neutrality?”, he laments, “There’s a great latent demand for neutral content, and the demand is unmet.” And that is no doubt true. However, at Wikipedia, a masked mob of pseudonymous trolls has taken over, and the public’s “latent demand” is permanently blocked from being satisfied. As I’ve pointed out, many editors hardly bother to hide their ideological bias.

An interesting article at the news site Vice gives the background on Sanger’s involvement with Wikipedia.

It was Sanger, then, who synthesized emerging “wiki” technology with Nupedia’s original vision. Sanger came up with the name “Wikipedia,” wrote its founding documents, and spent the next 14 months as the site’s sole paid editor and philosophical leader. But as word about the project spread throughout the web, Wikipedia and Sanger were inundated with new users, some of them trolls, who plagued Sanger with “edit wars” and resisted input from experts. In 2002, Sanger left Wikipedia and became an outspoken critic of the site, criticizing its quality and the disregard many users displayed for experts.

Indeed. We’ve already recounted how distinguished paleo-entomologist Günter Bechly, after coming out for intelligent design, found his entry deleted. This was following a surreal online editorial discussion led by an editor going by the pseudonym Jo-Jo Eumerus. Jo-Jo is a self-described 23-year-old “boy” from Switzerland with a dual online identity as a 500-year-old wizard. Under this other identity, the wizard  Septimus Heap, Jo-Jo explains of himself that, having been “diagnosed with Asperger syndrome,” he “sometimes [has] problems with society due to this.” Certainly he had a problem with Günter Bechly. The editors claimed the move to delete the entry was the result of their sudden realization that Bechly isn’t “notable” enough for Wikipedia. The  notability argument is a joke, and  even Darwinists conceded that Bechly was deleted for his support of ID.

It was Jo-Jo who made the final decision to permanently pull the plug on Dr. Bechly’s entry. The disparity in expertise — wizard versus paleo-entomologist — is blindingly obvious. Bechly changed his views on evolution and ID while serving as a curator at the State Museum of Natural History in Stuttgart, Germany, where he amassed an extremely impressive scholarly record studying the evolution of dragonflies over tens of millions of years. As Jo-Jo says of his own daily activities, “Nowadays, I mostly spend my time with World Building projects and seeing a bit forward with life.”


For more on Bechly’s turn to ID, see here:



Another ID scholar, Walter Bradley at Baylor University, suffered  comparable treatment at the hands of the fantastical pseudonyms editing Wikipedia. Manhandled by entities including Freakshownerd, Apollo The Logician, and Theroadislong, Dr. Bradley was not erased but he did see his entry disemboweled, reduced to nearly nothing.

You can’t fight back because people like Jo-Jo, Freakshownerd, etc. seem to have unlimited time at their disposal to revert edits they don’t like, over and over and over, at lightning speed. The sociology is interesting, but so is the psychology. As Larry Sanger recounts his experiences, Wikipedia from the start attracted not only trolls as editors, but trolls with, in some cases, mental problems.

There was one guy called 24, but I suspect that he was literally insane. He wrote some really wacked-out stuff. And there’s another one called LIR. That person was… abrasive is not the right word, and [them] being confrontational wasn’t the problem. It was them doing so needlessly, for no good purpose other than to stir the pot. Because [Wikipedia] was wide open, and anybody could participate, there were people who would spend a lot of their time wasting everyone else’s time. I doubt that many of those people are just “bad,” they might just be abrasive, confused… “mentally unhinged,” in a few cases.

Having all that leisure to volunteer in “editing” online encyclopedia articles might correlate with being retired, or a dedicated hobbyist, or it could correlate with being on the margins, someone with “problems with society,” “confused,” “wacked-out,” “unhinged,” even “insane.” I apologize if this sounds unkind. But high-functioning people — employed or with other serious responsibilities, with friends, families, community commitments, and more — are not ideally suited to be Wikipedia editors or to engage in the endless editing wars that go along with it.

And this, again, is how a large segment of the public is introduced to the subject of intelligent design. The page received 30,494 views in the past 30 days alone. It’s not only the ID entry and related articles that are twisted by bias and inaccuracy, of course. But design, as I said, poses an ultimate question that scientists and philosophers have been discussing for millennia, and will go on discussing. That is not true of many other controversial subjects on Wikipedia.

It’s a real shame. As Larry Sanger says, we “despair of persuading Wikipedians of the error of their ways.” Sadly, there’s not much you can do about it — other than to warn your friends, family, and other contacts to be wary and  consult other sources. And that I certainly urge you to do.

We know more, we understand less?

Are Scientists Smarter Now, or Dumber?
David Klinghoffer | @d_klinghoffer

A conversation with a friend of our oldest son elicited, if I understood correctly, the observation from this friend that people, including students, know more and are better educated than in previous generations, thanks to things like the Internet. This is a very bright and curious young man, but I was dumbfounded by his statement.

He pointed to the fact that we, as a culture, “know more” than ever before. That is true in a limited sense, but acquisition of data is a long way from having the wisdom to understand and interpret it, which I think is what we mean when we talk about the kind of smarts that really matters. It’s what you do with what you know.

On the gathering specifically of scientific knowledge, our paleontologist colleague Günter Bechly nails it in a comment on Facebook:

My theory is: Scientists nowadays are far dumber than scientists centuries ago, which is a consequence of over-specialization and lack of philosophical education in natural science university curricula. The only reason why we know so much more than centuries ago is time, much larger number of scientists, and much more resources pumped into science, which resulted in an explosion of knowledge acquired by dumber scientists.

This might explain the unthinking dismissal of an idea like intelligent design not just by media people with a tendency to shallowness, but by scientists. I mentioned here the other day that even professionals in the sciences often seem to have gleaned the little they understand about ID from skimming the main Wikipedia article.


ID is a quintessential multidisciplinary field of study, asking us to consider not only biology but chemistry, cosmology, philosophy, and more. As Dr. Bechly points out, the trend to ever greater specialization, combined with philosophical illiteracy, goes a long way toward explaining the condition of our “dumb” scientists.

Why OOL Science remains design Opponents' weakest point II

The Origin of Life: The Information Challenge
Brian Miller

I previously responded to an article by Vincent Torley on the origin of life by correcting the errors in his understanding of thermodynamics and in the state of origins research. Today, I will correct mistakes related to information theory, and I will identify the fundamentally different approaches by ID advocates and critics toward assessing evidence.

Semantic Information
The first issue relates to the comparison of the sequencing of amino acids in proteins to the letters in a sentence. This analogy is generally disliked by design critics since it so clearly reveals the powerful evidence for intelligence from the information contained in life. It also helps lay audiences see past the technobabble and misdirection often used to mislead the public, albeit unintentionally.

Torley’s criticism centers on the claim that sequences of amino acids in life demonstrate functional but not semantic information.

Dr. Miller, like Dr. Axe, is confusing functional information (which is found in living things) with the semantic information found in a message…functional information is much easier to generate than semantic information, because it doesn’t have to form words, conform to the rules of syntax, or make sense at the semantic level.

Unfortunately, this assertion completely contradicts the opinion of experts in the field such as Shen and Tuszynski.

Protein primary structures have the same language structure as human languages, especially English, French, and German. They are both composed of several basic symbols as building blocks. For example, English is composed of 26 letters, while proteins are composed of 20 common amino acids. A protein sequence can be considered to represent a sentence or a paragraph, and the set of all proteins can be considered to represent the whole language. Therefore, the semantic structure is similar to a language structure which goes from “letters” to “words,” then to “sentences,” to “chapters,” “books,” and finally to a “language library.”


The goals of semantic analysis for protein primary structure and that for human languages are basically the same. That is, to find the basic words they are composed of, the meanings of these words and the role they play in the whole language system. It then goes on to the analysis of the structure of grammar, syntax and semantics.

In the same way letters combine to form meaningful sentences, the amino acids in proteins form sequences that cause chains to fold into specific 3D shapes which achieve such functional goals as forming the machinery of a cell or driving chemical reactions. And sentences combine to form a book in the same way multiple proteins work in concert to form the highly integrated cellular structures and to maintain the cellular metabolism. The comparison is nearly exact.

Sequence Rarity
A second issue Torley raises is the question of the rarity of protein sequences. In particular, he argues that the research of Doug Axe, which demonstrated extreme rarity, was invalid. Criticisms against Axe’s work have been addressed in the past, but the probability challenge is so great that such a response is unnecessary. The most essential early enzymes would have needed to connect the breakdown of some high-energy molecule such as ATP with a metabolic reaction which moves energetically uphill. One experiment examined the likelihood of a random amino acid sequence binding to ATP, and results indicated that the chance was on the order of one in a trillion. Already, the odds against finding such a functional sequence on the early Earth strain credibility. However, a useful protein would have required at least one other binding site, which alone squares the improbability, and an active site that properly oriented the target molecules and created the right chemical environment to drive and interconnect two reactions — the breakdown of ATP and the target metabolic one. The odds of a random sequence stumbling on such an enzyme would have to have been far less than 1 in a trillion trillion, clearly beyond the reach of chance.
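
The arithmetic behind “squares the improbability” can be laid out explicitly. In the sketch below, the one-in-a-trillion figure is the ATP-binding estimate cited above; treating a second binding site as an independent event of similar probability, and the extra factor for a working active site, are simplifying assumptions used only to show how the estimates multiply.

# Rough probability bookkeeping for a minimal two-site enzyme.
p_one_site = 1e-12            # random sequence binds ATP (estimate cited in the text)
p_second_site = 1e-12         # assumed comparable and independent second site
p_two_sites = p_one_site * p_second_site
print(f"Two independent binding sites: about 1 in {1 / p_two_sites:.0e}")   # ~1e+24

# Any further requirement, such as a properly arranged active site, multiplies in
# another small factor; the 1e-3 below is a placeholder assumption, not a measurement.
p_active_site = 1e-3
print(f"With an active-site factor: about 1 in {1 / (p_two_sites * p_active_site):.0e}")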

The challenge for nucleotide-based enzymes (ribozymes) is equally daunting. Stumbling across a random sequence that could perform even one of the most basic reactions also requires a search library in the trillions. So any multistage process would also be beyond the reach of chance. A glimmer of hope was offered by Jack Szostak when he published a paper that purported to show RNA could self-replicate without the aid of any enzyme. Unaided self-replication would have greatly aided the search process. However, he later retracted the paper after the results could not be reproduced.

The problem has since been shown to be even worse. In particular, Eugene Koonin determined that the probability of an RNA-to-protein translation system forming through random arrangements of nucleotides is less than 1 in 10^1000, which would equate to an impossibility in our universe. His solution to this mathematical nightmare was to propose a probabilistic deus ex machina. He actually argued for the existence of a multiverse which would contain a virtually infinite number of Earth-like planets. We just happen to reside in a lucky universe on the right planet where life won a vast series of lotteries.

Genetic Code
The next issue relates to the problem of explaining how a protein sequence was encoded into RNA or DNA using a genetic code in which each amino acid corresponds to sets of three nucleotides known as codons. The key challenge is finding a causal process for the encoding when no physical or chemical connection exists between a given amino acid and its corresponding codons. Torley argues that a connection does exist. He quotes Dennis Venema, who stated that certain codons bind directly to their amino acids. Unfortunately, this claim is false. Venema was referencing the research of Michael Yarus, but he misinterpreted it. Yarus states that no direct physical connection exists between individual amino acids and individual codons. He instead argues for correlations between amino acids and the codons residing in chains of nucleotides (aptamers) that bind to them. However, Koonin argued that correlations only existed for a handful of amino acids, and they were the least likely ones to have formed on the early Earth.

Torley references the article where Koonin dismisses Yarus’s model, but he misinterprets him by implying that the code could be partly explained by some chemical connection. Koonin does reference the possibility that the evolution of the modern translation system was aided by chemical attractions between amino acids and pockets in tRNA. But he states that the sequences in those pockets would have been “arbitrary,” so they would not relate to the actual code. As a result, no physical explanation exists for the encoding of amino acid sequences into codons, nor can the decoding process be either explained or directly linked to the encoding process. Such a linkage is crucial, since the encoding and decoding must use the same code. However, without any physical connection, the code must have preexisted the cell, particularly since both processes would have had to be instantiated around the same time. The only place a code can exist outside of physical space is in a mind.
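
Since the standard genetic code is simply a published lookup table, the point about encoding and decoding can be made concrete with a toy sketch. The few codon assignments below are from the standard code; the tiny encode/decode pair is only an illustration of the argument above, namely that both directions work off the same conventional table, with nothing in the chemistry of the codons forcing the assignments.

# A fragment of the standard genetic code as a plain lookup table.  The mapping is
# conventional: encoding (protein -> RNA) and decoding (RNA -> protein) agree only
# because both sides share the same table, not because of any chemical necessity.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "UUC": "Phe",
    "GGU": "Gly", "GGC": "Gly", "UGG": "Trp",
}
REVERSE_TABLE = {"Met": "AUG", "Phe": "UUU", "Gly": "GGU", "Trp": "UGG"}

def decode(rna):
    """Translate an RNA string, three bases at a time, via the shared table."""
    return [CODON_TABLE[rna[i:i + 3]] for i in range(0, len(rna), 3)]

def encode(protein):
    """Map amino acids back to codons using the same conventional table."""
    return "".join(REVERSE_TABLE[aa] for aa in protein)

print(decode("AUGUUUGGU"))            # ['Met', 'Phe', 'Gly']
print(encode(["Met", "Phe", "Gly"]))  # 'AUGUUUGGU'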

Examining Assumptions
In my responses to Torley I have addressed several problems with his interpretation of specific experiments. However, a more fundamental issue is the difference between our overall approaches to evaluating evidence, which I will illustrate with an analogy. Imagine that a boxing match is scheduled between Daniel Radcliffe, the actor who played Harry Potter, and Manny Pacquiao, the former world boxing champion. You learn that the fight will take place in three days and that Radcliffe recently broke his leg and both arms in a skiing accident. You tell your friend that you are certain Pacquiao will win. Your friend then says that you are mistaken, since Radcliffe will simply heal his body with a flick of his magical wand and then turn Pacquiao into a rat. You suddenly realize that your friend is conceiving of the fight in the imaginary world of Hogwarts from the fantasy series.

The same difference in perspectives exists between ID proponents and materialist scientists. The former wish to focus on experiments that attempt to accurately model conditions on the early Earth and on actual physical processes that have been demonstrated. In contrast, the latter wish to focus on highly orchestrated experiments which have no connection to realistic early conditions and on physical processes that reside only in the imaginations of researchers or in artificial worlds created through simulations. For instance, Torley references an article that proposes hydrogen peroxide could have assisted in generating homochiral mixtures of nucleotides, but the author fully acknowledges that his ideas are purely speculative. Likewise, Koonin describes a scenario of how the protein translation system could have evolved, but nearly every step is only plausible if intelligently guided. In other words, he is constantly smuggling in design without giving due credit. To accept any of these theories requires blind faith in materialist philosophical assumptions.

At the end of his article, Torley navigates out of the stormy seas of scientific analysis into the calmer waters of philosophical discourse which is his specialty. He argues that one can never prove design. On this point he is correct, if by prove one means demonstrating with mathematical certainty. The ID program does not claim to offer the type of absolute proof a mathematician would use to demonstrate the truth of the Pythagorean Theorem. Instead, we are arguing that the identification of design is an inference to the best explanation  which can be made with the same confidence one would have in identifying design in the pattern of faces on Mount Rushmore or in a signal from space which contained the schematics of a spaceship.


The skeptic could always argue that some materialistic explanation might eventually be found to explain those patterns, so design cannot be proven. Yet, the identification of design is still eminently reasonable. The evidence for design in the simplest cell is unambiguous since it contains energy conversion technology, advanced information processing, and automated assembly of all of its components, to name just a few features. The real issue is not the evidence but whether people’s philosophical assumptions would allow them to deny the preposterous and embrace the obvious.