
Saturday, 24 March 2018

All in the family?

Computer Software Sheds Light on Human and Chimp DNA Similarity
Walter Myers III

Recently I had the opportunity to hear Discovery Institute’s Stephen Meyer provide an update on the progress and current state of the theory of intelligent design. At the end of Meyer’s lecture, he took questions from the audience. Inevitably, the question came up about humans and chimpanzees with their “98 percent” similarity in DNA. Isn’t that evidence in favor of evolution and against design?

Meyer’s reply was to compare DNA code to differing computer programs that share an underlying code base. As a professional software engineer with more decades of experience than I care to admit here, I can attest to the accuracy of Meyer’s comparison. As he demonstrated in his book Signature in the Cell, the cell is a microscopic factory bustling with the activity of thousands of tiny machines built from the instructions provided by DNA code in the nucleus of the cell.

I am not going to enter into the debate about what precisely is the percentage of similarity in DNA between humans and chimps. That is wholly immaterial to the point I want to make. Instead, let’s see how the analogy that Meyer presented holds up by providing more depth and color using a practical example from the hardware and software most of us use in everyday life. Everyone reading this post (I suspect) is using a browser on either a computer, tablet, or smartphone. The device you are using has something installed on it called an operating system (OS). That is defined as “the collection of software that directs a computer’s operations, controlling and scheduling the execution of other programs, and managing storage, input/output, and communication resources.” The operating system provides all of the underlying functions necessary for the browser or any other application software (program) you may access on your device. Whatever the device you are using, the OS consists of tens of millions of lines of code. For example, the Windows operating system is estimated to have in excess of 50 million lines of code.

The diagram below represents the architecture of Windows NT, which is the line of operating systems produced and sold by Microsoft. Actually this diagram is a bit dated as Windows has “evolved” quite a bit with new features since 2000, but the fundamental concepts have not changed. It’s not essential that you understand this in full, but note the various subsystems that make up a modern OS.

Specifically, you have the “kernel,” which is the core of the OS connecting application software to the underlying hardware. The kernel exercises complete control over the system and is fully protected from user applications, providing a set of well-defined interfaces by which an application can interact with the underlying services. You can liken the kernel here to the nucleus of a cell, which maintains the security of DNA and controls the functions of the entire cell by regulating gene expression. On top of the kernel, you have a “user” mode coordinating with the kernel that provides higher-level services such as your user interface, authentication mechanism (for logging in), and the environment in which your application code runs (in this case, your browser). This would be analogous to the working proteins in the cell, the running code performing the everyday work of the cell.


Now let’s focus further on application code. While the OS is written by an OS vendor such as Microsoft, Apple, or one of the various companies behind open-source Linux distributions, applications are written by software developers. Applications themselves can also run to millions of lines of code, depending upon the complexity and functionality of the application itself. What software developers have discovered over the decades, however, is that there are specific functions or patterns that developers perform over and over, and thus a considerable part of the software business consists of “third-party” developers writing and selling reusable “libraries” that make work easier for other developers. For example, in a typical application, you might have a library that assists with building the user interface, a library for database access, or a library for communications over a wireless network.
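To make the idea of a reusable library concrete, here is a minimal sketch in Python (a hypothetical illustration, not code from any of the apps named above). The application contributes a handful of lines; the sqlite3 library it calls into, and the SQLite database engine beneath it, supply the thousands of lines that do the real work:

```python
# A miniature example of library reuse: rather than writing database
# code from scratch, the application calls a reusable library --
# here, Python's built-in sqlite3 module (using an in-memory database).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("Ada", "ada@example.com"))

# One line of application code triggers thousands of lines of library code.
rows = conn.execute("SELECT name, email FROM users").fetchall()
print(rows)
```

The same division of labor holds for user-interface toolkits and networking stacks: the application developer writes the distinctive top layer and reuses everything beneath it.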

Applications on a smartphone, such as Facebook, Instagram, or Snapchat, can be thousands of lines of code, accessing component libraries that provide services made up of thousands or millions of lines of code, and of course accessing the aforementioned millions of lines of code in the underlying OS.

Now, comparing this to humans and chimps, what do we find? While much of the DNA code may be the same, the parts that are not the same have significant differences. The programs I described above, such as Facebook, Instagram, or Snapchat, have different purposes, yet they all depend on the same OS that consists of tens of millions of lines of code. To be specific, let’s say you are using an iPhone with iOS 11 (the Apple mobile OS) installed. iOS is estimated to take up about 4 GB of space on your iPhone. Facebook takes up about 297 MB. Snapchat is about 137 MB. Instagram is about 85 MB. Respectively, that’s 7.4 percent, 3.4 percent, and 2.1 percent of the size of iOS. Now would anyone say that Facebook, Instagram, and Snapchat are pretty much the same thing since they are each well over 90 percent the same? Of course not. It’s not so different with humans and chimps. In the case of these programs, the vast majority of their total code base is shared, yet each is a distinct creative expression that leverages a shared base of code. In the case of humans and chimps, one would expect a designer to use shared code where functions are the same, and different (new) code where functions are different. When we examine computer programs, which are the inventions of human minds, why would they not reflect the mind of the designer that wrote the code to produce humans, chimps, and every other biological organism?
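The size comparison above can be checked with a few lines of Python. The figures are the approximate ones quoted in the text (iOS treated as 4,000 MB), so the percentages are rough estimates rather than exact measurements:

```python
# Rough size comparison of three iOS apps to iOS itself, using the
# approximate figures quoted above. 4 GB is treated as 4,000 MB.
OS_SIZE_MB = 4000

app_sizes_mb = {"Facebook": 297, "Snapchat": 137, "Instagram": 85}

for app, size_mb in app_sizes_mb.items():
    pct = 100 * size_mb / OS_SIZE_MB
    print(f"{app} is roughly {pct:.1f}% of the size of iOS")
```

Each app is a small fraction of the total installed code, which is the point of the analogy: the overwhelming majority of the bytes on the device are shared infrastructure.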

There is a further relevant analogy between application software and DNA code. In biological organisms, not all genes are expressed in every cell. For example, there are specific genes active in liver cells, specific genes active in heart muscle cells, and specific genes active in brain cells. Different cell types express themselves in both appearance and function. So not all of the DNA code is in use in each cell. Additionally, environmental factors affect what genes are expressed in a group of cells, allowing an organism to respond in various ways to the situations in which it finds itself. Similarly, with software programs, not all pathways to all code are in use. There are “settings,” whether set by the user or programmed automatically in the application by the developer, that determine how a program will individually function. For example, when a user changes the privacy, language, or chat settings in the Facebook mobile app, it modifies the many pathways the code may execute. Or if a malevolent user tries to log in to a program multiple times, attempting to hack into a user account, the program will itself execute a code pathway to lock the malevolent user out and notify the legitimate user.
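The point about settings and code pathways can be sketched in a few lines of Python (a hypothetical illustration only, not real Facebook code). The program contains many branches, but configuration values and runtime events determine which one actually executes on a given call, much as a cell expresses only some of its genes:

```python
# Hypothetical sketch: settings and runtime events select which code
# paths execute; most of the program's branches lie dormant on any call.
MAX_FAILED_LOGINS = 3  # a "setting" chosen by the developer

def handle_login(account, password_ok, settings):
    """Return the action taken; only one pathway runs per call."""
    if not password_ok:
        account["failures"] += 1
        if account["failures"] >= MAX_FAILED_LOGINS:
            return "lock_account_and_notify_owner"  # defensive pathway
        return "retry"                              # ordinary failure pathway
    if settings.get("language") == "es":
        return "show_home_in_spanish"               # localization pathway
    return "show_home"                              # default pathway

account = {"failures": 0}
print(handle_login(account, False, {}))  # retry
print(handle_login(account, False, {}))  # retry
print(handle_login(account, False, {}))  # lock_account_and_notify_owner
```

A change to a single setting, or a run of hostile login attempts, routes execution down entirely different pathways without any of the code itself changing.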

Again, the functions in a computer program reflect the mind of the human designer. In the same way, the functions in the human being programming a computer reflect the mind of the designer of both humans and chimps.

On the evolution of a Darwinist.

How Scott Turner Evolved
David Klinghoffer | @d_klinghoffer

On a new ID the Future episode, Rob Crowther talks with biologist J. Scott Turner about his book Purpose and Desire: What Makes Something “Alive” and Why Modern Darwinism Has Failed to Explain It. Crowther wants to know how Turner and his thinking on evolution…evolved. Turner, of the State University of New York and currently a visiting scholar at Cambridge University, is a really interesting and sympathetic case of a scientist who straddles design and evolutionary thinking. How did he get to be where, intellectually, he is today?

He explains the impact that media coverage of the Dover trial had on him, the smears directed at ID proponents, the trite attacks on “creationism” that seemed to have been preserved in vinegar from the Scopes Monkey Trial eighty years before. Turner met Stephen Meyer and other advocates of intelligent design. He was startled to find that they were quite a different crowd from what you’d imagine based on press coverage and published comments from Darwin defenders. Listen to the podcast or download it here.

Rob Crowther asks Dr. Turner what he’s learned since his book came out, and Turner mentions that it’s been a lesson in how “worldviews” shape and limit thinking. Do they ever.

What is a worldview, though? Sometimes I think the concept is not applied broadly enough. We all tell a story to ourselves about who we are, what kind of people we are, what kind of people it must be who would disagree with us on emotionally charged matters. This goes beyond controversies in biology, of course.

I was listening to an NPR report about — naw, I’m not going to say what it was about, it doesn’t matter. But I was listening to all these voices, the reporter and the people she was interviewing, and I was thinking about how they all sound so remarkably alike. Same manner of speaking, which is echoed by the distinctive production style. The reporter was telling a story, and everyone else was in her story, and she was in theirs, and they were all, transparently, just as pleased with themselves as they could be. That quality of almost giddy self-satisfaction is highly diagnostic. It is diagnostic of someone telling himself a tale, living in the world generated by his tale, but not realizing he is doing so.

Joan Didion famously said that “We tell ourselves stories in order to live.” The problem comes when you cannot identify your personal narrative, cannot step back and see it and yourself objectively. A tendency to uncontrolled storytelling continually molds Darwinist responses to Darwin skeptics. That, I think, is another way of stating the lesson Scott Turner has taken away from the experience of publishing Purpose and Desire.

Octopi v. Darwin and it's not even close.

More on Octopus RNA Editing — A Problem for Neo-Darwinism
Evolution News @DiscoveryCSC

Eric Metaxas at BreakPoint is one of our favorite popular commentators on evolution. In a broadcast, he takes note of our commentary here. As we noted last month, in “Octopus Genetic Editing — Animals Defy Their Own Neo-Darwinism.”
From Metaxas, on how “The Octopus Outsmarts Darwin Again”:

The Tel Aviv researchers found “tens of thousands” of such RNA recoding sites in cephalopods, allowing a creature like the octopus to essentially reprogram itself, adding “new riffs to its basic genetic blueprint.” In other words, these invertebrates don’t care that they didn’t inherit the smart genes. They make themselves smart, anyway.

Of course, an animal can’t be the author of its own intelligence, and this is not a process anyone believes cephalopods perform consciously. Rather, it is a marvelous piece of “adaptive programming” built into their biology.

Darwinists have tried to spin this feat as “a special kind of evolution.” But the folks at Evolution News cut through this nonsense and identify RNA editing for what it is: “non-evolution.”

“Neo-Darwinism did not make cephalopods what they are,” they write. “These highly intelligent and well-adapted animals edited their own genomes, so what possible need do they have for…blind, random, unguided” evolution?

This is also an emerging field of research, which means it’s possible, in theory, that other organisms make extensive use of RNA editing, and we’re just not aware of it, yet.

If, as one popular science website puts it, other creatures can “defy” the “central dogma” of genetics, the implications for Darwin’s “tree of life,” and his entire theory, are dire.

But if cephalopods and the complex information processing that makes them so unique are in fact the result of a Programmer — of a Designer — the waters of biology become far less inky.
A friend asks if this phenomenon is an example of Lamarckism, according to which organisms evolve by adapting to their environments and then passing on newly acquired characteristics to their offspring. We wouldn’t call it that, but we do call it a problem for neo-Darwinism. Among other reasons, that’s because it reveals that organisms need much more information than is provided by DNA sequences. Therefore, DNA mutations cannot provide sufficient raw materials for evolution.

This latest research is impressive, but RNA editing is not new. As Eric Metaxas smartly anticipates, there is indeed extensive RNA editing in other organisms, too — including humans.

Care for documentation? Find it here:

Peng Z, Cheng Y, Tan BC, Kang L, Tian Z, et al. (2012) Comprehensive analysis of RNA-Seq data reveals extensive RNA editing in a human transcriptome. Nature Biotechnology 30:253-260
Bahn JH, Lee JH, Li G, Greer C, Peng G, et al. (2012) Accurate identification of A-to-I RNA editing in human by transcriptome sequencing. Genome Research 22:142-150
Sakurai M, Ueda H, Yano T, Okada S, Terajima H (2014) A biochemical landscape of A-to-I RNA editing in the human brain transcriptome. Genome Research (January 9, 2014)

That would make the problem for Darwinism even more acute than Eric suggests.

On the war against the sacred name: The Watchtower Society's commentary.

The Fight Against God’s Name:
The Fight Against God’s Name:
His name was Hananiah ben Teradion. He was a Jewish scholar of the second century C.E., and he was known for holding open meetings where he taught from the Sefer Torah, a scroll containing the first five books of the Bible. Ben Teradion was also known for using the personal name of God and teaching it to others. Considering that the first five books of the Bible contain the name of God more than 1,800 times, how could he teach the Torah without teaching about God’s name?

Ben Teradion’s day, however, was a dangerous time for Jewish scholars. According to Jewish historians, the Roman emperor had made it illegal under penalty of death to teach or practice Judaism. Eventually, the Romans arrested Ben Teradion. At his arrest he was holding a copy of the Sefer Torah. When responding to his accusers, he candidly admitted that in teaching the Bible, he was merely obeying a divine command. Still, he received the death sentence.

On the day of his execution, Ben Teradion was wrapped in the very scroll of the Bible that he was holding when arrested. Then he was burned at the stake. The Encyclopaedia Judaica says that “in order to prolong his agony tufts of wool soaked in water were placed over his heart so that he should not die quickly.” As part of his punishment, his wife was also executed and his daughter sold to a brothel.

Although the Romans were responsible for this brutal execution of Ben Teradion, the Talmud* states that “the punishment of being burnt came upon him because he pronounced the Name in its full spelling.” Yes, to the Jews, pronouncing the personal name of God was indeed a serious transgression.

The Third Commandment:
Evidently, during the first and second centuries C.E., a superstition regarding the use of God’s name took hold among the Jews. The Mishnah (a collection of rabbinic commentaries that became the foundation of the Talmud) states that “one who pronounces the divine name as it is spelt” has no portion in the future earthly Paradise promised by God.

What was the origin of such a prohibition? Some claim that the Jews considered the name of God too sacred for imperfect humans to pronounce. Eventually, there was a hesitancy even to write the name. According to one source, that fear arose because of a concern that the document in which the name was written might later end up in the trash, resulting in a desecration of the divine name.

The Encyclopaedia Judaica says that “the avoidance of pronouncing the name YHWH . . . was caused by a misunderstanding of the Third Commandment.” The third of the Ten Commandments given by God to the Israelites states: “You must not take up the name of Jehovah your God in a worthless way, for Jehovah will not leave the one unpunished who takes up his name in a worthless way.” (Exodus 20:7) Hence, God’s decree against the improper use of his name was twisted into a superstition.

Surely, no one today claims that God would have someone burned at the stake for pronouncing the divine name! Yet, Jewish superstitions regarding God’s personal name still survive. Many continue to refer to the Tetragrammaton as the “Ineffable Name” and the “Unutterable Name.” In some circles all references to God are intentionally mispronounced to avoid violating the tradition. For example, Jah, or Yah, an abbreviation for God’s personal name, is pronounced Kah. Hallelujah is pronounced Hallelukah. Some even avoid writing out the term “God,” substituting a dash for one or more letters. For instance, when they wish to write the English word “God,” they actually write “G-d.”

Further Efforts to Hide the Name
Judaism is by no means the only religion that avoids using the name of God. Consider the case of Jerome, a Catholic priest and secretary to Pope Damasus I. In the year 405 C.E., Jerome completed his work on a translation of the entire Bible into Latin, which became known as the Latin Vulgate. Jerome did not include God’s name in his translation. Rather, following a practice of his time, he substituted the words “Lord” and “God” for the divine name. The Latin Vulgate became the first authorized Catholic Bible translation and the basis for many other translations in several languages.

For instance, the Douay Version, a 1610 Catholic translation, was basically a Latin Vulgate translated into English. It is no surprise, then, that this Bible did not include God’s personal name at all. However, the Douay Version was not just another Bible translation. It became the only authorized Bible for English-speaking Catholics until the 1940’s. Yes, for hundreds of years, the name of God was hidden from millions of devoted Catholics.

Consider also the King James Version. In 1604 the king of England, James I, commissioned a group of scholars to produce an English version of the Bible. Some seven years later, they released the King James Version, also known as the Authorized Version.

In this case too, the translators chose to avoid the divine name, using it in just a few verses. In most instances God’s name was replaced by the word “LORD” or “GOD” to represent the Tetragrammaton. This version became the standard Bible for millions. The World Book Encyclopedia states that “no important English translations of the Bible appeared for more than 200 years after the publication of the King James Version. During this time, the King James Version was the most widely used translation in the English-speaking world.”

The above are just three of the many Bible translations published over the past centuries that omit or downplay the name of God. It is no wonder that the vast majority of professed Christians today hesitate to use the divine name or do not know it at all. Granted, over the years some Bible translators have included the personal name of God in their versions. Most of these, however, have been published in more recent times and with minimal impact on the popular attitudes toward God’s name.

A Practice in Conflict With God’s Will:
The widespread failure to use God’s name is based strictly on human tradition and not on Bible teachings. “Nothing in the Torah prohibits a person from pronouncing the Name of God. Indeed, it is evident from scripture that God’s Name was pronounced routinely,” explains Jewish researcher Tracey R. Rich, author of the Internet site Judaism 101. Yes, in Bible times God’s worshipers used his name.

Clearly, knowing God’s name and using it brings us closer to the approved way of worshiping him, the way he was worshiped in Bible times. This can be our first step in establishing a personal relationship with him, which is much better than simply knowing what his name is. Jehovah God actually invites us to have such a relationship with him. He inspired the warm invitation: “Draw close to God, and he will draw close to you.” (James 4:8) You may ask, however, ‘How could mortal man enjoy such intimacy with Almighty God?’ The following article explains how you can develop a relationship with Jehovah.

  Hallelujah:
   What comes to your mind when you hear the term “Hallelujah”? Perhaps it reminds you of Handel’s “Messiah,” a musical masterpiece from the 1700’s that features the dramatic Hallelujah chorus. Or you may think of the famous American patriotic song “The Battle Hymn of the Republic,” also known as “Glory, Hallelujah.” Surely, from one source or another, you have heard the word “Hallelujah.” Perhaps you even use it from time to time. But do you know what it means?

Hallelujah—The English transliteration of the Hebrew expression ha·lelu-Yahʹ, meaning “praise Jah,” or “praise Jah, you people.”
Jah—A poetic shortened form of the name of God, Jehovah. It appears in the Bible more than 50 times, often as part of the expression “Hallelujah.”

God’s Name in Your Name?

Many Bible names are still popular today. In some cases the original Hebrew meaning of these names actually included the personal name of God. Here are a few examples of such names and their meaning. Perhaps your name is one of them.

Joanna—“Jehovah Has Been Gracious”

Joel—“Jehovah Is God”

John—“Jehovah Has Shown Favor”

Jonathan—“Jehovah Has Given”

Joseph—“May Jah Add”*

Joshua—“Jehovah Is Salvation”

Bible Terms for God

The Hebrew text of the Holy Scriptures uses numerous terms for God, such as Almighty, Creator, Father, and Lord. Yet, the instances in which he is referred to by his personal name far outnumber all of the other terms combined. Clearly, it is God’s will that we use his name. Consider the following list of terms as they appear in the Hebrew Scriptures.*

Jehovah—6,973 times

God—2,605 times

Almighty—48 times

Lord—40 times

Maker—25 times

Creator—7 times

Father—7 times

Ancient of Days—3 times

Grand Instructor—2 times

On Darwinism's latest revision of the family album.

Another Day, Another “Rewrite” on Human Origins



No doubt others will weigh in on the significance, or lack of it, of the latest ballyhooed news from the world of human origins research: the dating of remains identified as archaic Homo sapiens to 300,000+ years ago, which is about 100,000 years older than the previously known oldest specimens.

The cache of fossils, meager as usual, was found in North Africa — Jebel Irhoud, Morocco — which was one surprise. From the story:
Newly discovered fossils in Africa have pushed back the age we know modern humans roamed the Earth by roughly 100,000 years — and injected profound doubt into what we thought we knew about where humanity first arose.


“This material represents the very roots of our species — the oldest Homo sapiens ever found in Africa or elsewhere,” said Jean-Jacques Hublin, an anthropologist at the Max Planck Institute for Evolutionary Anthropology, in a press conference this week. Hublin was the lead researcher for one of the two studies published on the discoveries in yesterday’s issue of the journal Nature.

Up until now, the oldest definitive modern human fossils were known to be around 200,000 years old, and had been found in modern-day Ethiopia. These discoveries helped cement the dominant theory among anthropologists in recent decades that modern humans, Homo sapiens, evolved in East Africa and then migrated north into Asia and Europe. This region has therefore been dubbed the “cradle of humankind” (though South Africa also lays claim to the title).

“Our results challenge this picture in many ways,” Hublin said. The fossils his team studied come from a cave in central Morocco, thousands of miles away from East Africa. They suggest that, by 300,000 years ago, modern humans had already spread across Africa. Recall that the continent was much easier to cross then, with lush grasslands and lakes residing where the forbidding Sahara Desert lies today.

In the mainstream science venues I don’t see any direct acknowledgment of the challenge: given conventional assumptions about human ancestry, this means considerably less time — 100,000 years less — for unguided evolutionary processes to accomplish the transition to us. Pushing origins back in time — whether of our species, whales, or life itself — is rarely good news for evolution.

The hive mind of science journalism tends not to notice such things. Speaking of which, do you ever observe how many headlines on stories like this all seem to have been written by the same person?

“Oldest Fossils of Homo Sapiens Found in Morocco, Altering History of Our Species” — New York Times
“Earliest fossil evidence of Homo sapiens found in Morocco, rewriting the story of our species” — Los Angeles Times
“A new fossil discovery in Morocco will rewrite the history of human evolution” — Quartz
“New Fossil Discovery Rewrites History of First Human Beings” — ExtremeTech
“The story of human evolution in Africa is undergoing a major rewrite” — Vox
“Oldest Homo sapiens fossil claim rewrites our species’ history” — Nature
It would be an interesting study for another time to dig down and figure out who gave them this language about “rewriting” history to begin with, which, once given, is taken up and repeated by a range of publications.

The word choice, though, is ironic. In a normal editorial process, “rewriting” is typically done to bring enhanced clarity. But when it comes to human origins, the truth is much closer to what biologist Jonathan Wells reminded us of the other day.

The problem with such fossil finds is that they never provide the lasting clarity about human origins that scientists, and the public, crave. “Instead of ending up with a nice clean line from an apelike creature, a chimpanzee-like creature,” says Dr. Wells, a U.C. Berkeley-trained embryologist, “each discovery complicates matters even more than they were complicated before.”
The more that experts on human evolution know about our origins, the less they seem to actually understand. Given evolutionary presuppositions, the direction of research and learning is not from lesser to greater clarity, but just the opposite. The result is, as Scientific American more candidly puts it, a “mess” (“Ancient Fossils from Morocco Mess Up Modern Human Origins”). If that is the case, maybe the problem is with the presuppositions.

Wednesday, 21 March 2018

Primeval nanotech v. Darwinism.

Irreducible Complexity in Molecular Machine Assembly
Evolution News @DiscoveryCSC

We know that many molecular machines are irreducibly complex (IC) in their operation. Even more IC is the process of assembling them in the cell. A good example of this is the process of building our good old standby machine, ATP synthase (review our animation to recognize the F0 rotating part and the F1 synthesis part).

A new “tour de force” paper by He et al. in the Proceedings of the National Academy of Sciences (PNAS), co-authored by Nobel laureate John E. Walker (who at age 77 is still researching these tiny rotary engines), describes new insights into how these multi-part machines are assembled. In a companion Commentary in PNAS, three scientists (Song, Pfanner, and Becker) put it bluntly: “The assembly of the mitochondrial ATP synthase is a complicated process that involves the coordinated association of mitochondrially and nuclear encoded subunits.” Here’s a taste of what they mean (don’t worry; this won’t be on the test):

Based on their findings [He et al.], they propose an elegant model of how the membrane domain of human ATP synthase is built (Fig. 1, Upper). In one branch, an F1–c-ring intermediate associates with the peripheral stalk and the supernumerary subunits e and g. In the other branch, the F1 domain first assembles with the peripheral stalk and supernumerary subunits e, g, and f. Both pathways merge in a key assembly intermediate that contains the F1 domain, the c-ring, the peripheral stalk, and the supernumerary subunits e, g, and f. In all these vestigial [i.e., incomplete] ATP synthase complexes, the inhibitory protein IF1 is enriched to prevent ATP hydrolysis by the uncoupled ATP synthase. The presence of the supernumerary subunits e, g, and f is crucial for the subsequent integration of the mitochondrially encoded subunits ATP6 and ATP8 that are stabilized by addition of 6.8PL. Thus, the proton-conducting channel between ATP6 and the c-ring is formed. At this stage, ATP synthesis is coupled to the proton-motive force and the inhibitory protein IF1 is released. Finally, DAPIT is added to the assembly line to promote dimerization and oligomerization of the ATP synthase.

Whether or not you can follow the jargon is not as important as what they witnessed:  an “elegant” process that requires precise timing and coordination. Different machine parts must arrive on schedule, and assemble into intermediate (vestigial) forms that are nonfunctional alone. An inhibitor protein makes sure the machine doesn’t switch on ahead of schedule. The proton-conducting channel has to form just right so that it doesn’t “leak” protons. Only when all the parts are ready does the machine begin to rotate, but even then, the work isn’t complete. Another player is “added to the assembly line” to position the machines on the folds of the mitochondrial membrane (called cristae) at precise angles and spacings for optimum productivity.

The parts must arrive at the construction site on time. Some of them come from the nucleus, which must seem like many miles away at the scale of the machine. Some are built locally by genes within the mitochondrial genome. Interestingly, there are differences between yeast and humans regarding which genes are encoded where, and in what order they are assembled. But the proof of the pudding is in the respiration after eating: both versions of the machine work efficiently for their respective organisms.

The intermediate structure, somewhat like a scaffold on which the machine will be built, is also irreducibly complex:

We have shown that the assembly of human ATP synthase in the inner organellar membrane involves the formation of a monomeric intermediate made from 25 nuclear-encoded proteins into which the two mitochondrially encoded subunits are inserted and then sealed by association of another nuclear-encoded protein, thereby dimerizing the complex. Association of a final nuclear protein oligomerizes the dimers back-to-face along the cristae edges.

Notice that parts from the different genomes have to work tightly together. It’s like a manufacturing plant receiving parts locally and from India that have to meet agreed-on specifications to match. There are also rules for import, just like for parts arriving from a far country. The nuclear-encoded parts have to pass through two distinct checkpoints (the inner and outer membranes of the mitochondrion), which each have their robotic security personnel to validate them and facilitate their transport to the inside.

Previous work has shown how the completed “factory” of machines is organized within the mitochondrion. A specific nuclear protein seals them in two’s (dimers) at an angle, such that the rotating F0 proton pumps can maximize the intake of proton fuel, while the F1 parts, where ATP synthesis occurs, are farther apart to not crowd the output molecules. A “final nuclear protein” joins the dimers together (oligomerizes them) along the membrane edges. The longitudinal spacing is also tightly controlled, so that they don’t crowd each other. Every point of the assembly is programmatically directed. When everything is completed, rows of ATP synthase motors are arranged like turbines in a hydroelectric plant, feeding off a flow of protons produced by upstream machines in the respiration transport chain.

Ribosome Assembly

Viewers of cellular animations like those in Unlocking the Mystery of Life could never forget the assembly-line process inside the ribosome, where precisely-sequenced messenger RNAs are matched with transfer RNAs carrying amino acids to form proteins. The entrance tunnels for the ingredients and the exit tunnels for the polypeptides, and everything in between, must be positioned exactly for correct operation. The ribosome is certainly one of the most stunning examples of information translation in all of nature. But how is the ribosome itself built?

Nature has provided an early, unedited version of a manuscript by Sanghai et al. on ribosome assembly. Although it has been accepted for publication, it is still subject to editorial revision. The subject matter, though, appears to show another stunning case of irreducible complexity in the construction of this important molecular machine. Here’s the Abstract:

Early co-transcriptional events of eukaryotic ribosome assembly result in the formation of precursors of the small (40S) and large (60S) ribosomal subunits. A multitude of transient assembly factors regulate and chaperone the systematic folding of pre-ribosomal RNA subdomains. However, due to limited structural information, the role of these factors during early nucleolar 60S assembly is not fully understood. Here we have determined cryo-EM reconstructions of the nucleolar pre-60S ribosomal subunit in different conformational states at resolutions up to 3.4 Å. These reconstructions reveal how steric hindrance and molecular mimicry are used to prevent both premature folding states and binding of later factors. This is accomplished by the concerted activity of 21 ribosome assembly factors that stabilize and remodel pre-ribosomal RNA and ribosomal proteins. Among these factors, three Brix-domain proteins and their binding partners form a ring-like structure at rRNA domain boundaries to support the architecture of the maturing particle. Mutually exclusive conformations of these pre-60S particles suggest that the formation of the polypeptide exit tunnel is achieved through different folding pathways during subsequent stages of ribosome assembly. These structures rationalize previous genetic and biochemical data and highlight the mechanisms driving eukaryotic ribosome assembly in a unidirectional manner.

The requirements of IC are met in this description: “a multitude of transient assembly factors” regulate and systematically fold the proteins that will be used to construct the machine. The authors mention “21 ribosome assembly factors that stabilize and remodel” the RNA and proteins before the machine is even operational. Inside the growing ribosome, a scaffold holds factors for the exit tunnel in place. Everything is choreographed in time and space with “mechanisms driving… assembly in a unidirectional manner.”

Here we see numerous parts working together on a timeline. The parts do not work individually. You can have all the proteins delivered to the construction site, and nothing will happen without the programmed mechanisms to put them together in order. Some parts hold others in place, others guide the folding of protein parts, and some even prevent premature assembly. All the pathways for assembly of the subdomains are regulated by a master program, so that each group of steps follows a “unidirectional” plan toward the finished product. It’s a marvelous IC assembly process that produces an IC machine. If five parts of a mousetrap are sufficient to indicate IC, how about dozens of parts, all following a programmatic sequence of assembly?

In Unlocking, concerning the assembly of the bacterial flagellum, Paul Nelson described how the hierarchical IC of machine assembly in the cell challenges Darwinian theory. “In order to construct that flagellar mechanism, or tens of thousands of other such mechanisms in the cell, you require other machines to regulate the assembly of these structures. And those machines themselves require other machines for their assembly.” Jonathan Wells nailed the point by saying, “If even one of these pieces is missing, or put in the wrong place, your motor isn’t going to work. So this apparatus to assemble the flagellar motor is itself irreducibly complex. In fact, what we have here is irreducible complexity all the way down.”

Ancient whale returns for second helping of Darwinists' homework.

Of Whales and Timescales
Andrew Jones

Joshua Swamidass, assistant professor in the Department of Pathology and Immunology at Washington University, has responded to an Evolution News article about whale evolution. The original article concluded:

We don’t find the “pattern” that evolution predicts “should be found in the fossil record at certain times.” Rather, we find that truly aquatic whales appear abruptly. And even if we accept some of the fossils as “intermediates” between whale and land mammals, there is not enough time for the complex adaptations needed for whales’ fully aquatic lifestyle to evolve. Whatever the correct explanation is for the origin of whales, unguided evolutionary mechanisms are not the answer.

Swamidass writes:

Looking at this progression [of skulls] we uncover an amazing fact. Surprisingly, whales have the same body plan as a terrestrial mammal! It’s the same body plan, with several intermediate forms. Looking at several features (e.g. ears, bone density, teach [sic]), we can see this transition beautifully. Look how we can see the nostrils slowly move back to the top of the head…

Yes, it is beautiful. One adapted for land, another for water, and one is intermediate. But take care; nothing is actually moving in those pictures. Any transition is in the interpretive imagination of the beholder.

Now let’s get back to the claim that millions of years is “not enough time.” Is there any genetic or mathematical analysis to back up this conjecture? What types of genetic changes are required for whale evolution, and how likely or unlikely are they?

Consider a paper published in PLOS Computational Biology, “The Time Scale of Evolutionary Innovation.” The authors explore how long it should take for evolution to make a complex coordinated change to a sequence. They find that mutation alone would be little different from creating a completely fresh sequence each time using random letters, but that if natural selection is acting to “regenerate” the original sequence, and if the original sequence happens to be near the target, then evolution is much more likely to make the transition. This should be common sense, I think. Note the core result: a sequence of length L requiring only k specific coordinated changes will require on the order of L^(k+1) trials. The authors describe this as “polynomial” because it is polynomial in L, but it is exponential in k.

What this means is: if it takes 100 generations for one specific mutation to occur, it will take (at least) 10 thousand generations for a specific set of 2 mutations to occur, and 100 million generations for a specific set of 4 mutations. At human generation lengths (roughly 20 years), that is 2 billion years. Two billion years, for a 4-letter “innovation.” That puts a hard limit on what kind of magic we can expect from evolution. This basic problem is then greatly exacerbated by population-genetic effects: each mutation must not only occur, it must become fixed, or at least well established, in the population, and there is no selection to help until the last mutation in the set arrives.
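The arithmetic above can be sketched in a few lines of Python. This is a toy illustration of the simplified model used in this paragraph (100 generations per specific mutation, and an assumed 20-year human generation time), not code from the paper:

```python
# Toy model of the waiting-times arithmetic: assume one specific
# mutation takes ~100 generations, and each additional coordinated
# mutation multiplies the wait by another factor of 100.
PER_MUTATION_GENERATIONS = 100  # assumed wait for one specific mutation
GENERATION_YEARS = 20           # assumed human generation time (years)

def waiting_generations(k: int) -> int:
    """Generations to accumulate k specific coordinated mutations
    under the multiplicative toy model above."""
    return PER_MUTATION_GENERATIONS ** k

for k in (1, 2, 4):
    gens = waiting_generations(k)
    print(f"k={k}: {gens:,} generations (~{gens * GENERATION_YEARS:,} years)")
```

For k = 4 this reproduces the figure in the text: 100 million generations, or about 2 billion years at human generation lengths.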

Now consider the mutations that actually have occurred in recent human history. Some have been interesting, including significant tweaks to pigmentation and to milk digestion, but none of them are spectacular (no X-Men), and certainly none have constructed new biochemical systems or new healthy morphology. The waiting-times problem tells us that all the mutations in the history of the human species must have been similarly banal. Think about that for a moment. Are you surprised? You should be, if you believe we evolved from something like an ape. Evolution has to work one step at a time. It cannot do the kind of complex coordinated magic that a human designer or engineer can. If you believe humans evolved, you have to believe it can happen without any complex coordinated changes at all (in this context, “complex” means just 4 or more specific letters at the same time, not 4 new proteins). In fact, the exponential character of the waiting-times problem tells us that all of evolution must have been similarly limited, right back to the Cambrian explosion. In turn, that raises the question of how the radical innovations of the Cambrian explosion could have occurred.

From a Batmobile to a Yellow Submarine

The Evolution News article argued that for a land mammal to become a whale, or a Batmobile to become a Yellow Submarine, it would require multiple coordinated changes. To many people this would be a trivial and common sense assumption, even without the detail given in the article. However, citing another paper, “Molecular evolution tracks macroevolutionary transitions in Cetacea,” Swamidass pushes back:

It is remarkable how many of the changes required for whale evolution are caused by loss of function mutations (which end causing “pseudogenes”), or small tweaks to proteins. This is one of the big surprises of mammalian evolution. Large changes can take place with tweaks to the genetic code. Eyes adapt to underwater vision by losing a rhodopsin gene. Hind Limbs are lost with the loss of a homeobox gene. Taste buds are lost when two genes are lost. Smell receptors are almost entirely lost in most species too. In all these cases, we see remnants of the broken genes, and in many cases the details of how these losses increase function are well understood.

This is all true. Overall efficiency can indeed be increased by losing unused functional components. On a design view, it makes sense to deactivate things that are not being used. Often this needs no more than the flip of a switch, and it is no great challenge for evolution either. On the other hand, why would the random loss of information produce a functional body plan? Researchers have created legless mice by knocking out a Hox gene, in an effort to understand snakes, but the resulting mice were simply paralyzed and could not mate. Also, note that at some point evolution has to explain the origin of all the proteins, Hox genes, rhodopsins, and receptors that have been lost, and that is rather more difficult. An evolutionary process that creates nothing new will soon run out of other organisms’ proteins to borrow.

Remarkably, it does not appear any new enzymes or de novo genes are required in whale evolution. It appears that small tweaks to existing proteins, or loss or alteration of the function of existing genes, account for the changes we see at this point.

True, but that is not where the challenge to whale evolution lies. And why is it remarkable to see no new genes? It turns out that a large number of genes are taxonomically restricted, or ORFan, genes. That means they seem to appear without evolutionary history in the twigs and leaves of the tree of life. Moreover, some even turn out to be essential, which would be very odd if they had been added last by evolution. The existence of these genes is a common problem elsewhere in the evolutionary story, even though it appears not to be relevant to whales. Protein-coding genes are hard to explain when they appear de novo; see Doug Axe’s work as well as this recent EN article.

Also, there does not appear to be any reason that a large number of these changes must happen at the same time. They appear gradually in the tree, and it’s not clear at all why they would need to be “coordinated”. They do not appear to need to occur at the same place and time to be useful. So this does not make these transitions unlikely.

This is where we have to disagree. Some of the changes listed might be independent, small, and easy, but there are also some pretty massive, complex ones that would seem to need coordination. For one example, losing tooth enamel does not make baleen plates. For another, whale testes are inside the body. In itself this appears to be a trivial change, and it makes good design sense in terms of streamlining. That is, until you try to implement it and find that mammalian testes become infertile if kept too warm; now you need a cooling system, or else a redesign of the reproductive system. It is not so trivial any more. And the changes do need to be coordinated. What selective advantage is there in a cooling system? None, unless you have testes there. What selective advantage is there in internal testes? None, unless a cooling system is there. It turns out that dolphins and whales have mysteriously acquired an elaborate counter-current cooling system that keeps the testes at the same cool temperature as the fins! That system is not trivial, and it is not going to evolve with just one or two mutations.

A Design Perspective

Considering this from a design perspective, and speaking as someone experienced in doing design, it is going to be tough to convince me that one could “evolve” a program through a series of single-letter changes, deletions, and random copy-pasting, all while it continues to compile and function. The notion conflicts with our experience of how complex functional systems actually work.

We have also argued that homoplasies constitute evidence of common design. Swamidass argues that they can be explained by convergent evolution:

Also, we also see convergent mutations between whales, bats (echolocation), and beavers (diving adaptations to blood). These “homoplasies” are the rare exceptions to the nested clade pattern of common descent, and are exactly what we expect in evolutionary process, just like we see recurrent mutations in cancer, and convergent evolution in human HLA variation. Everyone agrees that human variation arises by natural processes, and that cancer arises by natural processes, yet we see homeoplasies here too; this is what we expect from common descent.

But another way of looking at it is that evolutionists have adapted their expectations in response to the evidence that homoplasies exist. They have long known about character traits that don’t fit the canonical Darwinian explanation of a branching tree, and convergent evolution has long been proposed as an explanation, but the truth is that the authors of the original paper on bats and whales found this particular result “surprising” and “remarkable.” From my own experience, I remember trying to persuade an evolution evangelist that molecular homoplasies exist between extremely distant species, and he wouldn’t believe me! I will try to explain why.

Now, convergent evolution can happen, but it really depends on the particular circumstances. Convergence in cancer and in HLA is different from convergence between bats and dolphins, because the former cases both involve very high mutation rates coupled with strong selection acting on very small changes. In HLA, the changes are concentrated in a tiny region of the genome. In cancer, strong positive selection acts on mutations that each help the cancer but destroy some normal function.

It is one thing to find weak points where cars tend to break independently in the same place, or even ways in which one breakage leads to another (e.g., the brakes first, then everything else as the car careens off the road). It would be quite strange if cars independently acquired sonar capabilities, and stranger still if the software upgrades were identical.

Imagine you assign a coding exercise to computer-science students. It is quite possible that two students would come to roughly the same solution, since there are likely only a few good solutions. That is how convergent evolution is supposed to work. However, what if they had not only the same general solution, but identical code too? Or imagine two history students write an essay about the causes of World War I. It is possible they come to the same conclusion. But if you see identical prose, you have to suspect plagiarism: it strongly suggests that the text or code was designed once and then used multiple times.

And that is why my friend did not believe me; because he did not believe the coding sequences would converge. It is easy to imagine, if evolution could find complex solutions at all, that it could find something similar again, but it is much harder to imagine that evolution would converge on the same code, since the mutations that write it are supposed to be random, especially if the code has been diverging for some time. However, now that we have found that there are molecular homoplasies at great taxonomic distance, the committed evolution-believers are surprisingly unfazed: all it means, they argue, is that there must be only one solution that works and natural selection finds it every time (while listening to Wagner). That’s an interesting theory, but can you prove it? If the evolution of complex traits is so predictable and reliable, it seems we should be able to set it up and see it happening.
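The intuition about identical sequences can be made concrete with a toy calculation. This is my own back-of-envelope illustration, assuming a four-letter DNA alphabet and uniformly random, independent changes:

```python
# If mutations were random and independent, the probability that two
# lineages land on the *identical* stretch of sequence shrinks
# geometrically with its length (four-letter DNA alphabet assumed).
def p_identical(length: int, alphabet_size: int = 4) -> float:
    """Probability that two uniformly random sequences of the given
    length over the alphabet are letter-for-letter identical."""
    return (1.0 / alphabet_size) ** length

print(p_identical(10))   # ~9.5e-07
print(p_identical(30))   # ~8.7e-19
```

On this toy model, matching even a short stretch by independent chance is wildly improbable, which is why identical code in two student submissions, or identical sequence in two distant genomes, suggests a single origin rather than independent discovery. The evolutionist’s rejoinder, of course, is that selection is not random, which is precisely the “only one solution” theory discussed above.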

Meanwhile, can you hear the students accused of plagiarism? But sir, it’s the only solution! And we are both geniuses! Hmm. If you are both geniuses, I look forward to your next assignment.

There is a much more parsimonious solution: common design.

Either way, the larger problem is that the changes involved in adapting a generic mammalian template into a whale are certainly not all simple, independent, single-letter changes. It seems obvious that multiple coordinated changes would be needed, and coordinated changes require far longer than mere millions of years.