
Saturday, 24 June 2017

Protein folds v. Darwin.

Escape from Randomness: Can Foldons Explain Protein Functional Shapes?
Evolution News @DiscoveryCSC

Does the subject of protein folding excite you? Read this to see why perhaps it should:

Protein folding is among the most important reactions in all of biology. However, 50 y after C. B. Anfinsen showed that proteins can fold spontaneously without outside help, and despite the intensive work of thousands of researchers leading to more than five publications per day in the current literature, there is still no general agreement on the most primary questions. How do proteins fold? Why do they fold in that way? How is the course of folding encoded in a 1D amino acid sequence? These questions have fundamental significance for protein science and its numerous applications. Over the years these questions have generated a large literature leading to different models for the folding process. [Emphasis added.]
In short, your life depends on protein folding, and the subject provides a classic contest between intelligent design and scientific materialism. That’s enough to make a thoughtful person take notice.

The quoted passage comes from a paper in the Proceedings of the National Academy of Sciences by two biophysicists at the University of Pennsylvania. They review the vast corpus of literature on the subject to assess the best current models for explaining how one-dimensional sequences of amino acids can end up as three-dimensional shapes that perform functional work. To appreciate the challenge, try to assemble a string of beads, some of which have electric charges or attractions to water, that will, when let go, spontaneously fold into a tool. Your cells do something like that all the time, and usually do it right.

Biologic Institute research scientist Douglas Axe has worked on the problem of protein folding for much of his career. He has been joined by another scientist, Discovery Institute’s Ann Gauger, to show why protein folding gives evidence for intelligent design. The subject is also discussed at length in Axe’s most recent book, Undeniable: How Biology Confirms Our Intuition That Life Is Designed (Harper One, 2016).

Here’s the problem for materialism in a nutshell: the number of ways you can assemble amino acids that won’t fold vastly exceeds the ways that will fold. To expect a random process to search “sequence space” (the set of all possible sequences of amino acids) and arrive at one that folds is so improbable that it would likely never happen even across multiple universes. Axe followed Michael Denton’s hunch that “functional proteins could well be exceedingly rare” and put some numbers to it. He determined that there is only “one good protein sequence for every 10^74 bad ones” (Undeniable, p. 57). That is about 10 million billion billion billion times more improbable than Denton’s initial estimate.
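To unpack the arithmetic in that comparison (a reader’s back-of-envelope, working only from the figures quoted above): “10 million billion billion billion” is

\[
10^{7} \times 10^{9} \times 10^{9} \times 10^{9} = 10^{34},
\]

so one good sequence in 10^74 is 10^34 times more improbable than an earlier estimate on the order of one in 10^40; that last figure is inferred from the stated ratio, not quoted from Denton.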

As Axe goes on to say, materialists didn’t exactly put “out of business” signs on their doors when he published his results. That brings us to the current paper — one of the latest attempts to find a way to avoid the implications of design and find a natural, unguided means of searching sequence space for those elusive folds.

The authors, S. Walter Englander and Leland Mayne, know all too well that random search is hopeless. Even in the 1990s, “Levinthal had contributed the seminal observation that a random search could not account for known folding rates.” Most proteins find their native fold extremely rapidly, some in microseconds. Some need a little help from “chaperones” such as GroEL, which lets the polypeptide fold inside a barrel-like chamber. Either way, the authors know that a random search for the proper or “native” fold, even by a correctly sequenced polypeptide, would be far too slow, given the astronomical number of conformations the chain could sample. This led scientists early on to suspect that proteins follow an energy landscape that nudges them toward the native fold, much as a funnel guides ball bearings down a narrow hole. The ball may bounce around in the funnel, but the shape of the energy landscape forces it in the right direction. This is known as energy landscape theory (ELT).
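Levinthal’s point can be illustrated with the standard back-of-envelope calculation known as Levinthal’s paradox (the numbers below are conventional textbook assumptions, not taken from Englander and Mayne): give a 100-residue chain just three backbone conformations per residue and let it test 10^13 conformations per second. Then

\[
3^{100} \approx 5 \times 10^{47} \ \text{conformations}, \qquad
\frac{5 \times 10^{47}}{10^{13}\ \text{s}^{-1}} \approx 5 \times 10^{34}\ \text{s} \approx 1.6 \times 10^{27}\ \text{years},
\]

yet real proteins reach their native folds in microseconds to seconds. Something other than blind sampling must be going on.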

A critical feature of the funneled ELT model is that the many-pathway residue-level conformational search must be biased toward native-like interactions. Otherwise, as noted by Levinthal (57), an unguided random search would require a very long time. How this bias might be implemented in terms of real protein interactions has never been discovered.
The authors are not content with evolutionary just-so stories:

One simply asserts that natural evolution has made it so, formulates this view as a so-called principle of minimal frustration, and attributes it to the shape of the funneled energy landscape. Proteins in some unknown way “know” how to make the correct choices.
Sorry, no dice.

A calculation by Zwanzig et al. at the most primary level quantifies the energy bias that would be required. In order for proteins to fold on a reasonable time scale, the free energy bias toward correct as opposed to incorrect interactions, whatever the folding units might be, must reach 2 kT (1.2 kcal/mol). The enthalpic bias between correct and incorrect interactions must be even greater, well over 2 kcal/mol, because competition with the large entropic sea of incorrect options is so unfavorable. Known amino acid interaction energies, less than 1 kcal/mol (59), seem to make this degree of selectivity impossible at the residue–residue level.
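As a quick unit check on the quoted threshold (standard constants, room temperature assumed; the paper’s “kT” is expressed per mole, i.e., RT):

\[
RT \approx 1.987 \times 10^{-3}\ \text{kcal mol}^{-1}\,\text{K}^{-1} \times 298\ \text{K} \approx 0.59\ \text{kcal/mol}, \qquad
2\,RT \approx 1.2\ \text{kcal/mol},
\]

which matches the 2 kT figure in the passage; the known interaction energies the authors cite, under 1 kcal/mol, fall short of it.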
Are we excited yet? This is getting really interesting. The suspense is growing. With randomness out of the question, what will they do?

They basically take a divide-and-conquer approach. Getting a big polypeptide to fold is too hard, but maybe if they can break the problem down into bite-size chunks, they can get to the target without intelligence. After all, it’s much easier to assemble an afghan if the granny squares come ready-made so that you don’t have to make each one from scratch. “Quantized” in this manner, the problem becomes more tractable.

The structural units that assemble kinetic intermediates are much the same as the cooperative building blocks of the native protein. This strategy separates the kinetic folding puzzle into a sequence of smaller puzzles, forming pieces of the native structure and putting them into place in a stepwise pathway (Fig. 1B). This is the defined-pathway model.
They give the name “foldon” to a small chain of amino acids “perhaps 15 to 35 residues in size” that folds a little bit. If the polypeptide is composed of a number of these prefabricated foldons, maybe the whole protein will find its native fold quickly, descending the funnel in a stepwise fashion. Experiments unfolding and refolding some proteins actually show this kind of stepwise energy landscape. They like that:

The purpose of this paper is to consider the present status of these quite different models and relate them to the central questions of protein folding — how, why, and the encoding problem. We propose to rely on the solid ground of experiment rather than the countless less-definitive suggestions and inferences that have been so often used in this difficult field.
Empirical rigor: what’s not to like about that? So instead of searching for a correct sequence of amino acids all at once, they substitute a sequence of foldons, raising the odds of completing the search in time. Will this work in evolutionary terms?

The opposed defined-pathway model stems from experimental results that show that proteins are assemblies of small cooperative units called foldons and that a number of proteins fold in a reproducible pathway one foldon unit at a time. Thus, the same foldon interactions that encode the native structure of any given protein also naturally encode its particular foldon-based folding pathway, and they collectively sum to produce the energy bias toward native interactions that is necessary for efficient folding.
So how, exactly, did this clever solution emerge without intelligence?

Available information suggests that quantized native structure and stepwise folding coevolved in ancient repeat proteins and were retained as a functional pair due to their utility for solving the difficult protein folding problem.
“Co-evolution” again. So much for empirical rigor. They’re back to just-so storytelling mode. Let’s think this through. Each granny square in the afghan is a product of chance, on the materialist account. Does a black granny square know that it will fit nicely into a complete afghan following a geometrical pattern of black, red, and yellow squares? Unless each granny square has an immediate function, evolution will not preserve it. Similarly, no foldon will be “retained” in the hope that it might someday have “utility for solving the difficult protein folding problem.” The foldon couldn’t care less! It had to be functional right when it emerged.

An intelligent designer could plan foldons as a useful strategy for constructing various complex proteins in a modular way. A designer could even preserve useful foldons, much like a computer programmer writes subroutines to use in other programs. Unless each subroutine actually does something useful for the system as a whole, though, what good is it? Say you have a subroutine that says, “Repeat whatever argument arrives in the input register.” Unless the system needs that function as part of what it’s doing, you can run the subroutine till the cows come home and nothing good will come of it.

In short, the foldon strategy doesn’t improve the odds of success, and it doesn’t solve “the difficult protein folding problem” for the evolutionist. It’s all divide and no conquer.
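A toy calculation shows why dividing the search changes nothing, on the argument as stated here. The sketch below is purely illustrative: the 150-residue length, the 30-residue chunk size, and the 10-percent-per-position tolerance are made-up assumptions, not Axe’s numbers. If each foldon must independently hit a functional subsequence, the probability for the whole chain is just the product of the chunk probabilities, the same as searching the full sequence at once.

# Toy illustration (assumed numbers): splitting a sequence search into
# "foldon"-sized chunks does not change the joint probability if every
# chunk must still be functional on its own.

def p_functional(length, p_per_residue=0.1):
    """Probability that a chain of `length` residues is functional,
    assuming (hypothetically) each position tolerates 10% of amino acids."""
    return p_per_residue ** length

whole = p_functional(150)                      # search the 150-mer at once
chunks = [p_functional(30) for _ in range(5)]  # five 30-residue "foldons"

product = 1.0
for p in chunks:
    product *= p

print(f"whole-chain probability : {whole:.3e}")
print(f"product over 5 foldons  : {product:.3e}")  # same value, up to rounding

The division changes the bookkeeping, not the odds.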

Englander and Mayne make a big deal out of “repeat proteins” that make up about 5 percent of the global proteome. These repeat proteins “have a nonglobular body plan made of small repeated motifs in the 20–40 residue range that are assembled in a linear array.” Are they good candidates for foldons? We know that many proteins contain repetitive structures like alpha helices and beta sheets, but the essence of a functional protein lies not in its repetitive parts but in its aperiodic parts. We’ve seen this requirement in other designed systems, such as language. Sure, sometimes a series of dashes makes a nice separator between paragraphs, but you won’t get much meaning out of purely repetitive sequences. Let’s see if they can do it:

The different families of repeat proteins are very different in detailed structure but within each family the repeats are topologically nearly identical. These observations suggest that repeat proteins arose through repeated duplication at an early stage in the evolution of larger proteins from smaller fragments. Available examples show that globular organization can arise from continued repetitive growth that closes the linear geometry, and by the fusion of nonidentical units, and so would carry forward their foldon-like properties.

The utility of foldons for the efficient folding of proteins might be seen as a dominant cause for the development and retention of a foldon-based body plan through protein evolution. In this view, contemporary proteins came so consistently to their modular foldon-based design and their foldon-based folding strategy because these linked characteristics coevolved. However, the fact that many known foldons bring together sequentially remote segments requires, at the least, some additional mechanism.
This sounds like the evolutionary story that duplicated genes became seeds of new genes. So if we duplicate the line of dashes, and then change some of the dashes to commas, will we get somewhere? Hardly. If we strip out the “mights” and “maybes” of their story, not much is left but the concluding admission that “some additional mechanism” is needed to get folded proteins. (We have one! Intelligence!) And get this: even if you get a polypeptide to fold into a globule, it’s trash unless it actually performs a function.

When scientific materialists began tackling the protein folding problem, they expected that biased energy landscapes leading to deterministic folds would soon be discovered. That didn’t happen.

However, how this propensity might be encoded in the physical chemistry of protein structure has never been discovered. One simply asserts the general proposition that it is encoded in the shape of the landscape and to an ad hoc principle named minimal frustration imposed by natural evolution.
Here they state Axe’s search challenge in their own words:

Quantitative evaluation described above shows that individual residue–residue interaction energies are inadequate for selecting native-like interactions in competition with the large number of competing nonnative alternatives. The assertion that the needed degree of energetic bias is supplied by the shape of an indefinite energy landscape because nature has made it so is — plainly said — not a useful physical–chemical explanation.
The foldon proposal that Englander and Mayne prefer, however, is not any better, despite their praise for it:

The question is what kind of conformational searching can explain the processes and pathways that carry unfolded proteins to their native state. The foldon-dependent defined-pathway model directly answers each of these challenges.
All they have done, however, is displace the challenges from amino acid sequences to foldon sequences. Since the foldons are themselves composed of amino acid sequences, nothing is solved; it is still radically improbable to arrive, without design, at a sequence that will produce a functional protein. No amount of evolutionary handwaving changes that:

Evolutionary considerations credibly tie together the early codevelopment of foldon-based equilibrium structure and foldon-based kinetic folding.
So much for empirical rigor. Evolution did it. Problem solved.


We think not. To rub it in, consider that Axe’s calculation of one in 10^74 sequences being functional is way too generous. If we require that the amino acids be left-handed, and demand that all bonds be peptide bonds, the probability drops to one in 10^164. For a quick demonstration of why this is hoping against all hope, watch Illustra Media’s clever animation from their film Origin, titled “The Amoeba’s Journey.”
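One way to recover that larger figure (a reconstruction, not a quotation of anyone’s published calculation, and it assumes a roughly 150-residue protein) is to multiply the functional-sequence odds by the odds that every backbone linkage is a peptide bond and every residue is left-handed:

\[
10^{-74} \times 2^{-149} \times 2^{-150} \;\approx\; 10^{-74} \times 10^{-45} \times 10^{-45} \;=\; 10^{-164},
\]

with 2^{-149} covering the roughly 149 backbone linkages and 2^{-150} the handedness of each of the 150 residues.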

Darwinism's quest for a free lunch rolls on.

Free Energy and the Origin of Life: Natural Engines to the Rescue
Brian Miller


In previous articles, I outlined the thermodynamic challenges to the origin of life and attempts to address them by invoking self-organizing processes. Now, I will address attempts to overcome the free-energy barriers through the use of natural engines. To summarize, a fundamental hurdle facing all origin-of-life theories is the fact that the first cell must have had a free energy far greater than that of its chemical precursors, and spontaneous processes always move from higher free energy to lower free energy. More specifically, the origin of life required basic chemicals to coalesce into a state of both lower entropy and higher energy, and such transitions never occur without outside help, even at the microscopic level.
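In thermodynamic terms, the point is just the sign of the Gibbs free energy change (the standard textbook relation, stated here for clarity):

\[
\Delta G = \Delta H - T\,\Delta S ,
\]

where a spontaneous process requires ΔG < 0. A transition to higher energy (ΔH > 0) and lower entropy (ΔS < 0) has ΔG > 0 at every positive temperature, so it cannot proceed spontaneously unless it is coupled to something else.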

Attempted solutions involving external energy sources fail since the input of raw energy actually increases the entropy of the system, moving it in the wrong direction. This challenge also applies to all appeals to self-replicating molecules, auto-catalytic chemical systems, and self-organization. Since all of these processes proceed spontaneously, they all move from higher to lower free energy, much like rocks rolling down a mountain. However, life resides at the top of the mountain. The only possible solutions must assume the existence of machinery that processes energy and directs it toward performing the required work to properly organize and maintain the first cell.

Modern cells perform these tasks using a host of molecular assemblies, such as ATP synthase and chloroplasts. Ancient cells may not have used these tools, but they had to possess some analogous ones that could extract free energy from such sources as high-energy chemicals, heat, or sunlight. The problem is that this machinery could only be assembled in cells that had such machinery already in full operation. But no such machinery could have existed on the early earth.

Recognizing this problem, many origins researchers have proposed the existence of naturally occurring settings that effectively functioned as thermodynamic engines (cycles) or their close equivalent. Proposed systems drive a constantly repeating cycle that includes three basic components:

Energy and/or material is collected from an outside source.
Energy and/or material is released into the surrounding environment.
Energy is extracted from the flow of energy and matter through the system and redirected toward driving chemical reactions or physical processes that advance the formation of the first cell.
A prime example is the proposal by geologist Anthonie Muller that thermal cycling generated ATP molecules, which are a primary source of energy for cellular metabolism. Muller argues that volcanic hot springs heated nearby water, driving a convection cycle in which heated water moved away from the spring, cooled, and then reentered the region near the spring to reheat. The water fortuitously contained ADP molecules, phosphate, and an enzyme (pF1) that combines the ADP and phosphate to form ATP. The thermal cycle synchronized with the enzyme/reaction cycle as follows (components of the thermal cycle described above are labeled):

The pF1 enzyme bound to the ADP and to the phosphate, and then the enzyme folded to chemically bond the two molecules together to form ATP. This reaction moves toward higher free energy, so it would not normally occur spontaneously. However, the folding of the enzyme provides the needed energy (Component 3).
The conformational change of the enzyme gives off heat in the process (Component 2).
The bound complex of the ATP and the enzyme enters the heated region near the hot spring. The heat causes the enzyme to unfold and release the ATP, and in the process of unfolding the enzyme absorbs heat (Component 1). The enzyme is then free to bind ADP and phosphate again, restarting the cycle.
The net result is that energy is extracted from the heat flow and redirected toward the production of ATP. The ATP could then provide the needed free energy to organize the first cell.
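Any heat-driven cycle of this kind is also bounded by ordinary engine thermodynamics. As a rough illustration (the temperatures are assumptions chosen for hot-spring and ambient water, not figures from Muller’s work):

\[
\eta_{\max} = 1 - \frac{T_{\text{cold}}}{T_{\text{hot}}} \approx 1 - \frac{298\ \text{K}}{353\ \text{K}} \approx 0.16 ,
\]

so even an ideal engine cycling between roughly 80 °C and 25 °C water could convert at most about 16 percent of the heat flow into work, before any chemical losses are counted.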

This scenario, however, has many obvious problems. First, abiotic production of ADP would have yielded extremely small quantities, if any, given the challenges of producing its key components, particularly adenine and ribose, and then linking all of the molecules together properly. Next, the existence of any long amino acid chains near a hot spring is highly unlikely, so the needed enzyme would not have existed. Even if such chains were abundant, the chances of the amino acids stumbling across the proper sequence to form the correct 3D structure to drive the ATP reaction are next to nil.

Even if all of these problems are ignored, thermal cycling would still not prove a viable source of energy. The existence of ATP does nothing to help promote life unless the energy released by ATP breaking down into ADP and phosphate could be coupled directly to useful reactions, such as combining amino acids into chains. However, such coupling is only possible if aided by information-rich enzymes with the precise structure to bind to the correct molecules associated with the target reactions. For the reasons mentioned above, no such enzymes would have existed.

Another scenario is advanced by biochemist Nick Lane and geochemist Michael Russell. In their proposal, alkaline hydrothermal vents in acidic oceans could have served as the incubators for life. Their theory is that some membrane-like film formed on the surface of a vent, and a proton gradient (a difference in concentration) formed between the acidic ocean outside and the basic interior. Protons would then have passed across the membrane (Component 1 of a thermodynamic cycle) through some crevice or micro-pore that happened to hold a ready supply of catalysts such as iron-sulfur minerals, and then exited into the vent’s interior (Component 2). The catalysts could then have driven chemical reactions that drew energy from the proton gradient to build cellular structures and drive a primitive cellular metabolism (Component 3). This process would mimic the modern cell’s ability to tap the energy of proton gradients across its membrane using machinery such as ATP synthase. Eventually, a fully functional cell would emerge with its own suite of protein enzymes and the ability to create proton gradients and harvest their energy.
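The energy available in such a gradient can be expressed with the standard proton-motive force relation (a textbook formula; the pH spread below is an assumption meant only to match the acidic-ocean/alkaline-vent picture):

\[
\Delta p = \Delta\psi - 2.303\,\frac{RT}{F}\,\Delta\mathrm{pH}, \qquad
2.303\,\frac{RT}{F} \approx 59\ \text{mV per pH unit at } 25\,^{\circ}\mathrm{C},
\]

so a three- to four-unit pH difference by itself corresponds to roughly 180 to 240 mV, broadly the range modern cells maintain across their membranes. The question raised below is not whether the energy is present, but what, in the absence of machinery, could harvest it.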

To call this scenario unlikely would be generous. It faces all of the challenges of the previous theory plus the implausibility of random chemical catalysts driving the precise reactions needed for life. Origins researchers will undoubtedly come up with many further creative stories of how natural processes could access energy and how life could form in general. However, they will all face the same basic problems:

Natural Tendencies: The natural tendencies of organic chemical reactions are to move in directions contrary to those needed for the origin of life. For instance, smaller organic chemicals are favored over the larger ones needed for cellular structures. When larger ones do form, they tend toward biologically inert tars. Similarly, chains of life’s building blocks tend to break apart, not grow longer.
Specificity: Countless molecules could form through innumerable chemical pathways. Life requires that a highly specific set be selected and the others avoided. Such selectivity requires a precise set of enzymes, each containing a highly specified amino acid sequence. A membrane must also form with a highly specified structure to let the right materials in and out.
Choreography: Any scenario requires many actions to take place in a highly specific order, in the right locations, and in the right ways. Life’s building blocks must be formed in their own special environments with the correct initial conditions. After they form, they then need to migrate at the right times to the right locations with a proper collection of other molecules to assist in the next stage of development. (See Shapiro’s Origins.)
Efficiency: All proposed makeshift scenarios for energy production are highly inefficient. They would be fortunate to access minuscule amounts of useful energy over extended periods of time. In contrast, bacteria can form billions of high-energy molecules every hour. Their overall energy production, when scaled, is comparable to that of a high-performance sports car. No natural process could reach the required efficiencies.
Localization: The energy production must be localized inside a cell membrane. No imaginable process could scale down anything like thermal cycling or proton gradient production to fit inside such a small, enclosed volume.

As science advances, the need for intelligent direction becomes increasingly clear. The more successful experiments are at generating the products of life, the greater the need for investigator intervention and the more highly specified the required initial conditions and experimental protocols. This trend will only continue until researchers honestly acknowledge the evidence for design that stares them in the face.

Thursday, 22 June 2017

On junk science re: junk DNA.

Jonathan Wells: Zombie Science Keeps Pushing Junk DNA Myth
David Klinghoffer | @d_klinghoffer

The idea that the vast majority of our DNA is “junk,” an evolutionary relic, was just what evolutionists expected. It made sense. Darwin advocates such as Jerry Coyne and Francis Collins advanced it as proof for their claims. Alas for them, it turned out not to be true.

In a video conversation, Zombie Science author Jonathan Wells explains how the “Junk DNA” narrative was overturned by good science, including but far from limited to the ENCODE project. Did evolutionary diehards accept this? No! See it here:





If you follow the scientific literature, new functions for “junk” turn up on an almost weekly basis. But the diehards keep insisting on the myth. They strenuously resist a growing body of evidence. Why? Because as Dr. Wells clarifies, evolution for them is not an ordinary scientific theory. It’s a fixed idea. It is an ideology that must be true “no matter what.”

So how evidence is interpreted is wrenched into line with the ideology. And this is what we mean by “zombie science.” Watch and enjoy.

Yet more on the chasm between life and everything else.

“Life Is a Discontinuity in the Universe”
David Klinghoffer | @d_klinghoffer


In a really excellent new ID the Future episode with Todd Butterfield, Steve Laufmann puts the engineering challenge to gradualist evolutionary schemes about as powerfully as one could do. An enterprise architecture consultant, he is a most gifted and entertaining explainer.

There are 37 trillion cells in the human body, some 200 cell types, and 12,000+ specialized proteins. How does it all come together? In human ontogenesis, a 9-month process “turns a zygote into what I call a tax deduction,” says Laufmann. Building a system like this that “leaps together at the same time to create us” (as Butterfield puts it) is the most stunning engineering feat ever accomplished as far as we know.

The discussion features one memorable phrasing after another. “Life is a discontinuity in the universe,” and explaining it means explaining the property of “coherence” associated with engineered systems. Darwinian theory proposes that this was accomplished through random changes gradually accumulating. That entails maintaining “an adaptive continuum” of life where “any causal mechanism that’s proposed has to be able to produce all the changes for every discrete step within one generation.” In this way, unguided evolution could accomplish trivial changes – on the order of skin color, the shape of the nose or the earlobe – but “basics” (how a spleen functions, for example) are quite outside the range.

For the Darwin proponent, it looks hopeless. Laufmann: “Random changes only make the impossible even more impossible. It’s like the impossible squared. It just can’t happen.”

Taking all of this together, what you would expect is not gradual change as evolutionists picture it, but sudden explosions of complexity. And that is just what the fossil record shows.

It’s a wonderful and enlightening conversation, demonstrating again the necessity of introducing the engineer’s perspective in any realistic estimation of how evolution could work. Darwin proponents almost never seem to consider these challenges. Listen to the podcast here, or download it here.

Tuesday, 20 June 2017

Are 'orphan genes' a thing?

A Reader Asks, "Are De Novo Genes Real?"
Ann Gauger 

We get good questions here at Evolution News. (Give us yours by hitting the orange Email Us button at the top of the page.) Today, a reader writes to ask, "Are de novo genes real?" This is a question that touches on a number of topics relevant to evolutionary biology, dealing with one of the most exciting aspects of genomic research today. So what are these things called de novo genes?

De novo genes are genes that are present in a particular species or taxonomic group, and not present in any others. Why are they there and where did they come from? To answer these questions we have to first deal with some important assumptions of evolutionary biology.

The first assumption is that sibling species are the product of descent with modification. The evidence cited in favor of this idea is that there is similarity of DNA sequence between sibling species, and that organisms can be grouped in nested hierarchies based on sequence comparisons. Now this hypothesis of common descent may be right. However, there are unresolved contradictions in the literature. So common descent is not unequivocally proven. De novo genes are one of those challenges to common descent. Let me explain why.

De novo genes, new genes present in one taxonomic group but not in others, are sometimes called orphan genes because they have no parent genes. They are also called taxonomically restricted genes (TRGs), because they may be shared by closely related species of the same taxon, but not others. What's a taxon? It's a level of classification, such as species, genus, family, order, class or phylum. Species of the same genus, for example, may share genes in common that are missing from all other species.

Because the field of research is still developing, different research groups use different criteria for deciding what counts as a TRG. For example, one recent estimate says that there are 634 genes that appear to have arisen de novo in the human genome, as compared with the chimpanzee and macaque genomes. But they counted RNA transcripts as genes, even if they have not yet been shown to code for protein. Another older estimate of over a thousand transcripts was finally reduced to a much lower number of de novo genes, because the researchers ruled out almost all of those candidate genes as non-protein coding. For a discussion about why this is, go here.

Despite these disagreements, de novo genes do exist. But when their origin -- where they came from -- is discussed, it reveals yet another assumption of evolutionary biologists. Evolutionists say, "Look, these orphan genes arose de novo. We can see how they might have been spliced together from similar DNA present elsewhere in the genome, or they might have come from non-coding DNA that has acquired a promoter or transcription factor binding site, and so is now expressed, and makes a functional protein, in the right place and at the right time."

These sentences reveal the second assumption -- that the existence of these new genes indicates there are natural processes to make them. After all, it must be possible to splice or activate new sequences to make TRGs, because there are TRGs.

That's an assumption of naturalism. The problem is there is no evidence to show that those proposed mechanisms actually work. There are no experiments that I know of to demonstrate that splicing yields functional products. Attempts in the lab show that splicing together even related protein domains yields non-functional products. Also, no one has shown that it is easy to acquire a promoter or transcription factor binding site so as to turn inactive, non-coding DNA into expressed, functional DNA. Getting a functional protein from random non-coding sequence is impossibly hard and would have to be demonstrated. If the function is regulating other genes via RNA, that would have to be proven to be feasible, too.

So do we know where TRGs came from? If no one tests how hard it is to splice together random sequence and get functional stuff, or how hard it is to acquire a new promoter, then we don't know whether de novo genes can be developed by evolutionary processes. If not, the alternative is shocking to evolutionary biologists -- perhaps, just perhaps they were made by a designer for that particular species or group. Perhaps the non-coding DNA was already ready to be functional, like an actor waiting in the wings for his cue, and was only activated in that one particular taxonomic group.

Bear in mind that TRGs can be up to 10-20 percent of a taxonomic group's genome, and may encode many of the special proteins unique to that taxonomic group. That's a huge chunk of DNA to arise by natural processes alone, and a big challenge for common descent. I am thinking of the phylum Cnidaria here. All Cnidaria (sea anemones, jellyfish, and Hydra, for example) have tentacles with specialized cells called cnidocytes, each housing a stinging capsule called a nematocyst that ejects a little barbed tubule with a toxin into whatever touches them. They use these cells to capture and immobilize their prey. Many of the specialized proteins needed to make the nematocysts are TRGs specific to the phylum Cnidaria. Cnidaria are among the oldest of all extant phyla. Was their origin unique?

Take home lesson: Are de novo genes real? Yes. Do we know where they came from? No. Do they say something important about evolutionary processes? Indeed. But what they say remains to be seen.

Between physics and abiogenesis, an unbridgeable chasm?

The Origin of Life, Self-Organization, and Information
Brian Miller

In an article here yesterday, I described the thermodynamic challenges to any purely materialistic theory for the origin of life. Now I will address one of the most popular and misunderstood claims: that the first cell emerged through a process demonstrating the property known as self-organization.

As I mentioned in the previous article, origin-of-life researchers often argue that life developed in an environment that was driven far from equilibrium, often referred to as a non-equilibrium dissipative system. In such systems, energy and/or mass constantly enters and leaves, and this flow spontaneously generates “order” such as the roll patterns in boiling water, the funnel of a tornado, or wave patterns in the Belousov-Zhabotinsky reaction. The assertion is that some analogous type of self-organizational process could have created the order in the first cell. Such claims sound reasonable at first, but they completely break down when the differences between self-organizational order and cellular order are examined in detail. Instead, the origin of life required complex cellular machinery and preexisting sources of information.

The main reason for the differences between self-organizational and cellular order is that the driving tendencies in non-equilibrium systems move in the opposite direction to what is needed for both the origin and maintenance of life. First, all realistic experiments on the genesis of life’s building blocks produce most of the needed molecules in very small concentrations, if at all. And, they are mixed together with contaminants, which would hinder the next stages of cell formation. Nature would have needed to spontaneously concentrate and purify life’s precursors. However, the natural tendency would have been for them to diffuse and to mix with other chemicals, particularly in such environments as the bottom of the ocean.

Concentration of some of life’s precursors could have taken place in an evaporating pool, but the contamination problem would then become much worse since precursors would be greatly outnumbered by contaminants. Moreover, the next stages of forming a cell would require the concentrated chemicals to dissolve back into some larger body of water, since different precursors would have had to form in different locations with starkly different initial conditions. In his book Origins, Robert Shapiro described these details in relation to the exquisite orchestration required to produce life.

In addition, many of life’s building blocks come in right- and left-handed versions, which are mirror opposites. Both forms are produced in equal proportions in all realistic experiments, but life can use only one of them: in today’s life, left-handed amino acids and right-handed sugars. The origin of life would have required one form to become increasingly dominant, but nature drives a mixture of the two forms toward equal percentages, the opposite direction. As a related but more general challenge, all spontaneous chemical reactions move downhill toward lower free energy, yet a large portion of the reactions needed for the origin and maintenance of life move uphill toward higher free energy. Even those that move downhill typically proceed too slowly to be useful. In any scenario, nature would have had to reverse most of its natural tendencies for extended periods of time. Scientists have never observed any such event at any time in the history of the universe.

These challenges taken together help clarify the dramatic differences between the two types of order:

Self-organizational processes create order (e.g., a funnel cloud) at the macroscopic (visible) level, but they generate entropy at the microscopic level. In contrast, life requires entropy at the cellular scale to decrease.
Self-organizational patterns are driven by processes which move toward lower free energy. Many processes which generate cellular order move toward higher free energy.
Self-organizational order is dynamic — material is in motion and the patterns are changing over time. The cellular order is static — molecules are in fixed configurations, such as the sequence of nucleotides in DNA or the structure of cellular machines.
Self-organizational order is driven by natural laws. The order in cells represents specified complexity — molecules take on highly improbable arrangements which are not the product of natural processes but instead are arranged to achieve functional goals.
These differences demonstrate that self-organizational processes could not have produced the order in the first cell. Instead, cellular order required molecular machinery to process energy from outside sources and to store it in easily accessible repositories. And, it needed information to direct the use of that energy toward properly organizing and maintaining the cell.

A simple analogy will demonstrate why machinery and information were essential. Scientists often claim that any ancient energy source could have provided the needed free energy to generate life. However, this claim is like a couple returning home from a long vacation to find that their children left their house in complete disarray, with clothes on the floor, unwashed dishes in the sink, and papers scattered across all of the desks. The couple recently heard an origin-of-life researcher claim that order could be produced for free from any generic source of energy. Based on this idea, they pour gasoline on their furniture and then set it on fire. They assume that the energy released from the fire will organize their house. However, they soon realize that unprocessed energy creates an even greater mess.

Based on this experience, the couple instead purchase a solar powered robot. The solar cells process the energy from the sun and convert it into useful work. But, to the couple’s disappointment the robot then starts throwing objects in all directions. They look more closely at the owner’s manual and realize they need to program the robot with instructions on how to perform the desired tasks to properly clean up the house.

In the same way, the simplest cell required machinery, such as some ancient equivalent to ATP synthase or chloroplasts, to process basic chemicals or sunlight. It also needed proteins with the proper information contained in their amino acid sequences to fold into other essential cellular structures, such as portals in the cell membrane. And, it needed proteins with the proper sequences to fold into enzymes to drive the metabolism. A key role of the enzymes is to link reactions that move toward lower free energy (e.g., ATP → ADP + P) to uphill reactions, such as combining amino acids into long chains. The energy from the former can then be used to drive the latter, since the net change in free energy is negative. The free-energy barrier is thus overcome.
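To put numbers on the coupling idea, here is the usual textbook bookkeeping (approximate standard free energies, not figures from this article):

\[
\Delta G_{\mathrm{ATP \to ADP + P}} \approx -7.3\ \text{kcal/mol}, \qquad
\Delta G_{\text{peptide bond}} \approx +3\ \text{kcal/mol},
\]
\[
\Delta G_{\text{coupled}} \approx -7.3 + 3 \approx -4\ \text{kcal/mol} < 0 ,
\]

so the paired reaction runs downhill overall. But the pairing happens only because an enzyme physically ties the two reactions together; the favorable arithmetic does nothing on its own.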

However, the energy-processing machinery and information-rich proteins were still not enough. Proteins eventually break down, and they cannot self-replicate. Additional machinery was also needed to constantly produce new protein replacements. Also, the proteins’ sequence information had to have been stored in DNA using some genetic code, where each amino acid is represented by a series of three nucleotides known as a codon, in the same way English letters are represented in Morse code by dots and dashes. However, no identifiable physical connection exists between individual amino acids and their respective codons. In particular, no amino acid (e.g., valine) is much more strongly attracted to any particular codon (e.g., GTT) than to any other. Without such a physical connection, no purely materialistic process could plausibly explain how amino acid sequences were encoded into DNA. Therefore, the same information in proteins and in DNA must have been encoded separately.
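The arbitrariness of that mapping is easy to see by writing out a fragment of the standard codon table. The snippet below uses a handful of real assignments from the standard genetic code (only a small subset of the 64 codons); the point is that translation is a lookup, and nothing in the chemistry of the DNA letters dictates which amino acid a triplet stands for.

# A small slice of the standard genetic code (DNA codons -> amino acids).
# The assignments are real, but the mapping is a convention of the code:
# 'GTT' stands for valine by rule, not because valine is physically
# attracted to that triplet.
CODON_TABLE = {
    "ATG": "Met",  # also the usual start codon
    "GTT": "Val",
    "TTT": "Phe",
    "AAA": "Lys",
    "GAA": "Glu",
    "TGG": "Trp",
    "TAA": "Stop",
}

def translate(dna):
    """Translate a DNA string codon by codon, stopping at a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        residue = CODON_TABLE.get(dna[i:i+3], "???")
        if residue == "Stop":
            break
        protein.append(residue)
    return "-".join(protein)

print(translate("ATGGTTTTTAAATAA"))  # Met-Val-Phe-Lys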

In addition, the information in DNA is decoded back into proteins through the use of ribosomes, tRNAs, and special enzymes called aminoacyl-tRNA synthetases (aaRSs). The aaRSs bind the correct amino acids to the correct tRNAs associated with the correct codons, so these enzymes contain the decoding key in their 3D structures. All life uses this same process, so the first cell almost certainly functioned similarly. However, no possible connection could exist between the encoding and the decoding processes, since the aaRSs’ structures are a result of their amino acid sequences, which happen to be part of the information encoded in the DNA. Therefore, the decoding had to have developed independently of the encoding, but they had to use the same code. And, they had to originate at the same time, since each is useless without the other.


All of these facts indicate that the code and the sequence information in proteins/DNA preexisted the original cell. And, the only place that they could exist outside of a physical medium is in a mind, which points to design.

Monday, 19 June 2017

Actually, it is rocket science.

Rocket Science in a Microbe Saves the Planet
Evolution News & Views

Anammox. It's a good term to learn. Wikipedia's first paragraph stresses its importance:

Anammox, an abbreviation for ANaerobic AMMonium OXidation, is a globally important microbial process of the nitrogen cycle. The bacteria mediating this process were identified in 1999, and at the time were a great surprise for the scientific community. It takes place in many natural environments... [Emphasis added.]

And now, the news. A team of European scientists found something very interesting about the bacteria. Publishing in Nature, the researchers tell how they have ascertained the structure of a molecular machine that performs chemical wizardry using rocket science.

Anaerobic ammonium oxidation (anammox) has a major role in the Earth's nitrogen cycle and is used in energy-efficient wastewater treatment. This bacterial process combines nitrite and ammonium to form dinitrogen (N2) gas, and has been estimated to synthesize up to 50% of the dinitrogen gas emitted into our atmosphere from the oceans. Strikingly, the anammox process relies on the highly unusual, extremely reactive intermediate hydrazine, a compound also used as a rocket fuel because of its high reducing power. So far, the enzymatic mechanism by which hydrazine is synthesized is unknown. Here we report the 2.7 Å resolution crystal structure, as well as biophysical and spectroscopic studies, of a hydrazine synthase multiprotein complex isolated from the anammox organism Kuenenia stuttgartiensis. The structure shows an elongated dimer of heterotrimers, each of which has two unique c-type haem-containing active sites, as well as an interaction point for a redox partner. Furthermore, a system of tunnels connects these active sites. The crystal structure implies a two-step mechanism for hydrazine synthesis: a three-electron reduction of nitric oxide to hydroxylamine at the active site of the γ-subunit and its subsequent condensation with ammonia, yielding hydrazine in the active centre of the α-subunit. Our results provide the first, to our knowledge, detailed structural insight into the mechanism of biological hydrazine synthesis, which is of major significance for our understanding of the conversion of nitrogenous compounds in nature.

Dinitrogen gas (N2) is a tough nut to crack. The atoms pair up with a triple bond, very difficult for humans to break without a lot of heat and pressure. Fortunately, this makes it very inert in the atmosphere, but life needs to get at it to make amino acids, muscles, organs, and more. Nitrogenase enzymes in some microbes, such as soil bacteria, are able to break apart the atoms at ambient temperatures (a secret agricultural chemists would love to learn). They then "fix" nitrogen into compounds such as ammonia (NH3) that can be utilized by plants and the animals that eat them. To have a nitrogen cycle, though, something has to return N2 gas to the atmosphere. That's the job of anammox bacteria.

Most nitrogen on earth occurs as gaseous N2 (nitrogen oxidation number 0). To make nitrogen available for biochemical reactions, the inert N2 has to be converted to ammonia (oxidation number −III), which can then be assimilated to produce organic nitrogen compounds, or be oxidized to nitrite (oxidation number +III) or nitrate (+V). The reduction of nitrite in turn results in the regeneration of N2, thus closing the biological nitrogen cycle.

Let's take a look at the enzyme that does this, the "hydrazine synthase multiprotein complex." Rocket fuel; imagine! No wonder the scientific community was surprised. The formula for hydrazine is N2H4. It's commonly used to power thrusters on spacecraft, such as the Cassini Saturn orbiter and the New Horizons probe that went by Pluto recently. Obviously, the anammox bacteria must handle this highly reactive compound with great care. Here's their overview of the reaction sequence. Notice how the bacterium gets some added benefit from its chemistry lab:

Our current understanding of the anammox reaction (equation (1)) is based on genomic, physiological and biochemical studies on the anammox bacterium K. stuttgartiensis. First, nitrite is reduced to nitric oxide (NO, equation (2)), which is then condensed with ammonium-derived ammonia (NH3) to yield hydrazine (N2H4, equation (3)). Hydrazine itself is a highly unusual metabolic intermediate, as it is extremely reactive and therefore toxic, and has a very low redox potential (E0′ = −750 mV). In the final step in the anammox process, it is oxidized to N2, yielding four electrons (equation (4)) that replenish those needed for nitrite reduction and hydrazine synthesis and are used to establish a proton-motive force across the membrane of the anammox organelle, the anammoxosome, driving ATP synthesis.
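The numbered equations referred to in that excerpt are not reproduced above. For orientation, here is a reconstruction of the anammox reaction scheme as generally published (balanced half-reactions; treat the exact form as a reader's reconstruction rather than a quotation from this paper):

\[
\begin{aligned}
&(1)\quad \mathrm{NH_4^+ + NO_2^- \rightarrow N_2 + 2\,H_2O}\\
&(2)\quad \mathrm{NO_2^- + 2\,H^+ + e^- \rightarrow NO + H_2O}\\
&(3)\quad \mathrm{NO + NH_4^+ + 2\,H^+ + 3\,e^- \rightarrow N_2H_4 + H_2O}\\
&(4)\quad \mathrm{N_2H_4 \rightarrow N_2 + 4\,H^+ + 4\,e^-}
\end{aligned}
\]

The four electrons released in step (4) balance the four consumed in steps (2) and (3), which is the replenishment the authors describe.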

We've discussed ATP synthase before. It's that rotary engine in all life that runs on proton motive force. Here, we see that some of the protons needed for ATP synthesis come from the hydrazine reaction machine. Cool!

What does the anammox enzyme look like? They say it has tunnels between the active sites. The "hydrazine synthase" module is "biochemically unique." Don't look for a common ancestor, in other words. It's part of a "tightly coupled multicomponent system," as they determined when they lysed cells and watched the reactivity plummet. Sounds like an irreducibly complex system.

The paper's diagrams of hydrazine synthase (HZS) show multiple protein domains joined in a "crescent-shaped dimer of heterotrimers" labeled alpha, beta, and gamma, constituted in pairs. The machine also contains multiple haem units (like those in hemoglobin, but unique) and "one zinc ion, as well as several calcium ions." Good thing those atoms are available in Earth's crust.

Part of the machine looks like a six-bladed propeller. Another part has seven blades. How does it work? Everything is coordinated to carefully transfer electrons around. This means that charge distributions are highly controlled for redox (reduction-oxidation) reactions (i.e., those that receive or donate electrons). The choice of adverbs shows that their eyes were lighting up at their first view of this amazing machine. Note how emotion seasons the jargon:

Intriguingly, our crystal structure revealed a tunnel connecting the haem αI and γI sites (Fig. 3a). This tunnel branches off towards the surface of the protein approximately halfway between the haem sites, making them accessible to substrates from the solvent. Indeed, binding studies show that haem αI is accessible to xenon (Extended Data Fig. 4c). Interestingly, in-between the α- and γ-subunits, the tunnel is approached by a 15-amino-acid-long loop of the β-subunit (β245-260), placing the conserved βGlu253, which binds a magnesium ion, into the tunnel.

We would need to make another animation to show the machine in action, but here's a brief description of how it works. The two active sites, connected by a tunnel, appear to work in sequence. HZS gets electrons from cytochrome c, a well-known enzyme. The electrons enter the machine through one of the haem units, where a specifically-placed gamma unit adds protons. A "cluster of buried polar residues" transfers protons to the active center of the gamma subunit. A molecule named hydroxylamine (H3NO) diffuses into the active site, assisted by the beta subunit. It binds to another haem, which carefully positions it so that it is "bound in a tight, very hydrophobic pocket, so that there is little electrostatic shielding of the partial positive charge on the nitrogen." Ammonia then comes in to do a "nucleophilic attack" on the nitrogen of the molecule, yielding hydrazine. The hydrazine is then in position to escape via the tunnel branch leading to the surface. Once they determined this sequence, a light went on:

Interestingly, the proposed scheme is analogous to the Raschig process used in industrial hydrazine synthesis. There, ammonia is oxidized to chloramine (NH2Cl, nitrogen oxidation number −I, like in hydroxylamine), which then undergoes comproportionation with another molecule of ammonia to yield hydrazine.

(But that, we all know, is done by intelligent design.)


So here's something you can meditate on when you take in another breath. The nitrogen gas that comes into your lungs is a byproduct of an exquisitely designed, precision nanomachine that knows a lot about organic redox chemistry and safe handling of rocket fuel. This little machine, which also knows how to recycle and reuse all its parts in a sustainable "green" way, keeps the nitrogen in balance for the whole planet. Intriguing. Interesting. As Mr. Spock might say, fascinating.

Saturday, 17 June 2017

Why the quest to reduce biology to chemistry is doomed.

The White Space in Evolutionary Thinking