Search This Blog

Saturday 24 June 2017

Darwinism's quest for a free lunch rolls on.

Free Energy and the Origin of Life: Natural Engines to the Rescue
Brian Miller


In previous articles, I outlined the thermodynamic challenges to the origin of life and attempts to address them by invoking self-organizing processes. Now, I will address attempts to overcome the free-energy barriers through the use of natural engines. To summarize, a fundamental hurdle facing all origin-of-life theories is the fact that the first cell must have had a free energy far greater than that of its chemical precursors, while spontaneous processes always move from higher free energy to lower free energy. More specifically, the origin of life required basic chemicals to coalesce into a state of both lower entropy and higher energy, a transition that never occurs without outside help, even at the microscopic level.
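
The spontaneity criterion being invoked here is the standard Gibbs relation (ordinary textbook thermodynamics, supplied for reference rather than drawn from the article):

```latex
\Delta G = \Delta H - T\,\Delta S
```

At constant temperature and pressure, a process proceeds spontaneously only if $\Delta G < 0$. Assembling a cell from simple precursors, as described above, would require $\Delta S < 0$ (more order) and $\Delta H > 0$ (higher energy), giving $\Delta G > 0$, that is, the "uphill" direction that spontaneous chemistry does not take.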

Attempted solutions involving external energy sources fail since the input of raw energy actually increases the entropy of the system, moving it in the wrong direction. This challenge also applies to all appeals to  self-replicating molecules, auto-catalytic chemical systems, and self-organization. Since all of these processes proceed spontaneously, they all move from higher to lower free energy, much like rocks rolling down a mountain. However, life resides at the top of the mountain. The only possible solutions must assume the existence of machinery that processes energy and directs it toward performing the required work to properly organize and maintain the first cell.

Modern cells perform these tasks using a host of molecular assemblies, such as ATP synthase and chloroplasts. Ancient cells may not have used these exact tools, but they had to possess analogous ones that could extract free energy from such sources as high-energy chemicals, heat, or sunlight. The problem is that this machinery could only be assembled in cells that already had such machinery in full operation, and no such machinery could have existed on the early earth.

Recognizing this problem, many origins researchers have proposed the existence of naturally occurring settings that effectively functioned as thermodynamic engines (cycles) or their close equivalent. Proposed systems drive a constantly repeating cycle that includes three basic components:

1. Energy and/or material is collected from an outside source.
2. Energy and/or material is released into the surrounding environment.
3. Energy is extracted from the flow of energy and matter through the system and redirected toward driving chemical reactions or physical processes that advance the formation of the first cell.
A prime example is the proposal by geologist Anthonie Muller that thermal cycling generated ATP molecules, which are a primary source of energy for cellular metabolism. Muller argues that volcanic hot springs heated nearby water, driving a convection cycle in which heated water moved away from the spring, cooled, and then reentered the region near the spring to reheat. The water fortuitously contained ADP molecules, phosphate, and an enzyme (pF1) that combines the ADP and phosphate to form ATP. The thermal cycle synchronized with the enzyme/reaction cycle as follows (components from the thermal cycle described above are labeled):

The pF1 enzyme bound to the ADP and to the phosphate, and then the enzyme folded to chemically bond the two molecules together to form ATP. This reaction moves toward higher free energy, so it would not normally occur spontaneously. However, the folding of the enzyme provides the needed energy (Component 3).
The conformational change of the enzyme gives off heat in the process (Component 2).
The bound complex of the ATP and the enzyme enter the heated region near the hot spring. The heat causes the enzyme to unfold and release the ATP, and in the process of unfolding the enzyme absorbs heat (Component 1). The enzyme is again able to bind to ADP and phosphate, thus restarting the cycle.
The net result is that energy is extracted from the heat flow and redirected toward the production of ATP. The ATP could then provide the needed free energy to organize the first cell.
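
Because Muller's proposal is literally a heat engine run between warm and cool water, the standard Carnot limit (textbook thermodynamics, not stated in the article) caps the fraction of the heat flow that any such cycle can convert into useful work:

```latex
\eta_{\max} = 1 - \frac{T_c}{T_h}
```

With illustrative temperatures of, say, $T_h \approx 373\,\mathrm{K}$ near the spring and $T_c \approx 298\,\mathrm{K}$ away from it, $\eta_{\max} \approx 0.20$; any real extraction would fall well below this ceiling.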

This scenario, however, has many obvious problems. First, the abiotic production of ADP would have been in extremely small quantities, if any, due to the challenges of producing its key components, particularly adenine and ribose, and then linking all of the molecules together properly. Next, the existence of any long amino acid chains is highly unlikely near a hot spring, so the needed enzyme would not have existed. Even if such chains were in abundance, the chances of the amino acids stumbling across the proper sequence to form the correct 3D structure to drive the ATP reaction are next to nil.

Even if all of these problems are ignored, thermal cycling would still not prove a viable source of energy. The existence of ATP does nothing to help promote life unless the energy released by ATP breaking down into ADP and phosphate could be coupled directly to useful reactions, such as combining amino acids into chains. However, such coupling is only possible if aided by information-rich enzymes with the precise structure to bind to the correct molecules associated with the target reactions. For the reasons mentioned above, no such enzymes would have existed.

Another scenario is advanced by biochemist Nick Lane and geochemist Michael Russell. In their proposal, alkaline hydrothermal vents in acidic oceans could have served as the incubators for life. Their theory is that some membrane-like film formed on the surface of a vent, and a proton gradient (difference in concentration) formed between the acidic ocean outside and the basic interior. Protons would then have been transported across the membrane (Component 1 of a thermodynamic cycle) through some crevice or micro-pore, which happened to have a ready supply of catalysts such as iron-sulfur minerals, before exiting into the vent’s interior (Component 2). The catalysts could then have driven chemical reactions that accessed energy from the proton gradient to build cellular structures and drive a primitive cellular metabolism (Component 3). This process would mimic the modern cell’s ability to access the energy from proton gradients across its membrane using machinery such as ATP synthase. Eventually, a fully functional cell would emerge with its own suite of protein enzymes and the ability to create proton gradients and harvest their energy.

To call this scenario unlikely would be generous. It faces all of the challenges of the previous theory plus the implausibility of random chemical catalysts driving the precise reactions needed for life. Origins researchers will undoubtedly come up with many further creative stories of how natural processes could access energy and how life could form in general. However, they will all face the same basic problems:

Natural Tendencies: The natural tendencies of organic chemical reactions are to move in directions contrary to those needed for the origin of life. For instance, smaller organic chemicals are favored over the larger ones needed for cellular structures. When larger ones do form, they tend toward biologically inert tars. Similarly, chains of life’s building blocks tend to break apart, not grow longer.
Specificity: Countless molecules could form through innumerable chemical pathways. Life requires that a highly specific set be selected and others avoided. Such selectivity requires a precise set of enzymes, each containing a highly specified amino acid sequence. A membrane must also form with a highly specified structure to allow the right materials in and out.
Choreography: Any scenario requires many actions to take place in a highly specific order, in the right locations, and in the right ways. Life’s building blocks must be formed in their own special environments with the correct initial conditions. After they form, they then need to migrate at the right times to the right locations with a proper collection of other molecules to assist in the next stage of development. (See Shapiro’s Origins.)
Efficiency: All proposed makeshift scenarios for energy production are highly inefficient. They would be fortunate to access minuscule amounts of useful energy over extended periods of time. In contrast, bacteria can form billions of high-energy molecules every hour. Their overall energy production, when scaled, is comparable to that of a high-performance sports car. No natural process could reach the required efficiencies.
Localization: The energy production must be localized inside a cell membrane. No imaginable process could scale down anything like thermal cycling or proton gradient production to fit inside such a small, enclosed volume.

As science advances, the need for intelligent direction becomes increasingly clear. The more successful experiments are at generating the products of life, the greater the need for investigator intervention and the more highly specified the required initial conditions and experimental protocols. This trend will only continue until researchers honestly acknowledge the evidence for design that stares them in the face.

Thursday 22 June 2017

Russia continues to disgrace itself re: religious liberty.

On junk science re: junk DNA.

Jonathan Wells: Zombie Science Keeps Pushing Junk DNA Myth
David Klinghoffer | @d_klinghoffer

The idea that a vast majority of our DNA is “junk,” an evolutionary relic, was just what evolutionists expected. It made sense. Darwin advocates such as Jerry Coyne and Francis Collins advanced it as proof for their claims. Alas for them, it turned out not to be true.

In a video conversation, Zombie Science author Jonathan Wells explains how the “Junk DNA” narrative was overturned by good science, including but far from limited to the ENCODE project. Did evolutionary diehards accept this? No! See it here:





If you follow the scientific literature, new functions for “junk” turn up on an almost weekly basis. But the diehards keep insisting on the myth. They strenuously resist a growing body of evidence. Why? Because as Dr. Wells clarifies, evolution for them is not an ordinary scientific theory. It’s a fixed idea. It is an ideology that must be true “no matter what.”

So how evidence is interpreted is wrenched into line with the ideology. And this is what we mean by “zombie science.” Watch and enjoy.

Yet more on the chasm between life and everything else.

“Life Is a Discontinuity in the Universe”
David Klinghoffer | @d_klinghoffer


In a really excellent new ID the Future episode with Todd Butterfield, Steve Laufmann puts the engineering challenge to gradualist evolutionary schemes about as powerfully as one could do. An enterprise architecture consultant, he is a most gifted and entertaining explainer.

There are 37 trillion cells in the human body, some 200 cell types, and 12,000+ specialized proteins. How does it all come together? In human ontogenesis, a 9-month process “turns a zygote into what I call a tax deduction,” says Laufmann. Building a system like this that “leaps together at the same time to create us” (as Butterfield puts it) is the most stunning engineering feat ever accomplished as far as we know.

The discussion features one memorable phrasing after another. “Life is a discontinuity in the universe,” and explaining it means explaining the property of “coherence” associated with engineered systems. Darwinian theory proposes that this was accomplished through random changes gradually accumulating. That entails maintaining “an adaptive continuum” of life where “any causal mechanism that’s proposed has to be able to produce all the changes for every discrete step within one generation.” In this way, unguided evolution could accomplish trivial changes – on the order of skin color, the shape of the nose or the earlobe – but “basics” (how a spleen functions, for example) are quite outside the range.

For the Darwin proponent, it looks hopeless. Laufmann: “Random changes only make the impossible even more impossible. It’s like the impossible squared. It just can’t happen.”

Taking all of this together, what you expect, rather than gradual change as evolutionists picture it, is sudden explosions of complexity. And this is just what the fossil record shows.

It’s a wonderful and enlightening conversation, demonstrating again the necessity of introducing the engineer’s perspective in any realistic estimation of how evolution could work. Darwin proponents almost never seem to consider these challenges. Listen to the podcast here, or download it here.

Tuesday 20 June 2017

Are 'orphaned genes' a thing?

A Reader Asks, "Are De Novo Genes Real?"
Ann Gauger 

We get good questions here at Evolution News. (Give us yours by hitting the orange Email Us button at the top of the page.) Today, a reader writes to ask, "Are de novo genes real?" This is a question that touches on a number of topics relevant to evolutionary biology, dealing with one of the most exciting aspects of genomic research today. So what are these things called de novo genes?

De novo genes are genes that are present in a particular species or taxonomic group, and not present in any others. Why are they there and where did they come from? To answer these questions we have to first deal with some important assumptions of evolutionary biology.

The first assumption is that sibling species are the product of descent with modification. The evidence cited in favor of this idea is that there is similarity of DNA sequence between sibling species, and that organisms can be grouped in nested hierarchies based on sequence comparisons. Now this hypothesis of common descent may be right. However, there are unresolved contradictions in the literature. So common descent is not unequivocally proven. De novo genes are one of those challenges to common descent. Let me explain why.

De novo genes, new genes present in one taxonomic group but not in others, are sometimes called orphan genes because they have no parent genes. They are also called taxonomically restricted genes (TRGs), because they may be shared by closely related species of the same taxon, but not others. What's a taxon? It's a level of classification, such as species, genus, family, order, class or phylum. Species of the same genus, for example, may share genes in common that are missing from all other species.

Because the field of research is still developing, different research groups use different criteria for deciding what counts as a TRG. For example, one recent estimate says that there are 634 genes that appear to have arisen de novo in the human genome, as compared with the chimpanzee and macaque genomes. But they counted RNA transcripts as genes, even if they have not yet been shown to code for protein. Another older estimate of over a thousand transcripts was finally reduced to a much lower number of de novo genes, because the researchers ruled out almost all of those candidate genes as non-protein coding. For a discussion about why this is, go here.

Despite these disagreements, de novo genes do exist. But when their origin -- where they came from -- is discussed, it reveals yet another assumption of evolutionary biologists. Evolutionists say, "Look, these orphan genes arose de novo. We can see how they might have been spliced together from similar DNA present elsewhere in the genome, or they might have come from non-coding DNA that has acquired a promoter or transcription factor binding site, and so is now expressed, and makes a functional protein, in the right place and at the right time."

These sentences reveal the second assumption -- that the existence of these new genes indicates there are natural processes to make them. After all, it must be possible to splice or activate new sequences to make TRGs, because there are TRGs.

That's an assumption of naturalism. The problem is there is no evidence to show that those proposed mechanisms actually work. There are no experiments that I know of to demonstrate that splicing yields functional products. Attempts in the lab show that splicing together even related protein domains yields non-functional products. Also, no one has shown that it is easy to acquire a promoter or transcription factor binding site so as to turn inactive, non-coding DNA into expressed, functional DNA. Getting a functional protein from random non-coding sequence is impossibly hard and would have to be demonstrated. If the function is regulating other genes via RNA, that would have to be proven to be feasible, too.

So do we know where TRGs came from? If no one tests how hard it is to splice together random sequence and get functional stuff, or how hard it is to acquire a new promoter, then we don't know whether de novo genes can be developed by evolutionary processes. If not, the alternative is shocking to evolutionary biologists -- perhaps, just perhaps they were made by a designer for that particular species or group. Perhaps the non-coding DNA was already ready to be functional, like an actor waiting in the wings for his cue, and was only activated in that one particular taxonomic group.

Bear in mind that TRGs can make up 10-20 percent of a taxonomic group's genome, and may encode many of the special proteins unique to that group. That's a huge chunk of DNA to arise by natural processes alone, and a big challenge for common descent. I am thinking of the phylum Cnidaria here. All Cnidaria (sea anemones, jellyfish, and Hydra, for example) have tentacles with specialized cells called cnidocytes, each containing a stinging organelle called a nematocyst that ejects a little barbed tubule with a toxin into whatever touches them. They use these cells to capture and immobilize their prey. Many of the specialized proteins needed to make the nematocysts are TRGs specific to the phylum Cnidaria. Cnidaria are among the oldest of all extant phyla. Was their origin unique?

Take home lesson: Are de novo genes real? Yes. Do we know where they came from? No. Do they say something important about evolutionary processes? Indeed. But what they say remains to be seen.

Between physics and abiogenesis an unbridgeable chasm?

The Origin of Life, Self-Organization, and Information
Brian Miller

In an article here yesterday, I described the thermodynamic challenges to any purely materialistic theory for the origin of life. Now, I will address one of the most popular and most misunderstood claims: that the first cell emerged through a process demonstrating the property known as self-organization.

As I mentioned in the previous article, origin-of-life researchers often argue that life developed in an environment that was driven far from equilibrium, often referred to as a non-equilibrium dissipative system. In such systems, energy and/or mass constantly enters and leaves, and this flow spontaneously generates “order” such as the roll patterns in boiling water, the funnel of a tornado, or wave patterns in the Belousov-Zhabotinsky reaction. The assertion is that some analogous type of self-organizational process could have created the order in the first cell. Such claims sound reasonable at first, but they completely break down when the differences between self-organizational order and cellular order are examined in detail. Instead, the origin of life required complex cellular machinery and preexisting sources of information.

The main reason for the differences between self-organizational and cellular order is that the driving tendencies in non-equilibrium systems move in the opposite direction to what is needed for both the origin and maintenance of life. First, all realistic experiments on the genesis of life’s building blocks produce most of the needed molecules in very small concentrations, if at all. And, they are mixed together with  contaminants, which would hinder the next stages of cell formation. Nature would have needed to spontaneously concentrate and purify life’s precursors. However, the natural tendency would have been for them to diffuse and to mix with other chemicals, particularly in such environments as the bottom of the ocean.

Concentration of some of life’s precursors could have taken place in an evaporating pool, but the contamination problem would then become much worse, since precursors would be greatly outnumbered by contaminants. Moreover, the next stages of forming a cell would require the concentrated chemicals to dissolve back into some larger body of water, since different precursors would have had to form in different locations with starkly different initial conditions. In his book Origins, Robert Shapiro described these details in relation to the exquisite orchestration required to produce life.

In addition, many of life’s building blocks come in both right and left-handed versions, which are mirror opposites. Both forms are produced in all realistic experiments in equal proportions, but life can only use one of them: in today’s life, left-handed amino acids and right-handed sugars. The  origin of life would have required one form to become increasingly dominant, but nature would drive a mixture of the two forms toward equal percentages, the opposite direction. As a related but more general challenge, all spontaneous chemical reactions move downhill toward lower free energy. However, a large portion of the needed reactions in the origin and maintenance of life move uphill toward higher free energy. Even those that move downhill typically proceed too slowly to be useful. Nature would have had to reverse most of its natural tendencies in any scenario for extended periods of time. Scientists have never observed any such event at any time in the history of the universe.
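
The homochirality point can be quantified with a simple coin-flip estimate (my illustration, using generic numbers rather than anything from the article): if each monomer drawn from a racemic (50/50) mixture is equally likely to be left- or right-handed, the probability that a chain of $n$ monomers is uniformly one-handed (all-left or all-right) is

```latex
P = 2 \times \left(\tfrac{1}{2}\right)^{n} = \left(\tfrac{1}{2}\right)^{n-1},
\qquad
P(n = 100) \approx 1.6 \times 10^{-30}
```

so even a modest-length homochiral polymer is staggeringly improbable from an unbiased mixture, absent some enrichment mechanism.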

These challenges taken together help clarify the dramatic differences between the two types of order:

Self-organizational processes create order (i.e. funnel cloud) at the macroscopic (visible) level, but they generate entropy at the microscopic level. In contrast, life requires the entropy at the cellular size scale to decrease.
Self-organizational patterns are driven by processes which move toward lower free energy. Many processes which generate cellular order move toward higher free energy.
Self-organizational order is dynamic — material is in motion and the patterns are changing over time. The cellular order is static — molecules are in fixed configurations, such as the sequence of nucleotides in DNA or the structure of cellular machines.
Self-organizational order is driven by natural laws. The order in cells represents specified complexity — molecules take on highly improbable arrangements which are not the product of natural processes but instead are arranged to achieve functional goals.
These differences demonstrate that self-organizational processes could not have produced the order in the first cell. Instead, cellular order required molecular machinery to process energy from outside sources and to store it in easily accessible repositories. And, it needed information to direct the use of that energy toward properly organizing and maintaining the cell.

A simple analogy will demonstrate why machinery and information were essential. Scientists often claim that any ancient energy source could have provided the needed free energy to generate life. However, this claim is like a couple returning home from a long vacation to find that their children left their house in complete disarray, with clothes on the floor, unwashed dishes in the sink, and papers scattered across all of the desks. The couple recently heard an origin-of-life researcher claim that order could be produced for free from any generic source of energy. Based on this idea, they pour gasoline on their furniture and then set it on fire. They assume that the energy released from the fire will organize their house. However, they soon realize that unprocessed energy creates an even greater mess.

Based on this experience, the couple instead purchase a solar powered robot. The solar cells process the energy from the sun and convert it into useful work. But, to the couple’s disappointment the robot then starts throwing objects in all directions. They look more closely at the owner’s manual and realize they need to program the robot with instructions on how to perform the desired tasks to properly clean up the house.

In the same way, the simplest cell required machinery, such as some ancient equivalent to ATP synthase or chloroplasts, to process basic chemicals or sunlight. It also needed proteins with the proper information contained in their amino acid sequences to fold into other essential cellular structures, such as portals in the cell membrane. And, it needed proteins with the proper sequences to fold into enzymes to drive the metabolism. A key role of the enzymes is to  link reactions moving toward lower free energy (e.g. ATP → ADP + P) to reactions, such as combining amino acids into long chains, which go uphill. The energy from the former can then be used to drive the latter, since the net change in free energy is negative. The free-energy barrier is thus overcome.
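
The coupling arithmetic can be made concrete with commonly quoted textbook values (illustrative figures, not taken from the article): ATP hydrolysis releases about 30.5 kJ/mol under standard biochemical conditions, while forming a peptide bond costs very roughly +10 to +20 kJ/mol; taking +17 kJ/mol for illustration:

```latex
\Delta G_{\text{net}} \approx
\underbrace{(+17)}_{\text{peptide bond}} +
\underbrace{(-30.5)}_{\text{ATP} \rightarrow \text{ADP} + \text{P}_i}
\approx -13.5\ \text{kJ/mol} < 0
```

Because the coupled sum is negative, the paired process can proceed; but the pairing only happens when an enzyme physically links the two reactions. Side by side in free solution, they remain independent, and the uphill reaction stalls.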

However, the energy-processing machinery and information-rich proteins were still not enough. Proteins eventually break down, and they cannot self-replicate. Additional machinery was also needed to constantly produce new protein replacements. Also, the proteins’ sequence information had to have been stored in DNA using some genetic code, where each amino acid was represented by a series of three nucleotides known as a codon, in the same way English letters are represented in Morse code by dots and dashes. However, no identifiable physical connection exists between individual amino acids and their respective codons. In particular, no amino acid (e.g., valine) is much more strongly attracted to any particular codon (e.g., GTT) than to any other. Without such a physical connection, no purely materialistic process could plausibly explain how amino acid sequences were encoded into DNA. Therefore, the same information in proteins and in DNA must have been encoded separately.
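
The Morse-code analogy can be made concrete with a toy decoder (a hypothetical illustration; the table below lists only a handful of the 64 real codons, and none of this code comes from the article). The point it demonstrates is that the genetic code is a lookup, not a chemical necessity: nothing about the letters G-T-T predicts valine, just as nothing about dots and dashes predicts the alphabet.

```python
# Toy fragment of the standard genetic code (DNA coding-strand codons).
# Only a few of the 64 codons are included, for illustration.
CODON_TABLE = {
    "ATG": "Met", "GTT": "Val", "GTC": "Val", "GTA": "Val", "GTG": "Val",
    "GCT": "Ala", "AAA": "Lys", "TGG": "Trp",
    "TAA": "Stop", "TAG": "Stop", "TGA": "Stop",
}

def translate(dna: str) -> list[str]:
    """Decode a DNA string three letters (one codon) at a time,
    halting at a stop codon -- a pure table lookup."""
    out = []
    for i in range(0, len(dna) - len(dna) % 3, 3):
        aa = CODON_TABLE.get(dna[i:i + 3], "???")
        if aa == "Stop":
            break
        out.append(aa)
    return out

print(translate("ATGGTTGCTTAA"))  # the stop codon TAA halts translation
```

The mapping lives entirely in the table; swap the table's entries and the same machinery decodes a different "language," which is the sense in which the code is arbitrary with respect to the chemistry.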

In addition, the information in DNA is decoded back into proteins through the use of ribosomes, tRNAs, and special enzymes called aminoacyl tRNA synthetases (aaRS). The aaRSs bind the correct amino acids to the correct tRNAs associated with the correct codons, so these enzymes contain the decoding key in their 3D structures. All life uses this same process, so the first cell almost certainly functioned similarly. However, no possible connection could exist between the encoding and the decoding processes, since the aaRSs’ structures are a result of their amino acid sequences, which happen to be part of the information encoded in the DNA. Therefore, the decoding had to have developed independently of the encoding, but they had to use the same code. And, they had to originate at the same time, since each is useless without the other.


All of these facts indicate that the code and the sequence information in proteins/DNA preexisted the original cell. And, the only place that they could exist outside of a physical medium is in a mind, which points to design.

Monday 19 June 2017

Actually, it is rocket science.

Rocket Science in a Microbe Saves the Planet
Evolution News & Views

Anammox. It's a good term to learn. Wikipedia's first paragraph stresses its importance:

Anammox, an abbreviation for ANaerobic AMMonium OXidation, is a globally important microbial process of the nitrogen cycle. The bacteria mediating this process were identified in 1999, and at the time were a great surprise for the scientific community. It takes place in many natural environments... [Emphasis added.]

And now, the news. A team of European scientists found something very interesting about the bacteria. Publishing in Nature, the researchers tell how they have ascertained the structure of a molecular machine that performs chemical wizardry using rocket science.

Anaerobic ammonium oxidation (anammox) has a major role in the Earth's nitrogen cycle and is used in energy-efficient wastewater treatment. This bacterial process combines nitrite and ammonium to form dinitrogen (N2) gas, and has been estimated to synthesize up to 50% of the dinitrogen gas emitted into our atmosphere from the oceans. Strikingly, the anammox process relies on the highly unusual, extremely reactive intermediate hydrazine, a compound also used as a rocket fuel because of its high reducing power. So far, the enzymatic mechanism by which hydrazine is synthesized is unknown. Here we report the 2.7 Å resolution crystal structure, as well as biophysical and spectroscopic studies, of a hydrazine synthase multiprotein complex isolated from the anammox organism Kuenenia stuttgartiensis. The structure shows an elongated dimer of heterotrimers, each of which has two unique c-type haem-containing active sites, as well as an interaction point for a redox partner. Furthermore, a system of tunnels connects these active sites. The crystal structure implies a two-step mechanism for hydrazine synthesis: a three-electron reduction of nitric oxide to hydroxylamine at the active site of the γ-subunit and its subsequent condensation with ammonia, yielding hydrazine in the active centre of the α-subunit. Our results provide the first, to our knowledge, detailed structural insight into the mechanism of biological hydrazine synthesis, which is of major significance for our understanding of the conversion of nitrogenous compounds in nature.

Dinitrogen gas (N2) is a tough nut to crack. The atoms pair up with a triple bond, very difficult for humans to break without a lot of heat and pressure. Fortunately, this makes it very inert in the atmosphere, but life needs to get at it to make amino acids, muscles, organs, and more. Nitrogenase enzymes in some microbes, such as soil bacteria, are able to break apart the atoms at ambient temperatures (a secret agricultural chemists would love to learn). They then "fix" nitrogen into compounds such as ammonia (NH3) that can be utilized by plants and the animals that eat them. To have a nitrogen cycle, though, something has to return the N2 gas back to the atmosphere. That's the job of anammox bacteria.

Most nitrogen on earth occurs as gaseous N2 (nitrogen oxidation number 0). To make nitrogen available for biochemical reactions, the inert N2 has to be converted to ammonia (oxidation number −III), which can then be assimilated to produce organic nitrogen compounds, or be oxidized to nitrite (oxidation number +III) or nitrate (+V). The reduction of nitrite in turn results in the regeneration of N2, thus closing the biological nitrogen cycle.

Let's take a look at the enzyme that does this, the "hydrazine synthase multiprotein complex." Rocket fuel; imagine! No wonder the scientific community was surprised. The formula for hydrazine is N2H4. It's commonly used to power thrusters on spacecraft, such as the Cassini Saturn orbiter and the New Horizons probe that went by Pluto recently. Obviously, the anammox bacteria must handle this highly reactive compound with great care. Here's their overview of the reaction sequence. Notice how the bacterium gets some added benefit from its chemistry lab:

Our current understanding of the anammox reaction (equation (1)) is based on genomic, physiological and biochemical studies on the anammox bacterium K. stuttgartiensis. First, nitrite is reduced to nitric oxide (NO, equation (2)), which is then condensed with ammonium-derived ammonia (NH3) to yield hydrazine (N2H4, equation (3)). Hydrazine itself is a highly unusual metabolic intermediate, as it is extremely reactive and therefore toxic, and has a very low redox potential (E0′ = −750 mV). In the final step in the anammox process, it is oxidized to N2, yielding four electrons (equation (4)) that replenish those needed for nitrite reduction and hydrazine synthesis and are used to establish a proton-motive force across the membrane of the anammox organelle, the anammoxosome, driving ATP synthesis.
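
The numbered equations the quotation refers to are not reproduced in this excerpt. As commonly written in the anammox literature (my transcription, so treat the details with appropriate caution), they are:

```latex
\begin{aligned}
&(1)\quad \mathrm{NH_4^+ + NO_2^- \rightarrow N_2 + 2\,H_2O} \\
&(2)\quad \mathrm{NO_2^- + 2\,H^+ + e^- \rightarrow NO + H_2O} \\
&(3)\quad \mathrm{NO + NH_4^+ + 2\,H^+ + 3\,e^- \rightarrow N_2H_4 + H_2O} \\
&(4)\quad \mathrm{N_2H_4 \rightarrow N_2 + 4\,H^+ + 4\,e^-}
\end{aligned}
```

Summing (2) through (4) cancels the NO, the hydrazine, the protons, and the electrons, recovering the overall reaction (1); the four electrons released in (4) are the ones the quotation says replenish steps (2) and (3) and drive ATP synthesis.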

We've discussed ATP synthase before. It's that rotary engine, found in all life, that runs on proton-motive force. Here, we see that the electrons released by hydrazine oxidation help establish the proton-motive force that drives ATP synthesis. Cool!

What does the anammox enzyme look like? The authors say it has tunnels between the active sites. The "hydrazine synthase" module is "biochemically unique." Don't look for a common ancestor, in other words. It's part of a "tightly coupled multicomponent system," a fact they established when they lysed a cell and watched its reactivity plummet. Sounds like an irreducibly complex system.

The paper's diagrams of hydrazine synthase (HZS) show multiple protein domains joined in a "crescent-shaped dimer of heterotrimers" labeled alpha, beta, and gamma, constituted in pairs. The machine also contains multiple haem units (like those in hemoglobin, but unique) and "one zinc ion, as well as several calcium ions." Good thing those atoms are available in Earth's crust.

Part of the machine looks like a six-bladed propeller. Another part has seven blades. How does it work? Everything is coordinated to carefully transfer electrons around. This means that charge distributions are highly controlled for redox (reduction-oxidation) reactions (i.e., those that receive or donate electrons). The choice of adverbs shows that their eyes were lighting up at their first view of this amazing machine. Note how emotion seasons the jargon:

Intriguingly, our crystal structure revealed a tunnel connecting the haem αI and γI sites (Fig. 3a). This tunnel branches off towards the surface of the protein approximately halfway between the haem sites, making them accessible to substrates from the solvent. Indeed, binding studies show that haem αI is accessible to xenon (Extended Data Fig. 4c). Interestingly, in-between the α- and γ-subunits, the tunnel is approached by a 15-amino-acid-long loop of the β-subunit (β245-260), placing the conserved βGlu253, which binds a magnesium ion, into the tunnel.

We would need to make another animation to show the machine in action, but here's a brief description of how it works. The two active sites, connected by a tunnel, appear to work in sequence. HZS gets electrons from cytochrome c, a well-known electron carrier. The electrons enter the machine through one of the haem units, where a specifically placed gamma unit adds protons. A "cluster of buried polar residues" transfers protons to the active center of the gamma subunit. A molecule named hydroxylamine (NH2OH) diffuses into the active site, assisted by the beta subunit. It binds to another haem, which carefully positions it so that it is "bound in a tight, very hydrophobic pocket, so that there is little electrostatic shielding of the partial positive charge on the nitrogen." Ammonia then comes in to do a "nucleophilic attack" on the nitrogen of the molecule, yielding hydrazine. The hydrazine is then in position to escape via the tunnel branch leading to the surface. Once they determined this sequence, a light went on:

Interestingly, the proposed scheme is analogous to the Raschig process used in industrial hydrazine synthesis. There, ammonia is oxidized to chloramine (NH2Cl, nitrogen oxidation number −I, like in hydroxylamine), which then undergoes comproportionation with another molecule of ammonia to yield hydrazine.
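That comproportionation step is easy to verify with the same oxidation-number bookkeeping: one nitrogen steps down from −I to −II while the other steps up from −III to −II, and the electrons gained must equal the electrons lost. A minimal sketch, using standard oxidation numbers rather than anything quoted from the paper:

```python
# Sketch: electron bookkeeping for the Raschig comproportionation step
#   NH2Cl (N at -I) + NH3 (N at -III) -> N2H4 (each N at -II) + HCl
# Oxidation numbers are standard textbook values.

n_chloramine = -1   # N oxidation number in NH2Cl
n_ammonia    = -3   # N oxidation number in NH3
n_hydrazine  = -2   # per-nitrogen oxidation number in N2H4

# Comproportionation: the two nitrogens meet in the middle at -II.
electrons_gained = n_chloramine - n_hydrazine  # NH2Cl nitrogen is reduced: -I -> -II
electrons_lost   = n_hydrazine - n_ammonia     # NH3 nitrogen is oxidized: -III -> -II

print("electrons gained:", electrons_gained)   # 1
print("electrons lost:  ", electrons_lost)     # 1
assert electrons_gained == electrons_lost      # redox bookkeeping balances
```

The same one-electron-up, one-electron-down pattern holds whether the −I partner is chloramine (Raschig) or hydroxylamine (hydrazine synthase), which is the analogy the authors are drawing.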

(But that, we all know, is done by intelligent design.)


So here's something you can meditate on when you take in another breath. The nitrogen gas that comes into your lungs is a byproduct of an exquisitely designed, precision nanomachine that knows a lot about organic redox chemistry and safe handling of rocket fuel. This little machine, which also knows how to recycle and reuse all its parts in a sustainable "green" way, keeps the nitrogen in balance for the whole planet. Intriguing. Interesting. As Mr. Spock might say, fascinating.

Saturday 17 June 2017

Why the quest to reduce biology to chemistry is doomed.

The White Space in Evolutionary Thinking


Old CW chance and necessity did it/New CW gremlins did it

Evolution: The Fossils Speak, but Hardly with One Voice