
Sunday, 10 June 2018

From the ark?

New Paper in Evolution Journal: Humans and Animals Are (Mostly) the Same Age?
Andrew Jones

Could it be that animals were designed together with humans and instantiated at the same time too? Or did they get off the same spaceship? Or off the same boat?
An exciting new paper has been published in the journal Human Evolution, which you can read here. Popular science reports such as this one have incautiously claimed, “They found out that 9 out of 10 animal species on the planet came to being at the same time as humans did some 100,000 to 200,000 years ago.”

But to be more precise, what they actually found is that the most recent common ancestor of those species seems to have lived during that time period. 

This could indicate intelligent design, an event where species came into existence for the first time. But it could also indicate something else, such as a population crash (or crashes) that affected almost all life on Earth. Either way, if the paper is right, it would be a shock to established scientific expectations.

“This conclusion is very surprising,” co-author David Thaler of the University of Basel is quoted as saying, “and I fought against it as hard as I could.” His co-author is fellow geneticist Mark Stoeckle of Rockefeller University in New York.


Here is how the scientific reasoning works:

Nucleotide diversity π is the average number of differences per site between two aligned nucleotide sequences. The differences are assumed to be due to mutations accumulated on both sequences since they diverged. Therefore the nucleotide diversity π should be twice the mutation rate multiplied by the time to common ancestor of those sequences: π = 2μT, where μ is the mutation rate per generation and T is the time since the common ancestor in generations.

If ordinary steady neutral evolution has been happening, then the time to common ancestor T is expected to be about N generations, where N is the effective population size. Therefore the nucleotide diversity is expected to be about 2μN.
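To make the arithmetic concrete, here is a minimal sketch in Python. The mutation rate, population sizes, and observed diversity below are illustrative placeholders of plausible magnitude, not values taken from the paper.

```python
# Minimal sketch of the reasoning above; all numbers are illustrative
# placeholders, not estimates from the paper.

mu = 1e-8  # assumed mutation rate per site per generation

# Under steady neutral evolution, expected diversity is pi ~ 2*mu*N,
# so pi should track the effective population size N:
for N in (10_000, 100_000, 10_000_000):
    print(f"N = {N:>10,}  ->  expected pi = {2 * mu * N:.6f}")

# Conversely, an observed pi implies a time to common ancestor of
# T = pi / (2*mu) generations:
pi_observed = 0.002  # assumed observed diversity per site
T = pi_observed / (2 * mu)
print(f"observed pi = {pi_observed}  ->  T = {T:,.0f} generations")
```

With these assumed numbers, T comes out to 100,000 generations; at a generation time of one to two years, that is on the order of the 100,000-200,000-year window the paper reports. The loop illustrates the puzzle: if N really spans several orders of magnitude across the animal kingdom, observed π should too.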

The mutation rate μ shows some variation, but N is believed to vary widely across the animal kingdom. Therefore the nucleotide diversity that we observe should vary widely too. 


But, according to these authors, who use data from the BOLD database, the nucleotide diversity does not vary greatly. Instead, they find that for 90% of all species, the observed levels of π suggest that T falls within the last 100,000-200,000 years.

I am intrigued, but to be honest, I don’t quite know what to make of it just yet, and don’t want to jump to any conclusions. This kind of inference is complicated, and the paper does not explain where the authors sourced their estimates of mutation rate and effective population size.


Moreover, studies of different kinds of sequences can seem to tell widely different stories. In an earlier paper from 2014, the authors point out that the idea of a single global population crash is “almost a Noah’s Ark hypothesis,” though “This appears unlikely.” They speculate instead that “perhaps long-term climate cycles might cause widespread periodic bottlenecks.”

In any case, one thing is clear: reconstructing the past is a complicated business and it is still full of surprises. There may be even bigger surprises in store.

Scientism's attempted shotgun wedding?

In Defense of Theistic Evolution, Denis Lamoureux Rewrites History
Jonathan Witt | @JonathanRWitt


The review article’s title, “Intelligent Design Theory: The God of the Gaps Rooted in Concordism,” deftly signals Lamoureux’s two-pronged strategy: First, paint intelligent design as a fallacious God-of-the-gaps argument (when in fact it’s an argument to the best explanation based on what we know).

And second: Motive monger — in this case, by attributing the anthology’s conclusions to a religious motivation while giving short shrift to the book’s hundreds of pages of scientific evidence and argument.


Those criticisms of ID are low-hanging fruit for the writers at Evolution News, but here I want to focus on another problem with the review.

Scientism’s Grand Progress Narrative

At one point early on, Lamoureux confidently asserts the following:

First, according to a God-of-the-gaps approach to divine action, there are “gaps” in the continuum of natural processes, and these “discontinuities” in nature indicate places where God has miraculously intervened in the world. …

If there are gaps in the continuum of natural processes, then science will identify them, and over time these gaps will “widen” with further research. That is, as scientists explore a true gap in nature where God has intervened, evidence will increase and demonstrate that there are no natural mechanisms to account for the origin or operation of a physical feature.

There is an indisputable pattern in the history of science. The God-of-the-gaps understanding of divine action has repeatedly failed. Instead of the gaps in nature getting wider with the advance of science, they have always been closed or filled by the ever-growing body of scientific information. In other words, history reveals that these purported gaps have always been gaps in knowledge  and not actual gaps in nature  indicative of the intervening hand of the Lord.

The lesser problem here is his tendentious use of the word “gaps.” The language suggests that it’s somehow a failure of God for the universe to be something less than a deist’s fantasy — a grand pool shot from the Big Bang without any need for subsequent creative involvement. That’s an aesthetic presupposition, and a manifestly suspect aesthetic presupposition.

That’s the lesser problem with the quote above, a problem to delve into more fully at another time. Here I want to highlight the more glaring problem: Lamoureux’s assertion of “an indisputable pattern in the history of science.” The alleged historical pattern is manifestly untrue.

It was given formal structure by the 19th-century French philosopher Auguste Comte, but in common parlance the claim runs something like this:

Humans used to attribute practically every mysterious force in nature to the doings of the gods. They stuffed a god into any and every gap in their knowledge of the natural world, shrugged, and moved on. Since then, the number of gaps has been shrinking without pause, filled with purely material explanations for everything from lightning bolts to romantic attraction. The moral of this grand story: always hold out for the purely material explanation, even when the evidence seems to point in the other direction. Materialism, in other words, is our manifest destiny; get used to it colonizing every cause in the cosmos.

This grand progress narrative is regularly employed with great confidence, but it’s contradicted by key developments in the physical and life sciences.

For example, through much of the 19th century, the scientific consensus was that microscopic life was relatively simple, little more than tiny sacs of Jell-O. The scientific community also accepted the idea of spontaneous generation — that creatures sprang to life spontaneously out of things like dew and rotting meat. Taken together, these pieces of conventional scientific wisdom suggested that the origin of the first living cell deep in the past was hardly worthy of the term “mystery” — a material explanation seemed obvious.

But in 1861 Louis Pasteur conducted a series of experiments that discredited the notion of spontaneous generation. And in the next century, scientists began amassing evidence of just how complex even the simplest cell is. Today we know that cells are micro-miniaturized factories of astonishing sophistication and that, even more to the point, such sophistication is essential for them to be able to survive and reproduce. Origin-of-life researchers concede that no adequate material explanation has been found for the origin of the cell.

So, we have come to learn that spontaneous generation was a fantasy. We have discovered that even the simplest cells are highly sophisticated and information-rich organisms. And the only cause we have ever witnessed actually producing novel information is intelligent design. Thus, modern scientific observations have collapsed a long-standing material explanation for the origin of life and simultaneously strengthened the competing design explanation. This development runs directly counter to scientism’s grand narrative.

A common rebuttal is that inferring design in such cases amounts to “giving up on science,” and that science should always hold out for a purely material explanation. But this is mere question begging. What if the first living cell really was the work of intelligent design? Being open to that possibility and following the evidence isn’t giving up on science but on scientism, a dogma resting on a progress narrative flatly contradicted by the historical record.

Evidence from Cosmology

Cosmology and physics provide another counter-example to the grand narrative Lamoureux asserts. In Darwin’s time, conventional scientific wisdom held that the universe was eternal. Given this, it was broadly assumed that there could hardly be any mystery about its origin: it simply had always existed. But developments in physics and astronomy have overturned the easy embrace of an eternal cosmos, and scientists are now in broad agreement that our universe had a beginning. What many thought had never happened and so required no explanation — the origin of the universe — suddenly cried out for an explanation.

Near the same time that scientists were realizing this, there was a growing awareness of what is now widely known in cosmology as the fine-tuning problem. This is the curious fact that the various laws and constants of nature appear finely calibrated to allow for life in the universe — calibrated to such a precise degree that even committed materialists have abandoned blunt appeals to chance.

To explain away this problem, the disciples of scientism have now resorted to saying there must be countless other universes, with our universe simply being one of the lucky ones with the right configuration to allow intelligent life to evolve.

Not every physicist has played along. Several, including some Nobel laureates, have assessed the growing body of evidence for fine-tuning and pointed to intelligent design as the most reasonable explanation. Physicist and Nobel laureate Charles Townes put it this way:

Intelligent design, as one sees it from a scientific point of view, seems to be quite real. This is a very special universe: it’s remarkable that it came out just this way. If the laws of physics weren’t just the way they are, we couldn’t be here at all. The sun couldn’t be there, the laws of gravity and nuclear laws and magnetic theory, quantum mechanics, and so on have to be just the way they are for us to be here.

Scientism’s grand progress narrative holds that as we learn more and more about the world, purely natural or material explanations inevitably will arise and grow stronger, while design arguments will inevitably collapse under the weight of new discoveries. But the opposite has happened in cosmology and origin-of-life studies.

Despite this, Lamoureux and other critics of intelligent design go right on recycling their grand narrative as if it were the whole truth and nothing but the truth. It is not. It ignores truths both historical and scientific.

The human body: irreducibly complex and undeniably designed.

The Designed Body: Irreducible Complexity on Steroids = Exquisite Engineering
Steve Laufmann

Life thrives. It flourishes almost everywhere we look, even in remarkably inhospitable places. Perhaps because life is so common, it’s easy to lose sight of how tenuous it is. Life depends on a delicate balance of forces. Tip that balance and death is inevitable.

Howard Glicksman’s profound 81-part series, The Designed Body, concluded last September here at Evolution News. Dr. Glicksman offers uncommon insights into the inner workings of the human body (i.e., this thing I’m trapped inside of). As a hospice physician, he understands what it takes for a human body to survive, and how various dysfunctions can foul up the works and cause death. He makes these workings easy to understand and offers important lessons for readers willing to work their way through the medical bits. I would like to add here my own reflections on the subject.

The series by Dr. Glicksman discusses 40 interrelated chemical and physiological parameters that the human body must carefully balance to sustain life. The body deploys amazing, interconnected solutions to manage them.

The parameters are: (1) oxygen, (2) carbon dioxide, (3) hydrogen ion, (4) water, (5) sodium, (6) potassium, (7) glucose, (8) calcium, (9) iron, (10) ammonia, (11) albumin transport, (12) proteins, (13) insulin, (14) glucagon, (15) thyroid hormone, (16) cortisol, (17) testosterone, (18) estrogen, (19) aldosterone, (20) parathormone, (21) digestive enzymes, (22) bile, (23) red blood cells, (24) white blood cells, (25) platelets, (26) clotting factors, (27) anti-clotting factors, (28) complement, (29) antibodies, (30) temperature, (31) heart rate, (32) respiratory rate, (33) blood pressure, (34) lung volume, (35) airway velocity, (36) cardiac output, (37) liver function, (38) kidney function, (39) hypothalamic function, (40) nerve impulse velocity.

I drew seven insights from the series.

Life can only exist when swimming upstream against uncharitable natural forces.

To survive, a human body must constantly struggle against powerful and unrelenting natural forces. When the body succumbs to any one of these forces, it reaches equilibrium with the environment — a condition commonly known as “death.”

A complex body plan places enormous demands on survival.

Single-celled organisms can only survive in a suitable substrate — where the organism (cell) is in direct contact with the environment, from which it must draw all the raw materials it needs to survive, and into which it can shunt its waste products without being poisoned by them.

In contrast, the vast majority of the cells in the human body are physically isolated from the environment, so survival depends on other means to deliver the needed raw materials and slough off any toxic waste materials for each one of its trillions of cells. Controlling so many factors is complicated work, and takes a lot of systems.

Goldilocks or death.

For each of these 40 chemical and physiological factors, the body must maintain its function within a narrow range of possible values. In effect, the body must do just the right things in just the right places at just the right times, in just the right quantities and at just the right speeds. Survival depends on maintaining balance within these tight tolerances.

This is an example of the Goldilocks Principle — everything must be just right for life to be possible. As Glicksman says, “Real numbers have real consequences.” When the numbers cannot be maintained at the right levels, the body dies.

As an example, let’s look at what’s needed for cellular respiration:

The cell is the basic building block of the human body. Each cell must successfully fight diffusion and osmosis in order to maintain its internal volume and required chemical content. This takes energy, which must come from somewhere.

To meet its energy needs, the cell breaks down glucose according to a simple chemical formula: C6H12O6 + 6O2 → 6CO2 + 6H2O. The glucose molecule and six oxygen molecules are converted into six molecules of carbon dioxide and six molecules of water. These are all stable molecules, so it takes some doing to make this work. In a complex 3-stage process, the cell uses 20+ specialized enzymes and carrier molecules (each made up of 300+ specifically ordered amino acids) to break down the chemical bonds of the glucose molecule, thereby releasing energy which the cell uses to operate its machinery, including the critical sodium-potassium pumps that control the cell’s content and volume.
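As a quick sanity check on the stoichiometry, this small Python snippet confirms that the equation balances atom for atom:

```python
# Check that C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O is balanced.
from collections import Counter

glucose = Counter({"C": 6, "H": 12, "O": 6})
o2      = Counter({"O": 2})
co2     = Counter({"C": 1, "O": 2})
h2o     = Counter({"H": 2, "O": 1})

def scale(formula, n):
    """Multiply every atom count in a formula by n."""
    return Counter({atom: count * n for atom, count in formula.items()})

reactants = glucose + scale(o2, 6)
products  = scale(co2, 6) + scale(h2o, 6)

print(reactants)  # Counter({'O': 18, 'H': 12, 'C': 6})
print(products)   # Counter({'O': 18, 'H': 12, 'C': 6})
assert reactants == products, "equation is not balanced"
```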

Obviously, a supply of oxygen is essential. But this presents a few problems for the body. While glucose can be stored in the body for later use, oxygen can’t, so it must be supplied continuously, and in the right quantities to meet current demand.

Without enough oxygen, the cell runs out of energy, its sodium-potassium pumps fail, the cell’s internal volume and chemical content can’t be maintained, and the cell dies. When sufficient cells within an organ die, the functions provided by that organ cease, causing downstream functions to fail, and so on. Without corrective action, this leads to a chain reaction of failure. In just a few minutes a lack of oxygen will kill the entire body.

On the other hand, when the body gets enough oxygen, the process generates carbon dioxide, which, if not removed, elevates the cell’s hydrogen ion level, which leads to cell death.

So the cell must efficiently “gate” oxygen into the cell and carbon dioxide out of the cell through the cell membrane. Given that the cell is surrounded by a few trillion other cells, each of which is independently maintaining the same cell content and volume functions, the body must manage substantial overall flows of oxygen (in) and carbon dioxide (out).

This requires an efficient transport subsystem (e.g., a circulatory system), complete with a pump (heart), transport medium (blood), and means to exchange oxygen and carbon dioxide with the air in the environment (lungs).

But this is not so easy. Blood’s fluid component is mainly water, and oxygen doesn’t dissolve well in water. So the body adds a complex iron-based protein called hemoglobin to the blood, which binds to the oxygen so it can be transported efficiently throughout the body. To make this work, though, the body needs still other (sub)systems to acquire, store, and process just enough iron (too much is toxic), and then process it into hemoglobin.

And there’s a separate process and subsystems to deliver glucose to the cells. Glicksman gives a lot more detail, but you get the idea: a lot of moving parts are required.

Survival depends on specialization, integration, and coordination.

Solving these problems in practice gets tricky.

To achieve the large variety of functions needed for survival, the body uses around 200 different, specialized types of cells. To achieve the requisite functions for each body subsystem, these cells must be arrayed in just the right locations with respect to their relevant subsystem(s).

Only when each subsystem is properly arrayed and functioning can the body survive. But solutions at the subsystem level tend to present new problems to overcome, and these typically rely on other autonomous subsystems, which are composed of other specialized cells arranged in just the right ways to achieve their function. All of these must coordinate with each other.

In the example above, the circulatory subsystem transports raw materials to those trillions of individual cells. But inertia, friction, and gravity present challenges to circulation, so the system needs additional control mechanisms, involving cardiac output, blood pressure, and blood flow, to ensure that circulation is effective throughout the body.

A human body must operate effectively on at least three different levels: (1) the cells, (2) the subsystems, and (3) the whole body. The challenge of crafting effective mechanisms across all three levels to address all 40 survival parameters is mind-boggling, and the body has somehow acquired ingenious solutions.

Every one of the body’s control systems is irreducibly complex.

For each of the 40 survival factors, the human body requires at least one control system. Every control system, whether biological or human-engineered, must include some means of performing each of the following functions:

Sensors, to measure that which is being controlled. There must be enough sensors, in the right locations (to sense that which is being controlled), and with suitable sensitivity to the needed tolerances.
Data integrators, to combine data from many sensors.
Control logic, to determine what adjustments are needed to achieve the desired effects. In some cases the logic may drive changes across multiple subsystems. In all cases, the logic must be correct to achieve proper function.
Effectors, to modify that which is being controlled.
Signaling infrastructure, to carry signals from the sensors to the data integrator(s) and/or controller, and from the controller to the effectors. Signals must carry the correct information, be directed to the right components, and arrive in a timely fashion.
Effectors must be capable of some or all of the following functions (depending on the factor being controlled):

Receptors, to receive signals regarding adjustments that must be made.
An organ, tissue, or other body subsystem capable of affecting the factor being controlled.
Harvesters, to obtain any needed chemicals from the environment — in the right amounts, at the right times — and convert them as needed for a particular use (e.g., iron into hemoglobin).
Garbage collection, to expel unneeded chemical byproducts, which may be toxic in sufficient quantities.
Each control system must be dynamic enough to maintain the tight tolerances required in the timeframes needed. For example, it just wouldn’t do for the oxygen control system to take ten minutes to increase oxygen levels, if the body will die in four minutes without more oxygen.
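To make the pattern concrete, here is a minimal Python sketch of such a feedback loop. The components mirror the list above (sensors, data integrator, control logic, effectors); the setpoint, gain, and units are hypothetical illustrations, not physiological values.

```python
# Minimal sketch of the generic control loop described above:
# sensors -> data integrator -> control logic -> effector.
# All components and numbers are hypothetical illustrations.

def sensor_readings(true_level, n_sensors=3):
    """Sensors: each reports the quantity being controlled."""
    return [true_level] * n_sensors

def integrate(readings):
    """Data integrator: combine many sensor readings into one estimate."""
    return sum(readings) / len(readings)

def control_logic(estimate, setpoint=100.0, gain=0.5):
    """Control logic: decide how big an adjustment the effectors should make."""
    return gain * (setpoint - estimate)

def effector(level, adjustment):
    """Effector: modify the controlled quantity (e.g., breathing raising O2)."""
    return level + adjustment

level = 80.0  # starting value of the controlled factor (arbitrary units)
for step in range(6):
    estimate   = integrate(sensor_readings(level))
    adjustment = control_logic(estimate)
    level      = effector(level, adjustment)
    print(f"step {step}: level = {level:.2f}")
# The level converges toward the setpoint. Remove any one component
# (sensing, integration, logic, signaling, or effectors) and regulation
# fails -- which is the sense of "irreducible" used here.
```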

Every one of the body’s control systems uses hundreds to millions of individual parts. This is irreducible complexity on steroids.

The body is a coherent mesh of interdependent systems.

None of the control systems Glicksman describes can achieve its functions alone — each relies on other body subsystems for help. To achieve this, the control mechanisms must work together toward an outcome that none can “see” or control end to end. Together, they form a mesh of interlocking control systems.

The human body is a coherent assembly of interdependent subsystems. Each subsystem is a coherent system in its own right, made up of an assembly of lower level components. Each lower level component is itself an assembly of even lower level components. We can follow this composition pattern of assembled components all the way down to proteins, amino acids, and the DNA code.

And, lest this be too easy, functional coherence requires process coherence across the body’s lifecycle, from fertilization to maturity and reproduction. Process coherence further constrains the body’s systems, and makes survival even more difficult.

Coherence requires all the right parts in all the right places doing all the right things at all the right times in all the right quantities at all the right speeds — together, as a whole. This means the correct relative locations, sizes, shapes, orientations, capacities, and dynamics, with the correct fabrication specifications, assembly instructions, and operating processes. To coordinate its internal activities, the body integrates its parts and communicates using multiple types of signaling (e.g., point-to-point, multi-point, broadcast). To maintain function, it uses still other mechanisms for error correction, failure prevention, threat detection, and defense, throughout its many levels of systems and subsystems.

The body’s parts are functionally interdependent, yet operationally autonomous. Aside from being extraordinarily hard to achieve with so many moving parts, this is what an engineer would call elegant design. The architecture of the human body is exquisite.

The whole is greater than the sum of the parts.

For the human body, though, the whole is much more than the sum of its parts. This is exactly what we see with all complex engineered systems. In fact, this is a defining characteristic of engineered systems.

With humans, the whole is also quite remarkable in its own right. It’s almost as if the body was designed specifically to enable the mind: thought, language, love, nobility, self-sacrifice, art, creativity, industry, and my favorite enigma (for Darwinists): music.

The human body enables these things, but does not determine them. As near as we can tell, no combination of the body’s substrate — information, machinery, or operations — alone can achieve these things.

Yet it’s exactly these things that make human life worth living. These are essential to our human experience. Human life involves so much more than merely being alive.

This simple observation flies in the face of Darwinian expectations. How can bottom-up, random processes possibly achieve such exquisitely engineered outcomes — outcomes that deliver a life experience well beyond the chemistry and physics of the body?

Such questions have enormous implications for worldviews, and for the ways that humans live their lives. I’ll look at some of those in a further post tomorrow.

RNA v. Darwinism's simple beginning

No Mere Bike Messenger, RNA Code Surpassing DNA in Complexity
Evolution News

The concept of a “DNA Code” has a long pedigree in genetics. But what about the other nucleic acids — the RNAs that use ribose instead of deoxyribose? Are they just simple conveyors of the library of genetic information in DNA, a humble bicycle messenger of the cell? Or do they have their own code? Last month, Nature published a Technology Feature by Kelly Rae Chi with an intriguing title, “The RNA code comes into focus.”

Chi begins with the m6am RNA modification we first mentioned in January, but doesn’t end there. Modifications to RNA bases are turning up all over, and their functions are just beginning to be understood. A feel for the importance of these new findings can be had by following the money:

In the past few years, He’s group has discovered evidence suggesting that RNA modifications provide a way to regulate transcripts involved in broad cellular roles, such as switching on cell-differentiation programs. Researchers need better technologies to explore these links; and, in October 2016, the US National Institutes of Health awarded He and Pan a 5-year, US$10.6-million grant to establish a centre to develop methods for identifying and mapping RNA modifications. [Emphasis added.]

On March 2, Japan’s RIKEN lab issued a news item stating, “Improved gene expression atlas shows that many human long non-coding RNAs may actually be functional.” RIKEN’s FANTOM Consortium is constructing a map of human non-coding RNAs. The latest findings call to mind the surprises with DNA under ENCODE, but this time with RNA under FANTOM:

The atlas, which contains 27,919 long non-coding RNAs, summarizes for the first time their expression patterns across the major human cell types and tissues. By intersecting this atlas with genomic and genetic data, their results suggest that 19,175 of these RNAs may be functional, hinting that there could be as many — or even more — functional non-coding RNAs than the approximately 20,000 protein-coding genes in the human genome.

The atlas, published by Nature on March 9, expands into the RNA sphere from findings in the ENCODE and GENCODE databases. As with ENCODE, scientists so far are cataloging expression profiles without necessarily understanding actual functions. Presumably, though, cells have reasons for expressing these long non-coding RNAs (lncRNAs). The search for the actual functions is poised to bear fruit, as it did with ENCODE.

On the same day (March 9), Nature published another article finding “More uses for genomic junk.” Karen Adelman and Emily Egan point out that previous studies may have missed the functions of “junk DNA” by overlooking the key:

In addition to protein-coding messenger RNAs, our cells produce a plethora of diverse non-coding RNA molecules. Many of these are generated from sequences that are distant from genes, and include regulatory DNA sequences called enhancers. Transcription factors bound at enhancers are thought to regulate gene expression by looping towards genes in 3D space. The potential functions of non-coding enhancer RNAs (eRNAs) in this process have been avidly debated, but there has been a tendency to write them off as accidentally transcribed by-products of enhancer–gene interactions. After all, how could short, unstable, heterogeneous RNAs have a role in gene regulation? Writing in Cell, Bose et al. reveal that these eRNAs can indeed be functional, when produced in proximity to the enzyme CBP.

And what does the enzyme CBP do?

One transcriptional co-activator is the acetyltransferase enzyme CBP, which, along with its close relative p300, associates with DNA in enhancer regions, where it adds acetyl groups to histones and transcription factors. This acetylation promotes the recruitment of numerous transcriptional co-activators and chromatin-remodelling proteins that have acetyl-binding regions, along with the RNA-synthesizing enzyme polymerase II (Pol II).

In other words, CBP (a protein enzyme) and enhancer RNAs need to be together to work. The implication is clear: far from being accidental by-products, eRNAs are functional. They are involved in making genes accessible to the transcription machinery and in regulating their expression. Transcription, long thought to be the engine, is just part of a much more complex factory.

A model is emerging in which transcription is itself an early step in enhancer activation. Pol II is recruited by transcription factors and maintains open chromatin. Once the enzyme begins to transcribe, the nascent eRNA it produces stimulates co-activator proteins such as CBP in the region in a sequence- and stability-independent manner. The activities of these proteins promote the recruitment of more transcription factors, Pol II and chromatin-remodelling proteins, enabling full enhancer activation. In addition, Pol II itself can serve as a vehicle for attracting chromatin-modifying enzymes that spread more molecular marks associated with chromatin activation across the transcribed region. In this manner, transcription of enhancers can generate a positive-feedback loop that stabilizes both enhancer activity and gene-expression profiles.

Overall, the current study fundamentally changes the discourse around eRNA functions, by demonstrating that these RNAs can have major, locus-specific roles in enhancer activity that do not require a particular RNA-sequence context or abundance. Furthermore, by providing strong evidence that CBP interacts with eRNAs as they are being transcribed, this study highlights the value of investigating nascent RNAs for understanding enhancer activity.

Speaking of 3D space, researchers at the Max Delbrück Center for Molecular Medicine (MDC) have been producing a 3D map of the genome, underscoring the complex dance of DNA, RNA, and proteins:

Cells face a daunting task. They have to neatly pack a several meter-long thread of genetic material into a nucleus that measures only five micrometers across. This origami creates spatial interactions between genes and their switches, which can affect human health and disease. Now, an international team of scientists has devised a powerful new technique that ‘maps’ this three-dimensional geography of the entire genome. Their paper is published in Nature.

The paper explains the Genome Architecture Mapping (GAM) technique they created and how it elucidates the interactions between genes and their enhancers.

GAM also reveals an abundance of three-way contacts across the genome, especially between regions that are highly transcribed or contain super-enhancers, providing a level of insight into genome architecture that, owing to the technical limitations of current technologies, has previously remained unattainable. Furthermore, GAM highlights a role for gene-expression-specific contacts in organizing the genome in mammalian nuclei.

Isn’t that a worthy function? Keeping the genome organized is not a role that ‘genetic junk’ is likely to succeed at.

Another clue to function in RNA comes from a finding announced by Science Daily, “Start codons in DNA may be more numerous than previously thought.” When DNA needs to be translated into messenger RNA (mRNA), it was thought that a ‘start codon’ identified the start of the gene, and that there were only seven of these in the genetic code. But nobody had ever checked, this article says. Scientists from the National Institute of Standards and Technology found, to their surprise, that there are “at least 47 possible start codons, each of which can instruct a cell to begin protein synthesis.” Indeed, “It could be that all codons could be start codons.” The possibilities this opens up for expanding the complexity of RNA transcripts can only be imagined at this point.
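For scale, a codon is three letters drawn from four bases, so there are 4^3 = 64 possibilities in total; the short Python sketch below just makes that count explicit. Which 47 of the 64 can initiate translation is the study’s empirical finding, not something the code derives.

```python
# A codon is 3 letters over the 4 RNA bases, so there are 4**3 = 64 in all.
from itertools import product

bases = "ACGU"
codons = ["".join(triplet) for triplet in product(bases, repeat=3)]

print(len(codons))  # 64
print(codons[:4])   # ['AAA', 'AAC', 'AAG', 'AAU']

# The NIST result: at least 47 of these 64 may act as start codons,
# far more than the handful previously assumed.
```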

We’ll end with one more example of the revolution in RNA functions. Scientists at Indiana University and colleagues found an example of “Hybrid incompatibility caused by an epiallele.” The open-access study, published in PNAS, “demonstrates a case of epigenetic gene silencing rather than pseudogene creation by mutation” in the lab plant Arabidopsis. Here’s a case where the RNA tail seems to wag the DNA dog:

Multicopy transgenes frequently become methylated and silenced, particularly when inserted into the genome as inverted repeats that can give rise to double-stranded RNAs. Such double-stranded RNAs can be diced into small interfering RNAs (siRNAs) that guide the cytosine methylation of homologous DNA sequences, a process known as RNA-directed DNA methylation (RdDM)…. This interesting case study has shown that naturally occurring RdDM, involving a new paralog that inactivates the ancestral paralog in trans, can be a cause of hybrid incompatibility.

Bypassing genetic mutations and natural selection, this “previously unrecognized epigenetic phenomenon” might help explain cases of apparently rapid speciation by a non-Darwinian process. We’ll leave that possibility for others to investigate.

In short, RNA has graduated from servant to master. The numerous RNA transcripts floating around in the nucleus, once thought to be genetic “noise,” may actually be the performance, like virtuosos in an orchestra bringing static notes written in DNA to life. This huge shift in thinking appears to be deeply problematic for neo-Darwinism. It sounds like a symphony of intelligent design.

See more at: https://www.evolutionnews.org/2017/03/no-mere-bike-messenger-rna-code-surpassing-dna-in-complexity/