
Saturday 16 June 2018

No mere machine?

Design in Living Things Goes Far Beyond Machines
Jonathan Wells

Seventeenth-century French philosopher RenĂ© Descartes conceived of living things as complex machines, a concept now known as the “machine metaphor.” In 1998, Bruce Alberts (who was then president of the U.S. National Academy of Sciences) wrote that “the entire cell can be viewed as a factory that contains an elaborate network of interlocking assembly lines, each of which is composed of a set of large protein machines.”1

In Salvo 20, Casey Luskin wrote about how such machines pose a problem for unguided evolution and provide evidence for intelligent design (ID).2 Luskin focused on three molecular machines in particular: 

ATP synthase, which operates like a rotary engine, recharges molecules of adenosine triphosphate (ATP), which in turn provide energy for just about every function in a living cell. 
Kinesin, which runs along microscopic fibers called microtubules, transports cargoes throughout the cell.
The ribosome, which is a combination of proteins and RNAs, translates messenger RNA (which is transcribed from DNA) into proteins. 
These are only a few of the many hundreds of molecular machines that have been identified in living cells.


Luskin argued that complex molecular machines, which function only after all of their parts are in place, could not have been produced by unguided evolution but only by a goal-directed intelligence. In other words, molecular machines provide evidence for intelligent design. 

Sometimes the Metaphor Backfires

Charles Darwin called his theory of evolution “descent with modification,” and he insisted that the process was undirected. Some people have tried to use the machine metaphor to illustrate evolution, but their efforts have backfired. In 1990, biologist Tim Berra published a book titled Evolution and the Myth of Creationism that included photographs of some automobiles. Berra wrote, “if you compare a 1953 and a 1954 Corvette, side by side, then a 1954 and a 1955 model, and so on, the descent with modification is overwhelmingly obvious.”3 Since automobiles are engineered, however, the series of Corvettes actually illustrated design rather than undirected evolution. In 1997 Phillip E. Johnson, a critic of Darwinism and advocate of intelligent design, called this “Berra’s blunder.”4

In 2014, three engineers published an article in the Journal of Applied Physics comparing the evolution of airplanes to the evolution of animals. According to the authors, “Evolution means a flow organization (design) that changes over time,” and they argued that animals and “the human-and-machine species” (airplanes) “evolved in the same way.”5 But once again, the comparison of machines and living things implied design rather than undirected evolution.

According to pro-evolution philosophers Massimo Pigliucci and Maarten Boudry, the machine metaphor should be abandoned altogether. In 2010 they wrote: “Creationists and their modern heirs of the Intelligent Design movement have been eager to exploit mechanical metaphors for their own purposes.” So “if we want to keep Intelligent Design out of the classroom, not only do we have to exclude the ‘theory’ from the biology curriculum, but we also have to be weary [sic] of using scientific metaphors that bolster design-like misconceptions about living systems.” Pigliucci and Boudry concluded that since machine metaphors “have been grist to the mill of ID creationism, fostering design intuitions and other misconceptions about living systems, we think it is time to dispense with them altogether.”6

Organized from the Inside Out

But there are better reasons for us to be wary of the machine metaphor than wanting to keep intelligent design out of the classroom. Eighteenth-century German philosopher Immanuel Kant pointed out that a machine is organized by an external agent from the outside in, while a living thing organizes itself from the inside out. Kant wrote that a living thing "is then not a mere machine, for that has merely moving power, but it possesses in itself formative power of a self-propagating kind which it communicates to its materials though they have it not of themselves; it organizes them."7

According to philosopher of biology Daniel Nicholson, “despite some interesting similarities, organisms and machines are fundamentally different kinds of systems . . . the former are intrinsically purposive whereas the latter are extrinsically purposive.” Thus, the machine metaphor “fails to provide an appropriate theoretical understanding of what living systems are.”8

Biologist (and intelligent design advocate) Ann Gauger has written that “the machine metaphor fails,” in part, because living organisms are “causally circular beings.”9 Not only do new cells require existing cells, but also many biosynthetic pathways require the very molecule that is being synthesized. For example, the biosynthesis of the amino acid cysteine requires an enzyme that contains cysteine.10 Without cysteine, a cell cannot make cysteine. Similarly, ATP synthase consists of more than a half-dozen protein subunits, each of which requires ATP for its biosynthesis.11 In other words, ATP is needed to make the molecular motor that makes ATP.
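Gauger's point about causal circularity can be made concrete by writing the dependencies down explicitly. The following is a minimal sketch in Python: the node names simply paraphrase the examples above, and the cycle-finder is a generic depth-first search, not anything taken from the cited papers.

```python
# Model the biosynthetic dependencies described above as a directed graph
# and search for cycles. Edges paraphrase the text: cysteine synthesis
# needs a cysteine-containing enzyme, and ATP synthase subunits need ATP.

requires = {
    "cysteine": ["cysteine-containing enzyme"],
    "cysteine-containing enzyme": ["cysteine"],
    "ATP": ["ATP synthase"],
    "ATP synthase": ["ATP"],
}

def find_cycle(node, graph, path=()):
    """Depth-first search for a dependency cycle reachable from node."""
    if node in path:
        return path[path.index(node):] + (node,)
    for dep in graph.get(node, []):
        cycle = find_cycle(dep, graph, path + (node,))
        if cycle:
            return cycle
    return None

for start in ("cysteine", "ATP"):
    print(" -> ".join(find_cycle(start, requires)))
# cysteine -> cysteine-containing enzyme -> cysteine
# ATP -> ATP synthase -> ATP
```

Each product sits on a cycle with its own prerequisites, which is exactly what "causally circular" means here: there is no node from which the rest can be bootstrapped one step at a time.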

So the machine metaphor is inadequate as a description of living organisms. Then what about the inference to design from molecular machines? The inference is still justified, because the machine metaphor is appropriate for isolated structures such as ATP synthase, kinesin, and the ribosome. Each of these consists of several parts that are precisely arranged by a cell to utilize energy to perform a specific function (which is how “machine” is usually defined). None of them can perform their functions if parts are missing or arranged incorrectly. They point to intelligent design just as much as machines made by humans.

Awe-Inspiring Design

An organism, however, in contrast to an isolated structure, rearranges its parts over time. It imposes organization on its constituent materials, and that organization changes throughout its life cycle.

To see how remarkable this is, imagine a machine familiar to most of us: a laptop computer. If a laptop computer were a plant or animal, it would start out as a protocomputer consisting of perhaps a few transistors, a little memory with some software, and a battery on a small circuit board. Then it would obtain materials from its surroundings to fabricate other components, and it would make its circuit board larger and more complex. Along the way, it would find ways to recharge its own battery. It would also write more programs. After reaching maturity, the laptop would run its programs by itself — imagine keys on the keyboard going up and down as though pressed by some unseen finger. If components were damaged, the computer could repair or replace them while continuing to operate. Eventually, the computer would fabricate one or more protocomputers, each capable of developing into other laptops just like it.

A lot of design goes into laptop computers. How much more design would have to go into making a laptop computer that could do all the things listed above? No one knows. But such a computer would certainly require more design, not less. And the design would be radically different from human design, because after the origin of the protocomputer the design would be intrinsic rather than extrinsic.

So the inference to design from molecular machines is robust, but it’s only the beginning. There is design in living things that far transcends the machine metaphor — and it should inspire awe.

Notes:
  1. Bruce Alberts, “The Cell as a Collection of Protein Machines: Preparing the Next Generation of Molecular Biologists,” Cell 92:291 (1998).
  2. Casey Luskin, “Biomechanics: Isn’t the Intricacy of Ubiquitous Molecular Machines Evidence for Design?” Salvo 20 (2012), 52–54.
  3. Tim Berra, Evolution and the Myth of Creationism (Stanford University Press, 1990), 117–119.
  4. Phillip E. Johnson, Defeating Darwinism by Opening Minds (InterVarsity Press, 1997), 62–63.
  5. Adrian Bejan et al., "The evolution of airplanes," Journal of Applied Physics 116:044901 (2014).
  6. Massimo Pigliucci and Maarten Boudry, “Why Machine-Information Metaphors are Bad for Science and Science Education,” Science & Education (June 11, 2010).
  7. Immanuel Kant, The Critique of Judgement (Kritik der Urteilskraft), trans. J. H. Bernard (Macmillan, 1914), §65.
  8. Daniel Nicholson, “The Machine Conception of the Organism in Development and Evolution: A Critical Analysis,” Studies in History and Philosophy of Biological and Biomedical Sciences 48B (2014), 162–174.
  9. Ann Gauger, “Life, Purpose, Mind: Where the Machine Metaphor Fails,” Evolution News & Views (June 1, 2011).
  10. Ruma Banerjee et al., “Reaction mechanism and regulation of cystathionine beta-synthase,” Biochimica et Biophysica Acta 1647 (2003), 30–35. Alexander Schiffer et al., “Structure of the dissimilatory sulfite reductase from the hyperthermophilic archaeon Archaeoglobus fulgidus,” Journal of Molecular Biology 379 (2008), 1063–1074.
  11. Robert K. Nakamoto et al., “The Rotary Mechanism of the ATP Synthase,” Archives of Biochemistry and Biophysics 476 (2008), 43–50.

Yet more on education over indoctrination.

Yale President Calls for Objectivity in Science Education

A new article in Scientific American argues that "We Should Teach All Students, in Every Discipline, to Think Like Scientists." The author, Peter Salovey, is notable. He is President of Yale University, where he also teaches psychology. He might not welcome my saying so, but his emphasis on thinking critically and examining evidence is spot-on.

Salovey wants Superhero Science. The picture with the article is a graphic of a female scientist standing on top of a building with her coat flowing behind her like a cape. His hope comes through in his first sentence: “If knowledge is power, scientists should easily be able to influence the behavior of others and world events.” 

The emphasis on “power” and “influencing behavior” sounds like an invitation to scientism, or worse. This innovation, for one, could easily be abused in the service of political and other agendas:

Students at Yale, the California Institute of Technology and the University of Waterloo, for instance, developed an Internet browser plug-in that helps users distinguish bias in their news feeds.

Yet the article also calls for better science education and education in general. The language is excellent. Salovey notes: 

Educating global citizens is one of the most important charges to universities, and the best way we can transcend ideology is to teach our students, regardless of their majors, to think like scientists. From American history to urban studies, we have an obligation to challenge them to be inquisitive about the world, to weigh the quality and objectivity of data presented to them, and to change their minds when confronted with contrary evidence.

Likewise, STEM majors’ college experience must be integrated into a broader model of liberal education to prepare them to think critically and imaginatively about the world and to understand different viewpoints.

He concludes:

Knowledge is power but only if individuals are able to analyze and compare information against their personal beliefs, are willing to champion data-driven decision making over ideology, and have access to a wealth of research findings to inform policy discussions and decisions.

Yes! Students learning to “weigh the quality and objectivity of data presented to them, and to change their minds when confronted with contrary evidence” as well as to “think critically and imaginatively about the world and to understand different viewpoints” — what a wonderful vision! Sounds familiar, too.
If applied objectively, this approach would enhance evolution education along with all parts of the curriculum! What do you say, Dr. Salovey? 

Question all (less one?)

Inquiry-Based Science Education -- on Everything but Evolution
Sarah Chaffee 

Unfortunately, it's typical for advocates of inquiry-based science teaching to apply their good ideas to everything but evolution. Case in point: here is physician Danielle Teller writing over at Quartz to suggest that teaching science as a collection of facts, rather than a process, has contributed to a lack of "science savvy." Yet in the same article she denounces the public's doubts on evolution -- as if Darwinian theory were one "fact" that they ought to have simply swallowed whole.

Dr. Teller laments that "[a]bout a third of Americans think there is no sound evidence for the existence of evolution." She wrongly lumps doubt over the scientific accuracy of evolution with vaccine and climate skepticism, and she is certainly mistaken in dismissing questions about Darwinian theory as a sign of scientific illiteracy. However, Teller is correct in describing science as a process of inquiry rather than the mere gathering of data.

She notes:

Most importantly, if we want future generations to be truly scientifically literate, we should teach our children that science is not a collection of immutable facts but a method for temporarily setting aside some of our ubiquitous human frailties, our biases and irrationality, our longing to confirm our most comforting beliefs, our mental laziness.

One reason science should be taught as a process, Teller says, is because facts change. Or rather, the body of facts we know expands. This is well recognized. Indeed, one of the precursors to modern national science standards, Project 2061, focused on scientific inquiry rather than facts. Developed in the 1980s, it aimed to prepare a scientifically knowledgeable population in time for the return of Halley's comet in 2061. The project's director, Jo Ellen Roseman, told Ars Technica: "While we had no idea what the world would be like, we could guarantee that it would be shaped by science, mathematics, and technology."

But macroevolutionary theory has run into trouble precisely because it is a 19th-century idea in a world where the body of known facts has broadened dramatically. Consider Darwin's explanation of the evolution of the eye. He thought that one could go from a light-sensitive spot to our complex human eyes. But that was understandable. In his era, no one recognized the eye's intricacies as we do now. Or take the case of so-called vestigial organs. To cite two examples, much of what we thought we knew about the appendix and tonsils as "vestigial" by-products of evolution has been falsified as scientists find important immune and other functions.

Yet neo-Darwinism's defenders hang on because humans have a tendency to reject ideas that challenge preconceived notions. As Thomas Kuhn wrote in The Structure of Scientific Revolutions, when faced with an anomaly, a theory's defenders "will devise numerous articulations and ad hoc modifications of their theory in order to eliminate any apparent conflict."

The result in the context of origins science is a sort of "fundamentalist" evolutionary thinking that rejects counterevidence and dismisses any suggestion that evolution might have weaknesses. Teller writes of her own experience in learning to follow the evidence instead of what she had been taught, noting, "my own personal coda is that I never rejected out-of-hand a theory that challenged my preconceived notions again." Why the exception, then, for neo-Darwinism?

Science, finally, should be taught as a process because the interpretation of data requires critical thinking. Science education theorists agree. In a joint issue on the theme of reform in STEM (Science, Technology, Engineering, Math) education, Nature and Scientific American noted, "[S]tudents gain a much deeper understanding of science when they actively grapple with questions than when they passively listen to answers."

Nevertheless, in public school science classrooms, evolution is often presented in a one-sided, dogmatic manner. At Discovery Institute, we support education that promotes critical thinking by teaching the evolution controversy. What would that look like in the classroom? Teachers could engage their biology students on such questions as: Do Galápagos finches provide evidence for macro-, or only micro-evolution? Are vertebrate embryos really similar in their earliest stages? How likely is it that the Miller-Urey experiment represents conditions present on early earth? Such analysis makes students confront the data before them and look at it from multiple angles, considering a variety of possible interpretations. That, after all, is what the scientific method is all about.

In fact, you don't have to be a student, a teacher, or a scientist to engage in scientific inquiry. Thinking critically is for adults, all adults, too. Teller notes:

It's not possible for everyone -- or anyone -- to be sufficiently well trained in science to analyze data from multiple fields and come up with sound, independent interpretations. I spent decades in medical research, but I will never understand particle physics, and I've forgotten almost everything I ever learned about inorganic chemistry. It is possible, however, to learn enough about the powers and limitations of the scientific method to intelligently determine which claims made by scientists are likely to be true and which deserve skepticism. [Emphasis added.]

Yes, exactly! It is possible. Although Teller goes on to echo the media's normal neo-Darwinian rhetoric -- extol inquiry, but affirm the validity of dogmatic belief in evolution -- what she says there could not be more right.

Sunday 10 June 2018

From the ark?

New Paper in Evolution Journal: Humans and Animals Are (Mostly) the Same Age?
Andrew Jones

Could it be that animals were designed together with humans and instantiated at the same time too? Or did they get off the same spaceship? Or off the same boat?

An exciting new paper in the journal Human Evolution has been published, which you can read here. Popular science reports such as this have incautiously claimed, "They found out that 9 out of 10 animal species on the planet came to being at the same time as humans did some 100,000 to 200,000 years ago."

But to be more precise, what they actually found is that the most recent common ancestor of those species seems to have lived during that time period. 

This could indicate intelligent design, an event where species came into existence for the first time. But it could also indicate something else, such as a population crash (or crashes) that affected almost all life on Earth. Either way, if the paper is right, it would be a shock to established scientific expectations.

“This conclusion is very surprising,” co-author David Thaler of the University of Basel is quoted as saying, “and I fought against it as hard as I could.” His co-author is fellow geneticist Mark Stoeckle of Rockefeller University in New York.


Here is how the scientific reasoning works:

Nucleotide diversity π is the average number of differences per site between two aligned nucleotide sequences. The differences are assumed to be due to mutations accumulated on both sequences since they diverged. Therefore the nucleotide diversity π should be twice the mutation rate multiplied by the time to common ancestor of those sequences: π = 2 μ T, where μ is the mutation rate per generation and T is the time since the common ancestor in generations.

If ordinary steady neutral evolution has been happening, then the time to common ancestor is expected to be about N, the effective population size. Therefore the nucleotide diversity is expected to be about 2 ÎĽ N. 

The mutation rate ÎĽ shows some variation, but N is believed to vary widely across the animal kingdom. Therefore the nucleotide diversity that we observe should vary widely too. 


But according to these authors, using data from the BOLD database, the nucleotide diversity does not vary greatly. Instead, they find that for 90% of all species, the observed levels of π suggest that T falls within the last 100,000-200,000 years.
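To make the arithmetic concrete, here is a minimal sketch of the inference, with purely illustrative values for π, μ, and generation time (as noted below, the paper's own inputs are not given in this post):

```python
# Solve pi = 2 * mu * T for T, the time to the common ancestor.
# All numbers below are illustrative placeholders, not values from the paper.

def time_to_common_ancestor(pi, mu):
    """Generations since the common ancestor, from pi = 2 * mu * T."""
    return pi / (2 * mu)

pi = 0.001  # observed nucleotide diversity (illustrative)
mu = 1e-7   # mutations per site per generation (illustrative)

T = time_to_common_ancestor(pi, mu)
print(f"T = {T:,.0f} generations")           # T = 5,000 generations
print(f"~{T * 20:,.0f} years at 20 yr/gen")  # ~100,000 years
```

The point of the sketch is only to show the shape of the inference: an observed π and an assumed μ jointly pin down T, so uniformly low diversity across species implies recent common ancestors across species.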

I am intrigued, but to be honest, I don’t quite know what to make of it just yet, and don’t want to jump to any conclusions. This kind of inference is complicated; the paper does not explain where they sourced estimates of mutation rate and effective population size. 


Moreover, studies of different kinds of sequences can seem to tell widely different stories. In an earlier paper from 2014, the authors point out that the idea of a single global population crash is "almost a Noah's Ark hypothesis," though "this appears unlikely." They speculate instead that "perhaps long-term climate cycles might cause widespread periodic bottlenecks."

In any case, one thing is clear: reconstructing the past is a complicated business and it is still full of surprises. There may be even bigger surprises in store.

Scientism's attempted shotgun wedding?

In Defense of Theistic Evolution, Denis Lamoureux Rewrites History
Jonathan Witt | @JonathanRWitt


The review article's title, "Intelligent Design Theory: The God of the Gaps Rooted in Concordism," deftly signals Lamoureux's two-pronged strategy: First, paint intelligent design as a fallacious God-of-the-gaps argument (when in fact it's an argument to the best explanation based on what we know).

And second: Motive monger — in this case, by attributing the anthology’s conclusions to a religious motivation while giving short shrift to the book’s hundreds of pages of scientific evidence and argument.


Those criticisms of ID are low-hanging fruit for the writers at Evolution News, but here I want to focus on another problem with the review.

Scientism’s Grand Progress Narrative

At one point early on, Lamoureux confidently asserts the following:

First, according to a God-of-the-gaps approach to divine action, there are "gaps" in the continuum of natural processes, and these "discontinuities" in nature indicate places where God has miraculously intervened in the world. …

If there are gaps in the continuum of natural processes, then science will identify them, and over time these gaps will “widen” with further research. That is, as scientists explore a true gap in nature where God has intervened, evidence will increase and demonstrate that there are no natural mechanisms to account for the origin or operation of a physical feature.

There is an indisputable pattern in the history of science. The God-of-the-gaps understanding of divine action has repeatedly failed. Instead of the gaps in nature getting wider with the advance of science, they have always been closed or filled by the ever-growing body of scientific information. In other words, history reveals that these purported gaps have always been gaps in knowledge and not actual gaps in nature indicative of the intervening hand of the Lord.

The lesser problem here is his tendentious use of the word “gaps.” The language suggests that it’s somehow a failure of God for the universe to be something less than a deist’s fantasy — a grand pool shot from the Big Bang without any need for subsequent creative involvement. That’s an aesthetic presupposition, and a manifestly suspect aesthetic presupposition.

That's the lesser problem with the quote above, a problem to delve into more fully at another time. Here I want to highlight the more glaring problem: Lamoureux's assertion of "an indisputable pattern in the history of science." The alleged historical pattern is manifestly untrue.

It was given formal structure by the 19th-century French philosopher Auguste Comte, but in common parlance the claim runs something like this:

Humans used to attribute practically every mysterious force in nature to the doings of the gods. They stuffed a god into any and every gap in their knowledge of the natural world, shrugged, and moved on. Since then, the number of gaps has been shrinking without pause, filled with purely material explanations for everything from lightning bolts to romantic attraction. The moral of this grand story: always hold out for the purely material explanation, even when the evidence seems to point in the other direction. Materialism, in other words, is our manifest destiny; get used to it colonizing every cause in the cosmos.

This grand progress narrative is regularly employed with great confidence, but it’s contradicted by key developments in the physical and life sciences.

For example, through much of the 19th century, the scientific consensus was that microscopic life was relatively simple, little more than microscopic sacs of Jell-O. The scientific community also accepted the idea of spontaneous generation — that creatures sprang to life spontaneously out of things like dew and rotting meat. Taken together, these pieces of conventional scientific wisdom suggested that the origin of the first living cell deep in the past was hardly worthy of the term "mystery" — a material explanation seemed obvious.

But in 1861 Louis Pasteur conducted a series of experiments that discredited the notion of spontaneous generation. And in the next century, scientists began amassing evidence of just how complex even the simplest cell is. Today we know that cells are micro-miniaturized factories of astonishing sophistication and that, even more to the point, such sophistication is essential for them to be able to survive and reproduce. Origin-of-life researchers concede that no adequate material explanation has been found for the origin of the cell.

So, we have come to learn that spontaneous generation was a fantasy. We have discovered that even the simplest cells are highly sophisticated and information-rich organisms. And the only cause we have ever witnessed actually producing novel information is intelligent design. Thus, modern scientific observations have collapsed a long-standing material explanation for the origin of life and simultaneously strengthened the competing design explanation. This development runs directly counter to scientism’s grand narrative.

A common rebuttal is that inferring design in such cases amounts to “giving up on science,” and that science should always hold out for a purely material explanation. But this is mere question begging. What if the first living cell really was the work of intelligent design? Being open to that possibility and following the evidence isn’t giving up on science but on scientism, a dogma resting on a progress narrative flatly contradicted by the historical record.

Evidence from Cosmology

Cosmology and physics provide another counter-example to the grand narrative Lamoureux asserts. In Darwin’s time, conventional scientific wisdom held that the universe was eternal. Given this, it was broadly assumed that there could hardly be any mystery about its origin: it simply had always existed. But developments in physics and astronomy have overturned the easy embrace of an eternal cosmos, and scientists are now in broad agreement that our universe had a beginning. What many thought had never happened and so required no explanation — the origin of the universe — suddenly cried out for an explanation.

Near the same time that scientists were realizing this, there was a growing awareness of what is now widely known in cosmology as the fine-tuning problem. This is the curious fact that the various laws and constants of nature appear finely calibrated to allow for life in the universe — calibrated to such a precise degree that even committed materialists have abandoned blunt appeals to chance.

To explain away this problem, the disciples of scientism have now resorted to saying there must be countless other universes, with our universe simply being one of the lucky ones with the right configuration to allow intelligent life to evolve.

Not every physicist has played along. Several, including some Nobel laureates, have assessed the growing body of evidence for fine-tuning and pointed to intelligent design as the most reasonable explanation. Physicist and Nobel laureate Charles Townes put it this way:

Intelligent design, as one sees it from a scientific point of view, seems to be quite real. This is a very special universe: it’s remarkable that it came out just this way. If the laws of physics weren’t just the way they are, we couldn’t be here at all. The sun couldn’t be there, the laws of gravity and nuclear laws and magnetic theory, quantum mechanics, and so on have to be just the way they are for us to be here.

Scientism’s grand progress narrative holds that as we learn more and more about the world, purely natural or material explanations inevitably will arise and grow stronger, while design arguments will inevitably collapse under the weight of new discoveries. But the opposite has happened in cosmology and origin-of-life studies.

Despite this, Lamoureux and other critics of intelligent design go right on recycling their grand narrative as if it were the whole truth and nothing but the truth. It is not. It ignores truths both historical and scientific.

The human body: irreducibly complex and undeniably designed.

The Designed Body: Irreducible Complexity on Steroids = Exquisite Engineering
Steve Laufmann

Life thrives. It flourishes almost everywhere we look, even in remarkably inhospitable places. Perhaps because life is so common, it’s easy to lose sight of how tenuous it is. Life depends on a delicate balance of forces. Tip that balance and death is inevitable.

Howard Glicksman’s profound 81-part series, The Designed Body, concluded last September here at Evolution News. Dr. Glicksman offers uncommon insights into the inner workings of the human body (i.e., this thing I’m trapped inside of). As a hospice physician, he understands what it takes for a human body to survive, and how various dysfunctions can foul up the works and cause death. He makes these easy to understand, and offers important lessons for readers willing to work their way through the medical bits. I would like to add here my own reflections on the subject.

The series by Dr. Glicksman discusses 40 interrelated chemical and physiological parameters that the human body must carefully balance to sustain life. The body deploys amazing, interconnected solutions to manage them.

The parameters are: (1) oxygen, (2) carbon dioxide, (3) hydrogen ion, (4) water, (5) sodium, (6) potassium, (7) glucose, (8) calcium, (9) iron, (10) ammonia, (11) albumin transport, (12) proteins, (13) insulin, (14) glucagon, (15) thyroid hormone, (16) cortisol, (17) testosterone, (18) estrogen, (19) aldosterone, (20) parathormone, (21) digestive enzymes, (22) bile, (23) red blood cells, (24) white blood cells, (25) platelets, (26) clotting factors, (27) anti-clotting factors, (28) complement, (29) antibodies, (30) temperature, (31) heart rate, (32) respiratory rate, (33) blood pressure, (34) lung volume, (35) airway velocity, (36) cardiac output, (37) liver function, (38) kidney function, (39) hypothalamic function, (40) nerve impulse velocity.

I drew seven insights from the series.

Life can only exist when swimming upstream against uncharitable natural forces.

To survive, a human body must constantly struggle against powerful and unrelenting natural forces. When the body succumbs to any one of these forces, it reaches equilibrium with the environment — a condition commonly known as “death.”

A complex body plan places enormous demands on survival.

Single-celled organisms can only survive in a suitable substrate — where the organism (cell) is in direct contact with the environment, from which it must draw all the raw materials it needs to survive, and into which it can shunt its waste products without being poisoned by them.

In contrast, the vast majority of the cells in the human body are physically isolated from the environment, so survival depends on other means to deliver the needed raw materials and slough off any toxic waste materials for each one of its trillions of cells. Controlling so many factors is complicated work, and takes a lot of systems.

Goldilocks or death.

For each of these 40 chemical and physiological factors, the body must maintain its function within a narrow range of possible values. In effect, the body must do just the right things in just the right places at just the right times, in just the right quantities and at just the right speeds. Survival depends on maintaining balance within these tight tolerances.

This is an example of the Goldilocks Principle — everything must be just right for life to be possible. As Glicksman says, “Real numbers have real consequences.” When the numbers cannot be maintained at the right levels, the body dies.
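As a toy illustration of that principle, each factor can be pictured as a number that must sit inside a narrow band. The ranges below are illustrative placeholders rather than clinical reference values, and the real body tracks roughly forty such factors at once.

```python
# "Goldilocks or death" as range checks: every factor must stay in its band.
# Ranges are illustrative placeholders, not clinical reference values.

TOLERANCES = {
    "glucose (mg/dL)":    (70.0, 110.0),
    "potassium (mmol/L)": (3.5, 5.0),
    "temperature (C)":    (36.5, 37.5),
}

def in_balance(readings):
    """True only if every factor sits inside its narrow band."""
    return all(lo <= readings[name] <= hi
               for name, (lo, hi) in TOLERANCES.items())

print(in_balance({"glucose (mg/dL)": 90.0,
                  "potassium (mmol/L)": 4.2,
                  "temperature (C)": 37.0}))  # True
print(in_balance({"glucose (mg/dL)": 90.0,
                  "potassium (mmol/L)": 7.0,  # one factor out of band
                  "temperature (C)": 37.0}))  # False
```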

As an example, let’s look at what’s needed for cellular respiration:

The cell is the basic building block of the human body. Each cell must successfully fight diffusion and osmosis in order to maintain its internal volume and required chemical content. This takes energy, which must come from somewhere.

To meet its energy needs, the cell breaks down glucose according to a simple chemical formula: C6H12O6 + 6O2 = 6CO2 + 6H2O. The glucose molecule and six oxygen molecules are converted into six molecules of carbon dioxide and six molecules of water. These are all stable molecules, so it takes some doing to make this work. In a complex 3-stage process, the cell uses 20+ specialized enzymes and carrier molecules (each made up of 300+ specifically-ordered amino acids) to break down the chemical bonds of the glucose molecule, thereby releasing energy which the cell uses to operate its machinery, including the critical sodium-potassium pumps that control the cell's content and volume.
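As a quick sanity check on that equation, the sketch below counts atoms on each side. It verifies only that the formula balances; it says nothing about the enzymatic machinery the text describes.

```python
# Verify that C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O balances, atom by atom.
from collections import Counter

def atoms(terms):
    """Total atom counts for a list of (coefficient, molecule) terms."""
    total = Counter()
    for n, molecule in terms:
        for element, count in molecule.items():
            total[element] += n * count
    return total

glucose = {"C": 6, "H": 12, "O": 6}
o2, co2, h2o = {"O": 2}, {"C": 1, "O": 2}, {"H": 2, "O": 1}

left = atoms([(1, glucose), (6, o2)])
right = atoms([(6, co2), (6, h2o)])
print(left == right, dict(left))  # True {'C': 6, 'H': 12, 'O': 18}
```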

Obviously, a supply of oxygen is essential. But this presents a few problems for the body. While glucose can be stored in the body for later use, oxygen can’t, so it must be supplied continuously, and in the right quantities to meet current demand.

Without enough oxygen, the cell runs out of energy, its sodium-potassium pumps fail, the cell’s internal volume and chemical content can’t be maintained, and the cell dies. When sufficient cells within an organ die, the functions provided by that organ cease, causing downstream functions to fail, and so on. Without corrective action, this leads to a chain reaction of failure. In just a few minutes a lack of oxygen will kill the entire body.

On the other hand, when the body gets enough oxygen, the process generates carbon dioxide, which, if not removed, elevates the cell’s hydrogen ion level, which leads to cell death.

So the cell must efficiently "gate" oxygen into the cell and carbon dioxide out of the cell through the cell membrane. Given that the cell is surrounded by a few trillion other cells, each of which is independently maintaining the same cell content and volume functions, the body must manage overall substantial flows of oxygen (in) and carbon dioxide (out).

This requires an efficient transport subsystem (e.g., a circulatory system), complete with a pump (heart), transport medium (blood), and means to exchange oxygen and carbon dioxide with the air in the environment (lungs).

But this is not so easy. Blood’s fluid component is mainly water, and oxygen doesn’t dissolve well in water. So the body adds a complex iron-based protein called hemoglobin to the blood, which binds to the oxygen so it can be transported efficiently throughout the body. To make this work, though, the body needs still other (sub)systems to acquire, store, and process just enough iron (too much is toxic), and then process it into hemoglobin.

And there’s a separate process and subsystems to deliver glucose to the cells. Glicksman gives a lot more detail, but you get the idea: a lot of moving parts are required.

Survival depends on specialization, integration, and coordination.

Solving these problems in practice gets tricky.

To achieve the large variety of functions needed for survival, the body uses around 200 different, specialized types of cells. To achieve the requisite functions for each body subsystem, these cells must be arrayed in just the right locations with respect to their relevant subsystem(s).

Only when each subsystem is properly arrayed and functioning can the body survive. But solutions at the subsystem level tend to present new problems to overcome, and these typically rely on other autonomous subsystems, which are composed of other specialized cells that are arranged in just the right ways to achieve their function. All of these must coordinate with each other.

In the example above, the circulatory subsystem transports raw materials to those trillions of individual cells. But inertia, friction, and gravity present challenges to circulation, so the system needs additional control mechanisms, involving cardiac output, blood pressure, and blood flow, to ensure that circulation is effective throughout the body.

A human body must operate effectively on at least three different levels: (1) the cells, (2) the subsystems, and (3) the whole body. The challenge to craft effective mechanisms across all three levels to address all 40 survival parameters is mind-boggling, and the body has somehow acquired ingenious solutions.

Every one of the body’s control systems is irreducibly complex.

For each of the 40 survival factors, the human body requires at least one control system. Every control system, whether in a biological or a human-engineered system, must include some means to perform each of the following functions:

Sensors, to measure that which is being controlled. There must be enough sensors, in the right locations (to sense that which is being controlled), and with suitable sensitivity to the needed tolerances.
Data integrators, to combine data from many sensors.
Control logic, to determine what adjustments are needed to achieve the desired effects. In some cases the logic may drive changes across multiple subsystems. In all cases, the logic must be correct to achieve proper function.
Effectors, to modify that which is being controlled.
Signaling infrastructure, to carry signals from the sensors to the data integrator(s) and/or controller, and from the controller to the effectors. Signals must carry the correct information, be directed to the right components, and arrive in a timely fashion.
Effectors must be capable of some or all of the following functions (depending on the factor being controlled):

Receptors, to receive signals regarding adjustments that must be made.
An organ, tissue, or other body subsystem capable of affecting the factor being controlled.
Harvesters, to obtain any needed chemicals from the environment — in the right amounts, at the right times — and convert them as needed for a particular use (e.g., iron into hemoglobin).
Garbage collection, to expel unneeded chemical byproducts, which may be toxic in sufficient quantities.
Each control system must be dynamic enough to maintain the tight tolerances required in the timeframes needed. For example, it just wouldn’t do for the oxygen control system to take ten minutes to increase oxygen levels, if the body will die in four minutes without more oxygen.

Every one of the body’s control systems uses hundreds to millions of individual parts. This is irreducible complexity on steroids.
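To see why each listed component is load-bearing, consider a minimal feedback-loop sketch: a generic proportional controller with an invented variable, set point, and gain, not a model of any actual physiological system. Deleting the sensor, the integrator, the control logic, the signaling (here just function calls), or the effector leaves nothing that regulates.

```python
# A generic feedback loop with the components named above: sensors,
# a data integrator, control logic, and an effector. All numbers are
# invented for illustration, not physiological data.
from dataclasses import dataclass

@dataclass
class ControlledVariable:
    value: float  # e.g., a blood oxygen level, in arbitrary units

def sensor(v, noise=0.0):
    """Sensor: measure the controlled variable."""
    return v.value + noise

def integrate(readings):
    """Data integrator: combine readings from many sensors."""
    return sum(readings) / len(readings)

def control_logic(estimate, set_point, gain=0.5):
    """Control logic: a simple proportional rule for the needed adjustment."""
    return gain * (set_point - estimate)

def effector(v, adjustment):
    """Effector: modify the controlled variable (e.g., raise breathing rate)."""
    v.value += adjustment

oxygen = ControlledVariable(value=88.0)
SET_POINT = 97.0
for _ in range(10):
    estimate = integrate([sensor(oxygen), sensor(oxygen, noise=0.1)])
    effector(oxygen, control_logic(estimate, SET_POINT))
print(f"settled near {oxygen.value:.1f}")  # approaches the 97.0 set point
```

Even this toy loop must get its tolerances, gain, and timing right, which is Glicksman's point writ small.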

The body is a coherent mesh of interdependent systems.

None of the control systems Glicksman describes can achieve its functions alone — each relies on other body subsystems for help. To achieve this, the control mechanisms must work together toward an outcome that none can “see” or control end to end. Together, they form a mesh of interlocking control systems.

The human body is a coherent assembly of interdependent subsystems. Each subsystem is a coherent system in its own right, made up of an assembly of lower level components. Each lower level component is itself an assembly of even lower level components. We can follow this composition pattern of assembled components all the way down to proteins, amino acids, and the DNA code.

And, lest this be too easy, functional coherence requires process coherence across the body’s lifecycle, from fertilization to maturity and reproduction. Process coherence further constrains the body’s systems, and makes survival even more difficult.

Coherence requires all the right parts in all the right places doing all the right things at all the right times in all the right quantities at all the right speeds — together, as a whole. This means the correct relative locations, sizes, shapes, orientations, capacities, and dynamics, with the correct fabrication specifications, assembly instructions, and operating processes. To coordinate its internal activities, the body integrates its parts and communicates using multiple types of signaling (e.g., point-to-point, multi-point, broadcast). To maintain function, it uses still other mechanisms for error correction, failure prevention, threat detection, and defense, throughout its many levels of systems and subsystems.

The body’s parts are functionally interdependent, yet operationally autonomous. Aside from being extraordinarily hard to achieve with so many moving parts, this is what an engineer would call elegant design. The architecture of the human body is exquisite.

The whole is greater than the sum of the parts.

For the human body, though, the whole is much more than the sum of its parts. This is exactly what we see with complex engineered systems; indeed, it is one of their defining characteristics.

With humans, the whole is also quite remarkable in its own right. It’s almost as if the body was designed specifically to enable the mind: thought, language, love, nobility, self-sacrifice, art, creativity, industry, and my favorite enigma (for Darwinists): music.

The human body enables these things, but does not determine them. As near as we can tell, no combination of the body’s substrate — information, machinery, or operations — alone can achieve these things.

Yet it’s exactly these things that make human life worth living. These are essential to our human experience. Human life involves so much more than merely being alive.

This simple observation flies in the face of Darwinian expectations. How can bottom-up, random processes possibly achieve such exquisitely engineered outcomes — outcomes that deliver a life experience well beyond the chemistry and physics of the body?

Such questions have enormous implications for worldviews, and for the ways that humans live their lives. I’ll look at some of those in a further post tomorrow.

RNA v. Darwinism's simple beginning

No Mere Bike Messenger, RNA Code Surpassing DNA in Complexity
Evolution News

The concept of a “DNA Code” has a long pedigree in genetics. But what about the other nucleic acids — the RNAs that use ribose instead of deoxyribose? Are they just simple conveyors of the library of genetic information in DNA, a humble bicycle messenger of the cell? Or do they have their own code? Last month, Nature published a Technology Feature by Kelly Rae Chi with an intriguing title, “The RNA code comes into focus.”

Chi begins with the m6am RNA modification we first mentioned in January, but doesn’t end there. Modifications to RNA bases are turning up all over, and their functions are just beginning to be understood. A feel for the importance of these new findings can be had by following the money:

In the past few years, He’s group has discovered evidence suggesting that RNA modifications provide a way to regulate transcripts involved in broad cellular roles, such as switching on cell-differentiation programs. Researchers need better technologies to explore these links; and, in October 2016, the US National Institutes of Health awarded He and Pan a 5-year, US$10.6-million grant to establish a centre to develop methods for identifying and mapping RNA modifications. [Emphasis added.]

On March 2, Japan's RIKEN lab issued a news item stating, "Improved gene expression atlas shows that many human long non-coding RNAs may actually be functional." RIKEN's FANTOM Consortium is constructing a map of human non-coding RNAs. The latest findings call to mind the surprises with DNA under ENCODE, but this time with RNA under FANTOM:

The atlas, which contains 27,919 long non-coding RNAs, summarizes for the first time their expression patterns across the major human cell types and tissues. By intersecting this atlas with genomic and genetic data, their results suggest that 19,175 of these RNAs may be functional, hinting that there could be as many — or even more — functional non-coding RNAs than the approximately 20,000 protein-coding genes in the human genome.

The atlas, published by Nature on March 9, expands into the RNA sphere from findings in the ENCODE and GENCODE databases. As with ENCODE, scientists so far are cataloging expression profiles without necessarily understanding actual functions. Presumably, though, cells have reasons for expressing these long non-coding RNAs (lncRNAs). The search for the actual functions is poised to bear fruit, as it did with ENCODE.

On the same day (March 9), Nature published another article finding "More uses for genomic junk." Karen Adelman and Emily Egan point out that previous studies may have missed the functions of "junk DNA" by overlooking a key player:

In addition to protein-coding messenger RNAs, our cells produce a plethora of diverse non-coding RNA molecules. Many of these are generated from sequences that are distant from genes, and include regulatory DNA sequences called enhancers. Transcription factors bound at enhancers are thought to regulate gene expression by looping towards genes in 3D space. The potential functions of non-coding enhancer RNAs (eRNAs) in this process have been avidly debated, but there has been a tendency to write them off as accidentally transcribed by-products of enhancer–gene interactions. After all, how could short, unstable, heterogeneous RNAs have a role in gene regulation? Writing in Cell, Bose et al. reveal that these eRNAs can indeed be functional, when produced in proximity to the enzyme CBP.

And what does the enzyme CBP do?

One transcriptional co-activator is the acetyltransferase enzyme CBP, which, along with its close relative p300, associates with DNA in enhancer regions, where it adds acetyl groups to histones and transcription factors. This acetylation promotes the recruitment of numerous transcriptional co-activators and chromatin-remodelling proteins that have acetyl-binding regions, along with the RNA-synthesizing enzyme polymerase II (Pol II).

In other words, CBP (a protein enzyme) and enhancer RNAs need to be together to work. The implication is clear: far from being accidental by-products, eRNAs are functional. They are involved in making genes accessible to the transcription machinery and in regulating their expression. Transcription, long thought to be the engine, is just part of a much more complex factory.

A model is emerging in which transcription is itself an early step in enhancer activation. Pol II is recruited by transcription factors and maintains open chromatin. Once the enzyme begins to transcribe, the nascent eRNA it produces stimulates co-activator proteins such as CBP in the region in a sequence- and stability-independent manner. The activities of these proteins promote the recruitment of more transcription factors, Pol II and chromatin-remodelling proteins, enabling full enhancer activation. In addition, Pol II itself can serve as a vehicle for attracting chromatin-modifying enzymes that spread more molecular marks associated with chromatin activation across the transcribed region. In this manner, transcription of enhancers can generate a positive-feedback loop that stabilizes both enhancer activity and gene-expression profiles.

Overall, the current study fundamentally changes the discourse around eRNA functions, by demonstrating that these RNAs can have major, locus-specific roles in enhancer activity that do not require a particular RNA-sequence context or abundance. Furthermore, by providing strong evidence that CBP interacts with eRNAs as they are being transcribed, this study highlights the value of investigating nascent RNAs for understanding enhancer activity.

Speaking of 3D space, researchers at the Max DelbrĂĽck Center for Molecular Medicine (MDC) have been producing a 3D map of the genome, underscoring the complex dance of DNA, RNA, and proteins:

Cells face a daunting task. They have to neatly pack a several meter-long thread of genetic material into a nucleus that measures only five micrometers across. This origami creates spatial interactions between genes and their switches, which can affect human health and disease. Now, an international team of scientists has devised a powerful new technique that ‘maps’ this three-dimensional geography of the entire genome. Their paper is published in Nature.

The paper explains the Genome Architecture Mapping (GAM) technique they created and how it elucidates the interactions between genes and their enhancers.

GAM also reveals an abundance of three-way contacts across the genome, especially between regions that are highly transcribed or contain super-enhancers, providing a level of insight into genome architecture that, owing to the technical limitations of current technologies, has previously remained unattainable. Furthermore, GAM highlights a role for gene-expression-specific contacts in organizing the genome in mammalian nuclei.

Isn’t that a worthy function? Keeping the genome organized is not a role that ‘genetic junk’ is likely to succeed at.

Another clue to function in RNA comes from a finding announced by Science Daily, "Start codons in DNA may be more numerous than previously thought." When a messenger RNA (mRNA), transcribed from DNA, is translated into protein, a 'start codon' marks where protein synthesis begins, and it was long thought that the genetic code contained only seven of these. But nobody had ever checked, this article says. Scientists from the National Institute of Standards and Technology found, to their surprise, that there are "at least 47 possible start codons, each of which can instruct a cell to begin protein synthesis." Indeed, "It could be that all codons could be start codons." The possibilities this opens up for expanding the complexity of RNA transcripts can only be imagined at this point.

We’ll end with one more example of the revolution in RNA functions. Scientists at Indiana University and colleagues found an example of “Hybrid incompatibility caused by an epiallele.” The open-access study, published in PNAS, “demonstrates a case of epigenetic gene silencing rather than pseudogene creation by mutation” in the lab plant Arabidopsis. Here’s a case where the RNA tail seems to wag the DNA dog:

Multicopy transgenes frequently become methylated and silenced, particularly when inserted into the genome as inverted repeats that can give rise to double-stranded RNAs. Such double-stranded RNAs can be diced into small interfering RNAs (siRNAs) that guide the cytosine methylation of homologous DNA sequences, a process known as RNA-directed DNA methylation (RdDM)…. This interesting case study has shown that naturally occurring RdDM, involving a new paralog that inactivates the ancestral paralog in trans, can be a cause of hybrid incompatibility.
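As a toy picture of the mechanism the study describes, the sketch below lets a short siRNA "guide" methylation marks onto matching stretches of a DNA string. The sequences are invented, and real RdDM involves base-pairing and an enzymatic pathway that this deliberately ignores.

```python
# Toy RNA-directed DNA methylation: wherever the siRNA matches the DNA,
# mark the cytosines there as methylated ('m'). Sequences are invented.

def rddm(dna, sirna):
    """Return dna with 'C' replaced by 'm' at every siRNA-matching site."""
    guide = sirna.replace("U", "T")  # compare in the DNA alphabet
    out = list(dna)
    start = dna.find(guide)
    while start != -1:
        for i in range(start, start + len(guide)):
            if out[i] == "C":
                out[i] = "m"
        start = dna.find(guide, start + 1)
    return "".join(out)

print(rddm("AATCCGGATCCGGATT", "UCCGGA"))  # AATmmGGATmmGGATT
```

The silenced paralog behaves as if such marks switched it off, which is why the authors describe this as epigenetic silencing rather than mutation.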

Bypassing genetic mutations and natural selection, this “previously unrecognized epigenetic phenomenon” might help explain cases of apparently rapid speciation by a non-Darwinian process. We’ll leave that possibility for others to investigate.

In short, RNA has graduated from servant to master. The numerous RNA transcripts floating around in the nucleus, once thought to be genetic “noise,” may actually be the performance, like virtuosos in an orchestra bringing static notes written in DNA to life. This huge shift in thinking appears to be deeply problematic for neo-Darwinism. It sounds like a symphony of intelligent design.

See more at: https://www.evolutionnews.org/2017/03/no-mere-bike-messenger-rna-code-surpassing-dna-in-complexity/