Friday 12 May 2017

Scientism v. science yet again.

How Naturalism Rots Science from the Head Down
Denyse O'Leary

Post-truth was the Oxford Dictionaries’ word of the year for 2016. The term “post-fact” is also heard more often now. Oxford tells us that “post-fact” relates to or denotes “circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.”

Post-fact has certainly hit science. Pundits blame everyone but themselves for its growing presence. But a post-fact, post-truth world is implicit and inevitable in the metaphysical naturalist view (nature is all there is) that is now equated with science and often stands in for it.

Let’s start at the top, with cosmology. Some say there is a crisis in cosmology; others say there are merely challenges. Decades of accumulated evidence have not produced the universe that metaphysical naturalism expects and needs. The Big Bang has not given way to a theory with fewer theistic implications. There is a great deal of evidence for fine-tuning of this universe; worse, the evidence for alternatives is fanciful or merely ridiculous. Put charitably, it would not even be considered evidence outside of current science.

One response has simply been to develop ever more fanciful theories. Peter Woit, a Columbia University mathematician, is an atheist critic of fashionable but unsupported ideas like string theory (Not Even Wrong, 2007) and the multiverse that it supports. Recently, Woit dubbed 2016 the worst year ever for “fake physics” (as in fake news). As he told John Horgan recently at Scientific American, he is referring to “misleading, overhyped stories about fundamental physics promoting empty or unsuccessful theoretical ideas, with a clickbait headline.”

Fake physics (he links to a number of examples at his blog) presents cosmology essentially as an art form. It uses the trappings of science as mere decor (the universe is a computer simulation, the multiverse means that physics cannot predict anything…). Conflicts with reality are taken to call for a revolution in our understanding of physics rather than for emptying the wastebasket.

Woit blames the Templeton Foundation for funding this stuff. But Templeton caters, as it must, to an audience. Perhaps a more pressing issue is this: The need to defend the multiverse without evidence has led to a growing discomfort with traditional decision-making tools of science, for example, falsifiability and Occam’s razor. And metaphysical naturalism, not traditional religion, is sponsoring this war on reality.

Can science survive the idea that nature is all there is? The initial results are troubling. Where evidence can be ignored, theory needs only a tangential relationship to the methods and tools of science. Physicist Chad Orzel expressed disappointment with the 2014 Cosmos remake, saying, “I find the choice to prioritize wildly speculative but vaguely inspirational material like panspermia and the whole ‘future cosmic calendar’ stuff kind of disappointing. There’s so much that they haven’t talked about yet that’s based on good, solid evidence, but we’re getting soaring vagueness.” But what if a disquieting amount of the available evidence is unwanted?

The increasingly popular idea that consciousness is an illusion flows together naturally with the new cosmology. Contradictory theories do not seriously conflict because any resolution would just be another user illusion. Readers notice how strange the new science literature sounds but, to the extent that they accept metaphysical naturalism, they can base their objections only on personal discomfort.

What if a theory, such as intelligent design, challenges metaphysical naturalism? It will certainly stand out. And it will stand out because it is a threat to all other theories in the entire system. Merely contradictory or incoherent theories clashing against each other are not a threat in any similar way; there are just so many more of them waiting up the spout.

Could intelligent design theory offer insights? Yes, but they come at a cost. We must first acknowledge that metaphysical naturalism is death for science. Metaphysical naturalists are currently putting the science claims that are failing them beyond the reach of disconfirmation by evidence and casting doubt on our ability to understand evidence anyway.

ID is first and foremost a demand that evidence matter, underwritten by a conviction that reason-based thinking is not an illusion. That means, of course, accepting fine-tuning as a fact like any other, not to be explained away by equating vivid speculations about alternative universes with observable facts. Second, ID theorists insist that the information content of our universe and life forms is the missing factor in our attempt to understand our world. Understanding the relationship between information on the one hand and matter and energy on the other is an essential next discovery. That’s work, not elegant essays.


We will get there eventually. But perhaps not in this culture; perhaps in a later one. Science can throw so many resources into protecting metaphysical naturalism that it begins to decline. Periods of great discovery are often followed by centuries of doldrums, and such declines usually stem from philosophical ones. The prevalence of fake physics, for example, shows that we are in the midst of just such a philosophical decline. It’s a stark choice for our day.

On the tyranny of the consensus.

Stephen Meyer: Appeals to Evolution “Consensus” Undercut Scientific Methodology
David Klinghoffer | @d_klinghoffer

“Darwin’s public defenders,” as Stephen Meyer calls them – Nye, Dawkins, Krauss, & Co. – loudly contend that evolutionary theory has “no weaknesses,” is “undeniable,” enjoys the support of a scientific “consensus,” and therefore is questioned only by science “deniers.” In a presentation at the Heritage Foundation in Washington, D.C., ahead of last month’s March for Science, Dr. Meyer debunked these claims.

You can hear him now on a new ID the Future podcast episode. Download it here.


A significant point Meyer makes is that invoking the idea of a “consensus” on evolution itself undercuts scientific methodology. The latter entails spirited debate among scientists about competing hypotheses regarding how to interpret data. To shut the door on debate, as the Science Marchers would like to do, means shutting the door on science.

Thursday 11 May 2017

On name-calling in the name of 'Science'

Abusing the “Anti-Science” Label — Editors of Nature Agree with Wesley Smith!
David Klinghoffer | @d_klinghoffer

On a new episode of ID the Future, Wesley Smith discusses the “anti-science” label and identifies some trends in the academy and the media that truly are inimical to science. He notes the tendency to confuse science with ethics, and to use the idea of science itself as a weapon to silence debate.

Download the episode here, or listen to it here. Wesley spoke at the Heritage Foundation in Washington, DC, along with other Discovery Institute colleagues ahead of last month’s March for Science.

Meanwhile, this is sure refreshing. Nature, the world’s foremost science journal, urges readers to cool it with the “anti-science” slur. They do so in an editorial, “Beware the anti-science label,” that is uncompromising in its common sense:

Antimatter annihilates matter. Anti-science, it is said, destroys what matters. And fears are increasing that anti-science forces are on the march. Indeed, on last month’s March for Science, a ‘war on science’ was frequently invoked as a reason for researchers to mobilize. Signs held aloft warned of a conflict.

True anti-science policies — the early Soviet Union’s suppression of genetics research, for example, and its imprisonment of biologists while trying to revamp agriculture — can wreck lives and threaten progress. But it’s important not to cheapen the term by overusing it. And it’s wrong for researchers and others to smear all political decisions they disagree with as being anti-science.
Well, what do you know? Sure, they throw in the expected criticisms directed at “climate denial” (strange expression — who denies that there’s a climate? — but you know what they mean). Otherwise, they are singing our tune. Just as Wesley says, disagreements about policy or ethics should not be translated into “science” versus “anti-science.”

Science is only one of many factors and interests that a thoughtful politician needs to weigh when choosing a position on a complex topic. If science sometimes loses out to concerns about employment or economics, scientists should not immediately take it as a personal slight. Rather, it is a reason to look for common ground on which to discuss the concerns and work out how science can help: creating jobs in green energy, for instance, or revamping wasteful grant programmes.

Of course, corruption and conflicts of interest can frequently motivate political decisions as well, and researchers and others should not hesitate to highlight them. But name-calling and portraying the current political climate as a war between facts and ignorance simply sows division.
Yes! As a bonus, they note the obvious that is nevertheless habitually denied, that scientists don’t agree on everything, and that’s OK:

Science does not speak with a single voice. Sit at a hotel bar during any conference and you will hear impassioned debate over what the data have to say about a certain question. Equally credentialled researchers fall out on whether carbon dioxide levels in the atmosphere have passed a tipping point, or on the health risks of sugar.

Good for you, Nature editors. When establishment pillars like yourselves finally get fed up and speak out against the weaponizing of science rhetoric to political and ideological ends, that’s a welcome and very healthy sign.

Dancing on the razor's edge?

Recognizing Life Is Different from Natural Processes, Science Balances on the Edge of ID
Evolution News @DiscoveryCSC

Scientific materialists must live in a state of cognitive dissonance. They believe everything is “natural” (within nature), but they don’t hesitate to look for decidedly “un”-natural things about life. Consider this from New Scientist about how to detect alien life:

Now Lee Cronin, a chemist at the University of Glasgow, UK, argues that complexity could be a biosignature that doesn’t depend on any assumptions about the life forms that produce it. “Biology has one signature: the ability to produce complex things that could not arise in the natural environment,” Cronin says. [Emphasis added.]
We saw Lee Cronin last June arguing for a “radical rethink” of origin-of-life scenarios. Here, he’s thinking about generic life that could be found in space. Alien life might not be made of “amino acids, unequal proportions of mirror-image molecules, and unusual ratios of carbon isotopes, all of which are signatures of life here on Earth.” It could be totally different. Consequently, it could be missed by Earth-centric detection strategies.

Astrobiology

There is one thing that would distinguish life from non-life, Cronin reasons: its complex organization. Here, the article by Bob Holmes engages in a delicate balancing act, coming dangerously close to intelligent design:

Obviously, an aircraft or a mobile phone could not assemble spontaneously, so their existence points to a living — and even intelligent — being that built them. But simpler things like proteins, DNA molecules or steroid hormones are also highly unlikely to occur without being assembled by a living organism, Cronin says.
Now that is dancing on the edge! We presume Cronin and Holmes are being careful not to topple over into the ID camp, but their ideas are closer than the usual materialist/reductionist talk of spontaneous emergence that makes life out to be a natural byproduct of matter. There’s even a faint echo of Thomas Nagel’s appeal to common sense in Mind and Cosmos where he intuits a limit to what can credibly be called natural:

And the coming into existence of the genetic code — an arbitrary mapping of nucleotide sequences into amino acids, together with mechanisms that can read the code and carry out its instructions — seems particularly resistant to being revealed as probable given physical law alone.
Cronin proposes a method for measuring complexity that doesn’t depend on life as we know it. He counts the number of unique steps required to get a molecule. Some molecules require so few steps that they can be explained by natural causes. But for a molecule of sufficient complexity, at some point in the sequence of events in its formation, probability would demand a “life inference” if not an inference to intelligence:

Any structure requiring more than about 15 steps is so complex it must be biological in origin, he said this week at the Astrobiology Science Conference in Mesa, Arizona.
Let’s pause to consider what this means. For something to be “biological in origin,” it cannot have emerged by natural law alone. It would be “un”-natural enough to warrant the inference that life or intelligence caused it to come into being.
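
To make the step-counting idea concrete, here is a toy sketch in Python — our illustration, not Cronin’s published method — that treats a molecule as a string and asks for the fewest “joins” of already-built fragments needed to reach it. Repetitive structure assembles in few steps; an irregular target needs one step per new piece, and a cap like the 15 steps Cronin cites (assumed here as the parameter) becomes the dividing line:

```python
def assembly_steps(target, cap=15):
    """Fewest join operations needed to build `target` from single
    characters, where any fragment built earlier may be reused --
    a toy stand-in for counting a molecule's unique formation steps."""
    def buildable(pool, steps_left):
        if target in pool:
            return True
        if steps_left == 0:
            return False
        for a in list(pool):
            for b in list(pool):
                joined = a + b
                # only pursue fragments that still fit inside the target
                if joined in target and joined not in pool:
                    if buildable(pool | {joined}, steps_left - 1):
                        return True
        return False

    for steps in range(cap + 1):  # iterative deepening: shortest first
        if buildable(frozenset(target), steps):
            return steps
    return None  # not buildable within the cap

# Reusable substructure keeps the count low; irregularity drives it up.
print(assembly_steps("ABABABAB"))  # 3 joins: AB, then ABAB, then ABABABAB
print(assembly_steps("ABCDE"))     # 4 joins; nothing repeats, so no reuse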

One might argue that purely physical things can have unique signatures as well. For instance, planetary scientists find signatures of volcanism on the surfaces of Mercury, Io, and even Pluto and Ceres (ice volcanism). What’s the difference, then, between looking for a biosignature in one scientific context and a heat signature in another context? Clearly, it must be the degree of complexity. Heat is common everywhere just by normal thermodynamics. But some complex phenomena are never observed to emerge through natural law alone.

With volcanoes, laws of heat and buoyancy are sufficient to figure out how material makes its way up through a crust. If a volcano had to take a sequence of 15 unique steps, however, then we might be justified in looking into non-natural causes at work. That’s not likely, since well-known laws of physics can account for eruptions, and we witness volcanoes all the time. We never witness unguided chemical reactions going through 15 or more unique independent steps to arrive at a complex molecule, much less to produce a coding system with transcription, translation, and reproduction. That’s why the “complexity” Cronin tries to measure must equate to specified complexity.

SETI

The distinction between natural and intelligent causes becomes especially clear in SETI. In another piece in New Scientist, Geraint Lewis from the University of Sydney discusses not just bio-signatures but mind-signatures. Frustrated by the silence of traditional SETI, and realizing that the Fermi paradox (the “Where is everybody?” question posed by Enrico Fermi) has never been answered, he suggests a different search strategy closer to home: finding the remains of extinct civilizations in our own celestial backyard.

This apparent absence of evidence is known as the Fermi paradox. It has led to considerable head-scratching for more than half a century. Now, U.S. astronomer Jason Wright has a new twist on it, rephrasing Fermi’s question to: “Where was everybody?” In particular, one answer could be our own solar system. He wonders if “prior indigenous technological species” arose here, and what trace might they have left behind?
David Klinghoffer commented on Wright’s idea here last week. ID advocates should feel right at home with this strategy. It’s like archaeology. We know it’s possible in many cases to separate natural causes from intelligent causes when examining artifacts (see “Intelligent Design in Action: Archaeology”). It’s not even necessary to know anything about the builders to infer that an intelligent cause probably brought a structure into existence (see here for an example). Extending the same reasoning, we can expand the search space to other nearby worlds.

If they existed here, or on the other planets and moons, what signs should we look for and where? In the crushing environment of Venus, and the churning plate tectonics of Earth, buildings and monuments would be eroded and destroyed on such long timescales. But on slow-changing Mars, our moon, and possibly the frozen satellites of outer solar system planets, the tunnels and cities of ancient lost civilisations could survive buried under the soil and ice.
Lewis implies that the inference to intelligence is not only intuitive but robust. Tunnels, cities, and habitable structures are decidedly “un”-natural because, applying the same reasoning used by Cronin, too many unique steps would be required for their origin. Lewis is even willing to lower the bar for design detection:

Other signatures would be more durable still, with the slow decay of nuclear power sources apparent for billions of years, with distinct mixtures of elements and radioactivity.
We saw a similar type of reasoning used by experts in “nuclear forensics” a while back. Scientists can determine, through intuition supported by probability calculations, that certain things don’t “just happen” naturally.

Lewis admits that his thoughts about extinct civilizations are “pure speculation” at this point. But he implies that in principle one can distinguish natural causes from intelligent causes. That’s all intelligent design tries to do. He says, “When we finally start digging into the dirt of other worlds, we might uncover definitive signs that someone else has been there before.” This, too, dances right up to the perilous edge of ID.

Conclusion

ID is the science of determining “definitive signs” that “someone” (a mind) has been at work; a mind with the intelligence, intention, and ability to take natural materials and organize them into complex structures unreachable by unguided natural processes. For SETI, the inference to intelligent causation is intuitive and direct. For astrobiology, the inference is indirect, but logically similar: a biosignature points to a non-natural chain of events that had a goal and a purpose (life).


These scientists may not call it intelligent design, but ID is alive and well in their work. The challenge is to help them recognize it.

Tuesday 9 May 2017

Contra the consensus III

Pro the consensus.

Another day, another zombie apocalypse.

Reviewing Zombie Science, Sean McDowell Asks the Toughest Question About Evolutionary Icons
David Klinghoffer | @d_klinghoffer

The science establishment that silences evolution skeptics in academia might have a shred of a plausible case to make in its defense…if the science itself were on their side. But of course it’s not, as Jonathan Wells explains in his new book Zombie Science: More Icons of Evolution.
Sean McDowell reviews the book at The Stream, and he hits the main points admirably. Such as: If the icons were simply mistakes, innocent blunders, the equivalent of typos, why do the science textbooks retain them year after year? That is really the toughest question, one that folks like Jerry Coyne won’t touch.

If these icons were innocent mistakes, then biologists would have eagerly corrected them, right? Since they persist, says Wells, there must be something else besides the evidence that keeps them “alive.”

For instance, Darwin considered embryological development the best evidence for his theory. He cited drawings from the German biologist Ernst Haeckel, which allegedly reveal how the development of various vertebrate animals mirrors the larger evolutionary story of common descent. Yet it has been known since at least 1997 that Haeckel’s drawings were cherry-picked, inaccurate and fake. In fact, Wells concludes, “The real issue is that Haeckel’s drawings omitted half of the evidence — the half that doesn’t fit Darwin’s claim that embryos are most similar in their early stages” (58).

Nonetheless, Haeckel’s drawings continue to appear in textbooks published after 2000, such as Donald Prothero’s 2013 textbook Bringing Fossils to Life. And the 2016 textbook Biology, by Mader and Windelspecht, uses re-drawn versions of Haeckel’s embryos that make the same (mistaken) point.

Publishers could possibly be forgiven if this was the only mistake. But as Wells indicates, similar misrepresentations continue for other “icons” including the Miller-Urey experiment, Archaeopteryx, peppered moths, Darwin’s finches and more. Like zombies, these “evidences” simply won’t die.
No, there’s more going on than mere publishing blunders. Darwin advocates are trying to persuade their audience, including impressionable young people, and the evidence is shaped as needed to suit the purpose.

But the bottom is really out of the boat. McDowell notes, for one thing, the challenge of epigenetics:

One of the most interesting sections of the book was the discussion of epigenetics. Broadly speaking, epigenetics refers to the various factors involved in development, including genetics.

In the 20th century, the dominant view of biology was that evolution proceeded genetically from DNA to RNA to proteins to us. As a result, evolution could advance through genetic mutations that accumulate over time.

But according to Dr. Wells, there are significant carriers of information beyond DNA sequences. Biological membranes are one example. In other words, the claim that the genome carries all the information necessary to build an organism is false. As a result, mutations or changes in DNA alone are not enough to build new function and form.

Given the premise of neo-Darwinism, that evolution builds novelties precisely by mutation and selection, that would seem to seal the case. Wouldn’t it be interesting to see a scientist who’s a Darwin apologist honestly confront the argument in Dr. Wells’s book? That would be just fabulous. Don’t hold your breath.

Biomimetics v. Darwin.

Leading Biomimetic Scientist: Don’t Let Materialism Trump Evidence
Jonathan Witt

Here’s another ID-goes-international story, hard on the heels of the Discovery Institute-Mackenzie launch in Brazil last week: A groundbreaking South Korean scientist, Dr. Seung-Yop Lee, has come out against the practice of ruling intelligent design hypotheses out of bounds before considering the evidence.

“As a biomimetic researcher, I wonder how the complex photonic nanostructures of insects first arose,” he writes. “Biological designs are sparking a gold rush of innovation for engineers and scientists, but by and large, only materialistic explanations for these biological structures are allowed in the biomimetic field.”

Lee is a professor in the Department of Mechanical and Biomedical Engineering at Sogang University in Seoul and a leading figure in the field of biomimetics.

Lee’s recent reading of Jonathan Wells’s new book, Zombie Science: More Icons of Evolution, precipitated the comments. “In his excellent new book, Zombie Science, Jonathan Wells urges another approach to scientific investigation,” Lee wrote. “Don’t let materialistic philosophy trump the evidence, Wells says. Instead, follow the evidence wherever it leads.”

An article in the journal Nature reports on one of Dr. Lee’s biomimetic innovations, “a film that changes color according to the ambient humidity.” According to the article, the invention was “inspired by the natural design of the Hercules beetle” and paves the way to the development of a sensor that “would not need electricity and could be used in small medical or agricultural devices.”

Professor Lee’s success at making design breakthroughs by looking for inspiration from engineering marvels in the biological realm appears to have left him impatient with dogmatic materialism in origins biology, and sympathetic to the argument Wells makes in his new book. “The title, Zombie Science, is quirky and colorful,” Lee said, “but Wells uses it to highlight a real problem: Vivid ‘proofs’ of evolution still lumber along even after contrary evidence has killed them off and mainstream biologists have renounced them.”

Zombie Science is a sequel to Dr. Wells’s 2001 book, Icons of Evolution. “Wells brings readers up to date on the original ten icons, and debunks six more,” Lee comments in his endorsement of the book. “Wells argues that these debunked icons persist in textbooks and elsewhere only because they support a dominant evolutionary paradigm and a materialistic dogma. Zombie Science is a timely call for reform.”


Evolution News has reported here, here, here, here, and here on just a few of the many veins being mined in the field of biomimetics. Find many more articles on the subject by plugging “biomimetics” into the website’s search field.

Yet more pre-Darwinian tech v. Darwin.

Molecular Machines Reach Perfection
Evolution News @DiscoveryCSC

ATP synthase is in the news again, and it’s even better than before. Before hearing the news, it might be worthwhile to review our animation of this tiny rotary engine that powers all life, from bacteria to humans. You’re running on quadrillions of these little motors right now. The news is that they are perfect.

One doesn’t often see the word “perfect” in a science paper, but four Japanese researchers are unabashed, using the word 13 times in their paper in the Proceedings of the National Academy of Sciences, including the title: “Perfect chemomechanical coupling of F0F1-ATP synthase.”

Peter D. Mitchell, a Nobel awardee in 1978, proposed that F0F1-ATP synthase converts energy between electrochemical potential of H+ across biological membrane…, which is established by respiratory chain complexes, and chemical potential of adenine nucleotide [ΔG(ATP)]. However, the efficiency of the energy conversion has been a matter of debate for over 50 years. In this study, with a highly reproducible analytical system using F0F1-ATP synthase from thermophilic Bacillus, apparently perfect energy conversion was observed. Mitchell’s prediction thus has quantitative evidence. [Emphasis added.]
You can’t get better than perfect. This means that every proton (H+) coming into the machine, driving its rotation, yields 100 percent conversion of its energy into production of ATP. Can you think of any man-made motor that even approaches this kind of efficiency? Hardly. At our macro level of engineering, artificial motors waste energy through heat, friction, and escape of fuel to the environment. The second law of thermodynamics forbids perfection. Somehow, at the scale of nanometers, ATP synthase engines get maximum bang for their proton buck — with no loss at all.

The debate about ATP synthase energy efficiency centered on the numerical mismatch between the two halves of the machine. The F0 part, where protons enter, has 10 units called c subunits arranged like orange peels that rotate around a central axis. The F1 part, by contrast, has 3 units in pairs, called β subunits, where ATP synthesis takes place (the two halves are linked by a central stalk called the γ-subunit that works like a camshaft). This 10/3 non-integer pairing between F0 and F1 was unexpected, leading biophysicists to assume there must be some slippage in the camshaft during every rotation. Slippage would waste some of the proton motive force (pmf), reducing the efficiency.
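
To see numerically what is at stake, here is a back-of-the-envelope sketch (the proton-motive force value is a generic textbook figure we are assuming for illustration; it is not taken from the paper). Perfect coupling means the measured H+/ATP ratio equals the structural ratio c/β exactly, with none of the protons’ energy lost to slippage:

```python
# "Perfect coupling" in numbers (illustrative values, not the paper's data).
FARADAY = 96485.0   # coulombs per mole of charge
pmf = 0.18          # proton-motive force, ~180 mV (assumed textbook value)

c_subunits = 10     # protons carried per full rotation of the c-ring (F0)
beta_subunits = 3   # ATP molecules synthesized per full rotation (F1)

h_per_atp = c_subunits / beta_subunits             # 10/3, about 3.33
energy_per_atp = h_per_atp * FARADAY * pmf / 1000  # kJ per mol of ATP

print(f"H+/ATP from structure (c/beta): {h_per_atp:.2f}")
print(f"Proton energy delivered per mol ATP: {energy_per_atp:.0f} kJ/mol")
# Perfect coupling: this input matches the free-energy cost of ATP under
# the same conditions. Slippage would show up as H+/ATP above c/beta.
```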

One way to find the answer is to compare the input to the output as accurately as possible. These scientists rigged a proteoliposome from a thermophilic (heat-loving) bacterium in a new way that allowed them to reliably measure the incoming pmf as well as the outgoing production of ATP.

In this report, we used this system to determine the actual H+/ATP ratio. The results show the perfect agreement of H+/ATP ratio to c/β, indicating tight coupling efficiency of proton translocation in Fo and ATP synthesis/hydrolysis in F1. In addition, kinetic and energetic equivalence of transmembrane difference of pH (ΔpH) and electric potential (Δψ) was supported with unprecedented certainty over a wide range of pmf values.
The team carefully eliminated all contamination, ran the tests for tens of hours, and reduced error to achieve unprecedented levels of accuracy. “A long-anticipated, but unproved, conception that F0F1 achieves a perfect coupling between transmembrane H+ translocation and ATP synthesis/hydrolysis has direct experimental evidence now,” they conclude.

How is this even possible? Isn’t there slippage? Isn’t there a twisting force of torque as the camshaft presses against the β subunits in F1? And what about other versions of ATP synthase in other organisms that have 8, 12, or 14 c subunits in F0? They address these questions in the final paragraph of the Discussion:

In a thermodynamic view, the perfect coupling means perfect energy conversions between chemiosmotic (H+ translocation), mechanical (rotary motion), and chemical energy (ATP synthesis/hydrolysis). A near-perfect energy conversion from ATP hydrolysis to rotary motion of γ-subunit in F1 was recently demonstrated in a thermodynamically defined manner, and this study predicts that other conversions should also be highly efficient. In a mechanistic view, the perfect coupling means that there is no slippage within and between F0 motor and F1 motor. Atomic structures of F1 are convincing that rotary motion of the γ-subunit could not occur without conformation change of the catalytic subunits. Structural basis for rotation of F0 motor without slippage has been suggested recently by atomic structures of whole F0F1 revealed by cryoelectron microscopy. The connection of the two motors should also be strong enough to endure the twisting force of torque. Crystal structures of F1·c-ring complexes indicate that the connection appears to be held by a small number of interactions between the bottom portion of F1’s rotor and polar loops in the c ring. Interestingly, this connection must be versatile, because the chimera TF0F1 with replaced F0 from Propionegenium modestum that has 11 c subunits shows good coupled activity.
This is a remarkable thing. Perfect — yet versatile! You can substitute a different c-ring into F0 and still get “good coupled activity.” Try that with man-made engines!

Other Perfect Scores

Kinesin, the walking machine (see our animation), is another “perfect 10” performer. Like ATP synthase, it converts chemical energy into mechanical energy. It even has what scientists call a “power stroke” as it walks. Tomonari Sumi from Okayama University in Japan compared the machine’s walking efficiency to its ATP consumption. Publishing in Scientific Reports, he found that “the ratio of the number of ATP hydrolysis to the number of steps advanced suggests a tight coupling between the two.” Tight coupling; we heard that in the previous story. Although he doesn’t use the word perfect, he speaks admiringly of the “extraordinary motor properties” of kinesin. It appears that the Japanese are less inhibited about using the d-word, design. Sumi’s title is, “Design principles governing chemomechanical coupling of kinesin.”

Cohesin and condensin are proteins that help keep DNA organized. An interesting article written like a mystery story in Nature News shows how scientists are trying to figure out if they work like motors. Writer Elie Dolgin calls it “DNA’s secret weapon against knots and tangles.” Something is seen extruding loops in DNA, working to “keep local regions of DNA together, disentangling them from other parts of the genome and even giving shape and structure to the chromosomes.” But whatever it is, it has to be beyond belief if MIT biophysicist Leonid Mirny’s model is correct:

For one thing, the identity of the molecular machine that forms the loops remains a mystery. If the leading protein candidate acted like a motor, as Mirny proposes, it would guzzle energy faster than it has ever been seen to do. “As a physicist friend of mine tells me, ‘This is kind of the Higgs boson of your field’,” says Mirny; it explains one of the deepest mysteries of genome biology, but could take years to prove.
The race is on to discover what kind of motor is consuming ATP to push and pull DNA. Loop extrusion not only prevents knots and tangles, it regulates gene expression by keeping parts of genes in proximity. We expect this mystery will have a “perfect” ending.

Lastly, that familiar icon the bacterial flagellum has a new trick up its sleeve. How does the driveshaft know when to stop growing? A paper in Science shows that the “most efficient machine in the universe,” as Howard Berg calls it, has a perfect solution: the rod grows until it spans the periplasm and touches the outer membrane. As we marvel at the engineering, let’s give evolution the credit, shall we?

The bacterial flagellum exemplifies a system where even small deviations from the highly regulated flagellar assembly process can abolish motility and cause negative physiological outcomes. Consequently, bacteria have evolved elegant and robust regulatory mechanisms to ensure that flagellar morphogenesis follows a defined path, with each component self-assembling to predetermined dimensions. The flagellar rod acts as a driveshaft to transmit torque from the cytoplasmic rotor to the external filament. The rod self-assembles to a defined length of ~25 nanometers. Here, we provide evidence that rod length is limited by the width of the periplasmic space between the inner and outer membranes. The length of Braun’s lipoprotein determines periplasmic width by tethering the outer membrane to the peptidoglycan layer.
Science Daily adds, “To function properly and propel the bacterium, the flagellum requires all of its components to fit together to exacting measurements.” The growing “driveshaft” somehow feels the outer layer and knows to stop growing. “The rod needs to touch the inside of the outer membrane,” one of the authors says. “So, if the outer membrane is farther away, the rod has to grow there to meet it.” The versatile growth process yields a perfect fit.


If you can think of any machine in your experience that is perfect yet flexible, it probably did not come about through blind, aimless natural processes. Let’s stop allowing Darwinians to get away, unchallenged, with saying they “have evolved” to perfection.

Monday 8 May 2017

On the foundation of reality.

An extrapolation revisited. III

The Nylonase Story: The Information Enigma
Ann Gauger


Editor’s note: Nylon is a modern synthetic product used in the manufacture, most familiarly, of ladies’ stockings, but also of a range of other goods, from rope to parachutes to auto tires. Nylonase is a popular evolutionary icon, brandished by theistic evolutionist Dennis Venema among others. In a series of three posts, of which this is the third, Discovery Institute biologist Ann Gauger takes a closer look. Look here for the first and second posts.

Returning to the story of the nylonase gene and the problem of where new information comes from, I’d like to make the point that there is a reason that molecular geneticist and evolutionary biologist Susumu Ohno made his hypothesis about a frame-shift having produced nylonase. Ohno is famous for his hypothesis that gene duplication and recruitment are the chief means by which “new” proteins are made — he wrote a famous book about it.

But he also knew that copying and tinkering weren’t enough, that there had to be a way to generate genuine de novo information, brand new coding sequence for genuinely new proteins, in order to account for all the diversity of information that must have been necessary as life became more complex. New proteins had to come from somewhere.

Ohno had an idea. He thought coding sequences made up of oligomeric repeats might allow there to be several alternate ways to read the same sequence. For an explanation of alternate reading frames, see my earlier post, The Nylonase Story: How Unusual Is That?

As a potential example, Ohno proposed nylB, the gene for nylonase. This gene has certain characteristics that make it plausible that a frameshift could have occurred, characteristics I described in that second post in this series, such as nylB’s sequence being GC-rich and deficient in TAs. These two characteristics reduce the chances of having stop codons, in any frame.

Ohno thus proposed that nylonase arose after a frameshift mutation in a perhaps nonfunctional, prior-coding sequence, resulting in an entirely new coding sequence with nylonase activity. The only reason Ohno could make this proposal was because nylB, the gene that codes for nylonase, has at least two potential open reading frames in the forward direction — the hypothetical “original” one proposed by Ohno from before any hypothetical T insertion took place, and the actual one that codes for nylonase now.

Ohno published his paper in 1984. In 1992, Yomo et al. noticed that one frame in the antisense direction of nylonase has no stop codons either. It also lacks a start codon, though, so Yomo et al. called it a non-stop frame (NSF) instead of an open reading frame (ORF). The probability of finding a DNA sequence with an ORF on the sense strand and a full NSF on the antisense strand is small. But surprisingly, not only does nylB have an NSF on the antisense strand, nylB has another fully overlapping NSF in the forward direction. That’s two NSFs plus the actual ORF for nylonase. (I’m not counting the hypothetical frame-shifted “original” ORF, since that frame actually has several intervening stops; see “The Nylonase Story: When Facts and Imagination Collide.”) That means nylB has no stop codons in three out of six frames.

The chances of avoiding a stop codon in three out of six frames are very low. Our simulation (described in the previous post) showed that the probability is very small indeed: the probability for an ORF 900 nucleotides long to have two NSFs at 70 percent GC is 9 out of 28,603, or 0.0003. (See there for details.) If these figures are recast to include the total number of random trials required to get an ORF of the proper length and GC content in the first place, and then with two NSFs, then the probability would be nine out of ten million trials, or 0.0000009. No organisms have ten million genes (we only have about twenty thousand), and Flavobacterium certainly doesn’t. But it’s not outside the realm of possibility that such sequences should exist by pure chance somewhere. After all, nylB does. But take the following into consideration.
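
Readers who want to probe numbers like these can run a minimal Monte Carlo sketch along the following lines (our Python re-creation of the kind of simulation described, with assumed parameters; the choice of which two extra frames to test follows the nylB description above). It draws 900-nucleotide sequences at 70 percent GC, keeps those that are stop-free in the coding frame, and checks whether a second forward frame and an antisense frame are also stop-free:

```python
import random

STOPS = {"TAA", "TAG", "TGA"}

def random_seq(n=900, gc=0.70):
    """Random sequence at the stated GC content."""
    return "".join(random.choices(
        "GCAT", weights=[gc / 2, gc / 2, (1 - gc) / 2, (1 - gc) / 2], k=n))

def revcomp(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def stop_free(seq, frame=0):
    return all(seq[i:i + 3] not in STOPS
               for i in range(frame, len(seq) - 2, 3))

trials, orfs, orfs_with_two_nsfs = 100_000, 0, 0
for _ in range(trials):
    s = random_seq()
    if stop_free(s, 0):                  # stop-free coding frame: an "ORF"
        orfs += 1
        # a second open forward frame plus an open antisense frame
        if stop_free(s, 1) and stop_free(revcomp(s), 0):
            orfs_with_two_nsfs += 1

print(f"ORFs: {orfs} of {trials} random sequences")
print(f"ORFs that also carry two NSFs: {orfs_with_two_nsfs}")
# Expect roughly 300 ORFs here, and usually zero that keep two extra
# frames open; approaching the 9-in-28,603 figure takes millions of
# trials, which is the point about rarity.
```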

In addition, beyond the first appearance of such a sequence, there would also need to be some way to prevent random mutation from introducing any stop codons over evolutionary time, in any of the three open frames. Purifying selection would normally be invoked in such a case. Organisms that develop harmful mutations in genes that encode functional gene products — things that are important for the organism’s survival — are less successful at reproducing, and so organisms carrying harmful mutations tend to disappear from the population (they are sickly or dead). However, purifying selection by definition has no effect on non-functional sequences. The fact that stops are prevented from accumulating in nylB NSFs implies that all three frames are functional. No function has been reported for the NSFs, however. They have no ATGs in the vicinity and so may be non-coding (though it must be acknowledged there are alternate start codons in the vicinity). In addition, it has been reported that the pOAD2 plasmid on which nylB is located is non-essential. It can be cleared from its host with no effect, except the loss of the ability to degrade nylon.

One possibility is that nylB has a secondary DNA or RNA-based function that requires its sequence to be nearly completely conserved. It would have to be a very specific sequence requirement to prevent the accumulation of stop codons in three frames, though. We get a hint that the cause is not sequence specificity, because the nylB and nylB′ genes of Flavobacterium differ by 47 amino acids, and the nylB gene of Pseudomonas has only about 35 percent identity according to reports, yet all three lack stops in the anti-sense frames in addition to their coding sequence (based on available sequence information).

Yomo et al., who first reported the anti-sense NSF in nylB, were amazed and puzzled by the existence of anti-sense NSFs in nylB genes of multiple species.

The probability of the presence of these NSFs on the antisense strand of a gene is very small (0.0001-0.0018) [we observed .0001]. In addition, another gene for nylon oligomer degradation [Pseudomonas nylB] was found to have a NSF on its antisense strand, and this gene is phylogenetically independent of the [Flavobacterium] nylB genes. Therefore, the presence of these NSFs is very rare and improbable. Even if the common ancestral gene of the nylB family was originally endowed with an NSF on its antisense strand, the probability of this original NSF persisting in one of its descendants of today is only 0.007. Unless an unknown force was maintaining the NSF, it would have quickly disappeared by random emergences of chain terminators. Therefore, the presence of such rare NSFs on all three antisense strands of the [three member] nylB gene family suggests that there is some special mechanism for protecting these NSFs from mutations that generate the stop codons. Such a mechanism may enable NSFs to evolve into new functional genes and hence seems to be a basic mechanism for the birth of new enzymes. [Emphasis added.]
Later on, they continue:

… the lifetime of a nonessential NSF is very short, and it is impossible for such a NSF to persist for a long period of evolution. Therefore, we strongly suggest that the existence of the NSFs on all the three antisense strands of the nylB gene family points to an unknown force that is preserving these nonessential NSFs; otherwise, they would have quickly disappeared by random emergences of chain terminators.
Ohno himself was aware of this work and in some sense supported it. He was the one who communicated it to the Proceedings of the National Academy of Sciences. What he made of it I don’t know.

The highlighted proposal in the above quotes is on the face of it antithetical to the materialist worldview. What kind of force can preserve apparently non-functional NSFs? Certainly a mechanism to preserve non-functional sequences so that they might some day evolve into functional genes is more suggestive of design than evolution. It would take a fair amount of foresight on the part of evolution, don’t you think, to develop a mechanism to prevent stop codons from interrupting non-functional NSFs, all for some possible future benefit?

All this speaks to the origin and preservation of potential information, information such as Ohno was looking for, but by a means different from the one he foresaw. We have returned full circle. Explaining nylonase does not require a frameshift, as I have shown in the first post — nonetheless nylonase’s gene is an unusual sequence. Getting overlapping code in three frames might happen in very rare circumstances, but keeping the NSFs open in the apparent absence of selection to maintain them would seem to be highly, highly unlikely. So we have extreme rarity piled upon rarity. Bear in mind also that whatever the peculiar characteristics of the nylB gene sequence, it must also encode a functional, stably folded enzyme, which is another constraint.

Why am I going on and on about nylonase? It has to do with the problem of the origin of novelty. Are frameshifts a possible source of new functional information? Might a sequence with alternate frames stay open by chance or be created by chance over evolutionary time? It’s a highly improbable event, but not impossible, I suppose. Might the alternate frames someday be material for frameshifted novel proteins, provided they stay open? They might theoretically be a reservoir for future proteins, but given what we know about the rarity of these kinds of sequences and the rarity of protein folds in sequence space, the possibility of generating an entire new protein fold from a frameshift is extremely, extremely, extremely low, and would depend on a highly unusual starting sequence tailored in advance for a particular functional specificity. In other words, it would need to be designed.

In addition, even should such a sequence exist, it would not long persist in the face of neutral evolution. According to neo-Darwinism there is no magic molecular bouncer who throws out inactivating mutations before they can do their damage to a potential gene. Or to use another metaphor, evolution does not bank potentially useful sequences for future use. For it to do so would require foresight, an idea antithetical to evolutionary theory. Thus, any putative frame-shifted sequences that have been shown to have a functional role are better explained by design than by chance and necessity.

Should anyone disagree with my argument above, I’d like to point out that for a long time it was the standard belief among evolutionary biologists (and geneticists) that random sequence could not generate a functional protein. Frameshifted proteins are almost universally disrupted by stop codons (unless they happen to have an NSF or two like nylB); with uniform base usage, 3 of the 64 codons are stops, so a random reading frame runs only about 20 codons on average before hitting one. And even if they aren’t interrupted, the new sequence will be unlikely to fold into a stable protein, given the rarity of functional folds in sequence space (see the first post).

As an aside, as one of the curious facts of history, the disruptive properties of frameshift mutations were used to discover the triplet nature of the genetic code. Says Sir F.H.C. Crick in a lecture on the genetic code he gave in 1964:

This [the ability to combine mutations] has enabled us to tackle the question: is it really a group of three that makes up a codon? The basic idea is the following. We are able to pick up mutants which we believe (from the way they behave in various contexts) are not merely the change of one base into another, but are either the addition or a deletion of a base or bases. What happens when you have a genetic message and you put in an extra base? The reading starts from the beginning until it comes to that point and from there onward the whole of the message is read incorrectly, because it is being read out of phase. In fact we find that these [frameshift] mutants are completely inactive — this is one of our bits of evidence that they are what we say they are. You can pick up a number of such mutants and can put them together, by genetic methods, into the same gene. For example you can put together two of them. Such a gene would be read correctly until it reached the first addition, and then it would be out of phase. When the reading came to the second addition it would [be] read out of phase again, and so the whole of the rest of the message would be read incorrectly. Now it so happens that the left-hand end of this gene is not terribly important for its function. We can actually delete it and the gene will work after a fashion. In this region we have constructed, by genetic methods, a triple mutant, using three mutants all of the same type, and we have found that the gene will nevertheless function fairly normally.

This result is really very striking. Each of the three different faults, used singly, will knock out the gene. You can put them together in pairs in any combination you like, but then the gene is still quite inactive. Put all three in the same gene and the function comes back. We have been able to do this with a number of distinct combinations of three mutants (Crick et al., 1961).
Crick and others found that when three single base frameshift mutations of a particular gene, each completely disruptive on its own, were combined into the same gene, the three insertions together restored the frame enough for the protein to function again! Hence the code must be based on threes.
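
Crick’s logic is easy to replay in a few lines of Python (a toy illustration of the reasoning, not his actual mutants or sequences): read a repeating message three letters at a time, and watch a single inserted base scramble everything downstream, while three insertions restore the reading frame beyond the last one:

```python
def codons(msg):
    """Read a message three letters at a time, dropping any partial codon."""
    return [msg[i:i + 3] for i in range(0, len(msg) - len(msg) % 3, 3)]

def insert(msg, pos, base="G"):
    return msg[:pos] + base + msg[pos:]

original = "CAT" * 8                                # CAT CAT CAT ...
one = insert(original, 3)                           # one extra base
three = insert(insert(insert(original, 3), 8), 13)  # three extra bases

print("original:     ", codons(original))  # all CAT
print("one insert:   ", codons(one))       # garbled from the insertion on
print("three inserts:", codons(three))     # garbled only between the first
                                           # and last insertion; CAT resumes
```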

The sheer improbability of getting a functional enzyme from frameshifted random sequence has been the accepted view for a long time. It is only recently, in the era of big genomic data, that it has begun to be accepted that new proteins do occasionally arise by frame-shift mutation. The reason? It’s because we find examples in the genome that appear to be products of such events, based on sequence comparisons.

The proteins apparently affected by such frameshifts in the genome are often transcription factors or membrane proteins involved in gene regulation. The apparent frameshift often affects alternative splicing and changes the coding sequence over an exon or so; alternatively, the frameshift affects the end of the protein, resulting in truncation. The fact that such a mutation is located near the protein’s end reduces the amount of disruption to the protein. Many such mutations have been documented to cause disease, however. For a demonstration, just use Google Scholar to search for “frameshift.”

At this point, the chief question that should be in everyone’s mind is, “Can evolution by neo-Darwinian means produce new functional information from frame-shifted sequence? Or are other explanations more likely?”

It boils down to this. Do we say that frameshifted functional proteins are easy to generate, because after all, they exist? Or do we acknowledge that such proteins are not easy to generate and so may be evidence for design?

To reiterate, it used to be standard knowledge that frameshift mutations were always bad. Disruptive. So, for example:

More radical mutational events, such as insertions and deletions that change the reading frame — frameshift mutations — are generally considered to be detrimental (e.g. by causing nonfunctional transcripts and/or proteins, through premature stop codons) and of little evolutionary importance, because they seriously alter the sequence and structure of the protein.
But now it has become popular to offer frameshifts as a quick way to get novelty. I am pretty sure it all began with Ohno, who said:

It has recently occurred to me that the gene started from oligomeric repeats at its certain stage of degeneracy (base sequence diversification) [nylB] can specify a truly unique protein from its alternative open reading frame.
Now the meme has spread. From the Abstract of a paper documenting the “Frequent appearance of novel protein-coding sequences by frameshift translation,” we hear that “Major novelties can potentially be introduced by frameshift mutations and this idea can explain the creation of novel proteins.” And how do they defend the possibility of a functional frameshift? “Some cases of recent evolution of new genes via frameshift have been reported. For example, in bacteria the sudden birth of an enzyme that degrades manmade nylon oligomers was explained by a frameshift translation of a preexisting coding sequence.”

Sigh. The record needs to be corrected. (See my first post, “The Nylonase Story: Where Fact and Imagination Collide.”)

Let us close by considering the nature of the argument being made concerning proposed frameshifts. The underlying fact is that there are sequence similarities between two stretches of DNA, where one part appears to be frameshifted with respect to the other.

Notice that the argument used to explain the appearance of novel genes by frameshift uses a form of inference known as abduction, where one reasons from present effects to past causes.

The surprising fact A is observed.
If B were true, then A would be a matter of course.
Hence, there is reason to suspect that B is true. (1)
In other words:

The surprising fact of novel genes apparently arising by frameshift is observed.
If it is easy to get new functions from random sequence, then it is a matter of course that frameshifts can produce functional proteins.
Hence it is easy to get new functional proteins from random sequences.
Abductive arguments are very weak. The problem is that there can be multiple competing causes that explain the observed effects. The only way to strengthen the argument is to rule out all other competing causes. And design is a particularly strong competing hypothesis. We know design is a cause capable of producing the effect in question, namely the generation of new functional proteins by the addition of frame-shifted code. In fact, given what we know about the rarity of functional proteins in sequence space, as demonstrated experimentally here, here, and here, and theoretically here, design is a better explanation than the neo-Darwinian one.

Until someone demonstrates experimentally, in real time, that a frameshift mutation can generate a new functional protein (not just a loss of function) by undirected processes, the inference that it is easy to do so is unjustified. And nylonase is not that demonstration. (2)

References:

(1) Stephen C. Meyer, Of Clues and Causes: A Methodological Interpretation of Origin of Life Studies, PhD dissertation (Cambridge: Cambridge University, 1990); Charles S. Peirce, “Abduction and Induction,” in The Philosophy of Peirce, ed. J. Buchler (London: Routledge, 1956), 150–154; Charles S. Peirce, Collected Papers, ed. Charles Hartshorne and P. Weiss, 6 vols. (Cambridge, MA: Harvard University Press, 1931–1935).


(2) In a future post, I will discuss experiments that attempt to demonstrate that random sequence can perform simple functions.