Saturday, 6 January 2018

Playing God? III

Lady math gives Darwinism yet another rebuff?

Peer-Reviewed Science: A “Mathematical Proof of Darwinian Evolution” Is Falsified

Due to the tradition of professional scientific writing, major developments in scientific literature often arrive muffled in language so bland or technical as to be totally missed by a general reader. This, along with the media’s habit of covering up for evolution, is how large cracks in the foundation of Darwinism spread unnoticed by the public, which goes on assuming that the science is all settled and will ever remain so.

A case in point is a recent article in the Journal of Mathematical Biology, a significant peer-reviewed publication from the influential publisher Springer. The title of the article announces, “The fundamental theorem of natural selection with mutations.”

Including a verb would, presumably, be too much of a concession to populist sensationalism. Yet the conclusion, if not sensational, is certainly noteworthy.

Generations of students of biology and evolution have learned of the pioneering work of Ronald A. Fisher (1890-1962). A founder of modern statistics and population genetics, he published his famous fundamental theorem of natural selection in 1930, laying one of the cornerstones of neo-Darwinism by linking Mendelian genetics with natural selection. Wikipedia summarizes, “[T]his contributed to the revival of Darwinism in the early 20th century revision of the theory of evolution known as the modern synthesis.”

Fisher’s theorem, offered as what amounts to a “mathematical proof that Darwinian evolution is inevitable,” now stands as falsified.

His idea is relatively easy to state. It goes:

The rate of increase in fitness of any organism at any time is equal to its genetic variance in fitness at that time.

His proof of this was not a standard mathematical one; fitness is not rigorously defined, and his argument is more intuitive than anything else. The theorem addresses only the effects of natural selection. Fisher did not directly address any other effect (mutation, genetic drift, environmental change, etc.) as he considered them to be insignificant. Later mathematicians took issue with Fisher’s lack of rigor, some at considerable length. But the omission of the effects of mutation got the most attention.
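In modern textbook notation the theorem is usually paraphrased along these lines (a standard rendering, not Fisher’s own 1930 formulation):

\frac{d\bar{m}}{dt} = \mathrm{Var}_A(m)

where \bar{m} is the population’s mean (Malthusian) fitness and \mathrm{Var}_A(m) is the additive genetic variance in fitness. Since a variance can never be negative, mean fitness on this reading can only hold steady or rise, which is exactly the claim of “universal and continuous fitness increase” that the new paper disputes once new mutations enter the picture.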

Now along come mathematician William F. Basener and geneticist John C. Sanford who propose an expansion of the fundamental theorem to include mutations. Basener is a professor at the Rochester Institute of Technology and a visiting scholar at the University of Virginia’s Data Science Institute. Sanford is a plant geneticist who was an associate professor at Cornell University for many years. He is an editor of the volume Biological Information: New Perspectives (World Scientific, 2013).  The Journal of Mathematical Biology is the official publication of the European Society for Mathematical and Theoretical Biology.

Basener and Sanford expand the Fisher model to allow both beneficial and deleterious mutations, following and extending earlier work. They use zero mutation levels to test their model’s agreement with Fisher’s. They establish that there is an equilibrium fitness level where selection balances the mutational effects. However, if mutations at biologically plausible levels are used, overall fitness is compromised. In some cases this leads to “mutational meltdown,” where the effect of accumulated mutations overwhelms the population’s ability to reproduce, resulting in extinction.
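To make the qualitative picture concrete, here is a small toy simulation (my own illustrative sketch in Python, not the Basener-Sanford model or code): parents are sampled in proportion to fitness, and every offspring receives one new mutation drawn from a chosen distribution of fitness effects. With a symmetric distribution, selection pushes mean fitness up; with a distribution skewed toward deleterious effects, selection typically cannot purge the damage fast enough and mean fitness drifts downward.

import math
import random

def simulate(generations=300, pop_size=200, effect_mean=0.0,
             effect_sd=0.002, seed=1):
    """Toy Wright-Fisher-style simulation (an illustrative sketch only,
    not the Basener-Sanford model): each individual carries a log-fitness
    value, parents are drawn in proportion to fitness, and every offspring
    receives one new mutation with a normally distributed fitness effect."""
    rng = random.Random(seed)
    log_w = [0.0] * pop_size                      # log-fitness; 0.0 means fitness 1.0
    for _ in range(generations):
        weights = [math.exp(x) for x in log_w]    # selection: fitter parents reproduce more
        parents = rng.choices(log_w, weights=weights, k=pop_size)
        log_w = [p + rng.gauss(effect_mean, effect_sd) for p in parents]  # new mutations
    return sum(math.exp(x) for x in log_w) / pop_size   # mean fitness of final generation

if __name__ == "__main__":
    # Symmetric effects: beneficial and deleterious mutations equally likely.
    print("mean fitness, symmetric mutations:", round(simulate(effect_mean=0.0), 3))
    # Skewed effects: most mutations slightly deleterious.
    print("mean fitness, skewed mutations:   ", round(simulate(effect_mean=-0.005), 3))

The parameter values here are arbitrary; the point is only that the direction of fitness change depends on the distribution of mutational effects and on how much variation selection has to work with, which is the paper’s central claim.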

Extinction is the opposite of evolution. They conclude:

We have re-examined Fisher’s fundamental theorem of natural selection, focusing on the role of new mutations and consequent implications for real biological populations. Fisher’s primary thesis was that genetic variation and natural selection work together in a fundamental way that ensures that natural populations will always increase in fitness. Fisher considered his theorem to essentially be a mathematical proof of Darwinian evolution, and he likened it to a natural law. Our analysis shows that Fisher’s primary thesis (universal and continuous fitness increase) is not correct. This is because he did not include new mutations as part of his mathematical formulation, and because his informal corollary rested upon an assumption that is now known to be false.

We have shown that Fisher’s Theorem, as formally defined by Fisher himself, is actually antithetical to his general thesis. Apart from new mutations, Fisher’s Theorem simply optimizes pre-existing allelic fitness variance leading to stasis. Fisher realized he needed newly arising mutations for his theorem to support his thesis, but he did not incorporate mutations into his mathematical model. Fisher only accounted for new mutations using informal thought experiments. In order to analyze Fisher’s Theorem we found it necessary to define the informal mutational element of his work as Fisher’s Corollary, which was never actually proven. We show that while Fisher’s Theorem is true, his Corollary is false.

In this paper we have derived an improved mutation–selection model that builds upon the foundational model of Fisher, as well as on other post-Fisher models. We have proven a new theorem that is an extension of Fisher’s fundamental theorem of natural selection. This new theorem enables the incorporation of newly arising mutations into Fisher’s Theorem. We refer to this expanded theorem as “The fundamental theorem of natural selection with mutations”.

After we re-formulated Fisher’s model, allowing for dynamical analysis and permitting the incorporation of newly arising mutations, we subsequently did a series of dynamical simulations involving large but finite populations. We tested the following variables over time: (a) populations without new mutations; (b) populations with mutations that have a symmetrical distribution of fitness effects; and (c) populations with mutations that have a more realistic distribution of mutational effects (with most mutations being deleterious). Our simulations show that: (a) apart from new mutations, the population rapidly moves toward stasis; (b) with symmetrical mutations, the population undergoes rapid and continuous fitness increase; and (c) with a more realistic distribution of mutations the population often undergoes perpetual fitness decline.

Is this unfair to a historical figure? What about models developed after Fisher?

In the light of Fisher’s work, and the problems associated with it, we also examined post-Fisher models of the mutation–selection process. In the case of infinite population models, what has commonly been observed is that populations routinely go to equilibrium or a limit set — such as a periodic orbit. They do not show perpetual increase or decline in fitness, but are restricted from either behavior because of the model structure (an infinite population with mutations only occurring between pre-existing genetic varieties). On a practical level, all biological populations are finite. In the case of finite population models, the focus has been upon measuring mutation accumulation, as affected by selection. Finite models clearly show that natural populations can either increase or decrease in fitness, depending on many variables. Not only do other finite mathematical population models show that fitness can decrease — they often show that only a narrow range of parameters can actually prevent fitness decline. This is consistent with very many numerical simulation experiments, numerous mutation accumulation experiments, and observations where biological systems have either a high mutation rate or a small population size. Even when large populations are modeled, very slightly deleterious mutations (VSDMs), can theoretically lead to continuous fitness decline.

The final blow comes wrapped in compliments:

Fisher was unquestionably one of the greatest mathematicians of the twentieth century. His fundamental theorem of natural selection was an enormous step forward, in that for the first time he linked natural selection with Mendelian genetics, which paved the way for the development of the field of population genetics. However, Fisher’s theorem was incomplete in that it did not allow for the incorporation of new mutations. In addition, Fisher’s corollary was seriously flawed in that it assumed that mutations have a net fitness effect that is essentially neutral. Our re-formulation of Fisher’s Theorem has effectively completed and corrected the theorem, such that it can now reflect biological reality.

What they mean to say is stated most bluntly earlier in the article:

Because the premise underlying Fisher’s corollary is now recognized to be entirely wrong, Fisher’s corollary is falsified. Consequently, Fisher’s belief that he had developed a mathematical proof that fitness must always increase is also falsified.

That’s the “biological reality.” Fisher’s work is generally understood to mean that natural selection leads to increased fitness. While this is true taken by itself, mutation and other factors can and do reduce the average fitness of a population. According to Basener and Sanford, at real levels of mutation, Fisher’s original theorem, understood to be a mathematical proof that Darwinian evolution is inevitable, is overthrown.

Kudos to Basener and Sanford for making this important point. Now, will the textbooks and the online encyclopedia articles take note?

Sunday, 24 December 2017

On Intelligent design and designed intelligence.

Intelligent Design and Artificial Intelligence — The Connection
David Klinghoffer | @d_klinghoffer

It seems obvious on a moment’s reflection: intelligent design and artificial intelligence have something in common, and that is intelligence. What’s the significance of that? In an illuminating conversation for ID the Future, Robert Crowther talks about the connection with Dr. Robert Marks of Baylor University, co-author of the recent book  Introduction to Evolutionary Informatics.
Marks and his fellow researchers have shown that evolution isn’t computable, meaning it can’t be successfully modeled — “There exists no model successfully describing undirected Darwinian evolution,” as Marks puts it. And you know what? The qualities that make human intelligence special are similarly not computable.

That, as Professor Marks explains among other helpful observations, makes fantasies about AI robots taking over the world, developing consciousness, or displacing the human race incompatible with reality.  Listen to the podcast here.

The elixir of life?

The Wonder of Water at the Nanoscale
Evolution News @DiscoveryCSC

The last chapter of Michael Denton’s latest book, The Wonder of Water (which we’ll abbreviate WoW, and yes, the pun is intended), is the grand slam that brings all the runners home. After showing water’s incomparable role in climate, geology, and physiology, he looks deep into the cell and shows that water’s unique properties contribute to life at the nanoscale of molecular interactions. In “Water and the Cell” (especially pp. 168-177), Denton shows that H2O has been promoted from cellular stage hand to prima donna:

Water is fit for cellular physiology and biochemistry for many reasons: it has great solvation powers for all charged and polar compounds, its viscosity is right, and so forth. But water is far more than just a passive matrix. Water plays so many well-understood key roles in active processes, such as folding proteins, assembling cell membranes, and providing proton flows (especially proton flows in which water is clearly a key player if not the key player in bioenergetics), that it is already clear that it is indeed the active player in cellular physiology that Szent-Györgyi envisaged when he said, “Life is water, dancing to the tune of solids.”

This hyperbole is not meant to discount the essential role of coded information in the cell, as ID science reminds us; but as we shall see, water plays essential roles in the transfer of information in proteins and other biomolecules. Its ability to do this arises from the properties Denton discusses in chapter 7, such as viscosity, metastability and temperature range as a liquid, but it plays out in unexpected ways, some of them “well understood” but others at the frontier of biophysics.

A series of papers presented in December 2017 by the Proceedings of the National Academy of Sciences (PNAS) shows that much work needs to be done to understand water. In their introductory paper, “Chemical physics of water,” Pablo G. Debenedetti and Michael L. Klein begin with some Denton-esque WoW:

There is hardly any aspect of our lives that is not profoundly influenced by water. From climate to commerce and agriculture to health, water shapes our physical environment, regulates the major energy exchanges that determine climate on Earth, and is the matrix that supports the physical and chemical processes of life as we know it. The chemistry and physics of water, which underlie all of its uses, its necessity for life, its effects on other molecules and on the environment, are very active areas of research at the present time. So, why is this? Surprisingly, there are major gaps in knowledge and understanding that persist despite this substance’s ubiquity and central importance.

Among the ten papers that rise to the challenge is an opening perspective article by science journalist Philip Ball, “Water is an active matrix of life for cell and molecular biology.” Ball has a particular fascination with this topic, which apparently has grown over the last decade (Denton references his 2008 paper on the subject). Now he has additional data to confirm water’s “dance” with proteins as an active partner. This is best appreciated with some examples. First, he sets the stage. Notice his mention of information:

Water exhibits diverse structural and dynamical roles in molecular cell biology. It conditions and in fact partakes in the motions on which biomolecular interactions depend. It is the source of one of the key forces that dictate macromolecular conformations and associations, namely the hydrophobic attraction. It forms an extraordinary range of structures, most of them transient, that assist chemical and information-transfer processes in the cell. It acts as a reactive nucleophile and proton donor and acceptor, it mediates electrostatic interactions, and it undergoes fluctuations and abrupt phase-transition-like changes that serve biological functions. Is it not rather remarkable that a single and apparently rather simple molecular substance can accomplish all of these things? Looked at this way, there does seem to be something special about water.

Before proceeding to examples, we should look at water not as individual molecules, but as a dynamic network. Because water is a polar molecule, its “electrostatic interactions” come from hydrogen bonds with itself and with other molecules. The dynamic network of water molecules adapts to the surfaces of cellular components, attracted to hydrophilic (water-loving) points and resisting hydrophobic (water-hating) points. In a related paper in the PNAS special feature, Xi et al. experimented with artificial surfaces and with mutated proteins to try to measure how shape and amino acid position affect hydrophobicity. By their own admission, this is a “major challenge” that their efforts only bluntly addressed. Nevertheless, we need to see the typical maps of proteins as incomplete without their “hydration haloes” attached.

Hydrophobicity Facilitates Protein Hydraulics

Many proteins are known to undergo “conformational changes” (jargon for “moving parts”) essential to their functions. We’ve seen that in classic molecular machines that are icons of intelligent design, like kinesin and ATP synthase. While many of these motions use the energy of ATP, increasingly biochemists are appreciating water’s role in these movements. This is not due to simple lubrication, but to electrostatic and hydrophobic interactions in the protein environment.

One example is protein folding: “The attraction of hydrophobes in water is well-attested and is one of the key driving forces for protein folding and the formation of functional multiprotein aggregates,” Ball says. This “driving force” from hydrophobicity can actually take on a more elegant and positive functional role, however. The hydration halo around a protein gives it a greater sphere of influence.

Hydration water molecules may adopt crystallographically well-defined positions around a macromolecule, and some of these have functional roles. One might say that the surfaces of the biomolecules are not sharply defined: their sphere of influence extends beyond the van der Waals surface into the solvent, and this coupling can make the hydration shell for all intents and purposes part of the biomolecule itself, imbued with some of the information that it encodes and therefore able to play a role in intramolecular rearrangements and intermolecular recognition processes.

Ball gives some recent examples. Some are quite fascinating. Water can facilitate large-scale movements of proteins, as in this example of hydraulic power:

In the hexameric multidomain protein glutamate dehydrogenase, the opening and closing of a hydrophobic pocket are accompanied by wetting and drying of the pocket, whereas binding and unbinding of water molecules in a hydrophilic crevice accompany changes in its length. These two changes in hydration are coupled, creating a kind of “hydraulic” mechanism for large-scale conformational change.

This raises an interesting question for ID research: Do genes take into account the action of water molecules as they construct molecular machines?

Water Transmits Electrical Power

Denton briefly mentioned “proton wires” made of water molecules that efficiently transmit protons without the need for conformational changes (pp. 173-174). This form of energy transfer, because it affects protein function, amounts to information transfer as well. It comes directly out of water’s hydrogen-bonding ability. Ball gives the example of Complex I, an essential enzyme in respiration. “Water wires” in this molecular machine are part of its proton pumping function. “Here, a transient proton-conducting water channel is formed by the cooperative hydration of three antiporter-like subunits within the membrane domain of the complex.” Another example has the delicacy of a Debussy Nocturne:

Delicate marshaling of water molecules into positions that control the proton conductivity of a channel is also evident in cytochrome c oxidase, a transmembrane proton pump driven by oxygen reduction. Goyal et al. show how hydration seems to carefully tune and orchestrate this proton translocation. A glutamate residue is believed to act as a temporary proton donor, and its proton affinity is controlled by the degree of hydration in an internal hydrophobic cavity. That hydration, in turn, is governed by protonation of a substituent on the heme group 10 Å away, triggering movement of a loop that gates the cavity’s entrance.

Water Increases Active Site Functionality

The active site of a protein can be enhanced by its hydration halo. Here’s an example:

Water can help to fine-tune protein functionality in a variety of ways. It seems, for example, to enable the promiscuity of alkaline phosphatase enzymes in catalyzing the hydrolysis of a range of different phosphate and sulfate substrates. First principles simulations suggest that a part of the reason why this class of enzymes can support different types of transition state in the same active site is the differential placement of water molecules. Another form of “hydration tuning” is revealed in the chloride-pumping transmembrane retinal protein halorhodopsin of halophiles. Here, subtle rearrangements of waters and ions occur in the vicinity of the chromophore as chloride translocation progresses, inducing changes in chromophore bond lengths that affect its absorption spectrum.

One other example of the functional role of a hydration network is found in antifreeze proteins. We know that ice crystals forming during freezing can disrupt cells, but water molecules working in concert with proteins can actually disrupt freezing:

The involvement of bound water can be more exotic. The fish antifreeze protein Maxi is a four-helix bundle with an interior, mostly hydrophobic channel filled with more than 400 water molecules, crystallographically ordered into a clathrate-like network of predominantly five-membered rings. It seems that this ordered network extends outward through the gaps between the helices to create an ordered layer of water molecules on the outer surface that enables Maxi to bind to ice crystals and hinder their growth, acting as a kind of “molecular Velcro” for ice binding.

Conclusions

Ball ends with the astrobiological question of whether life could arise on other planets without water. It’s hard to answer, he says, given water’s dynamic nature, and some have gone too far by ascribing mystical powers to “biological water,” to the point of mythology. But his gut feeling is that it’s riskier to underestimate the importance of water:

However, if we should abandon notions of some special and well-defined phase called biological water, there does not seem to be any prospect of or virtue in returning water to its humble position of life’s canvas. It is a versatile, responsive medium that blurs the boundaries between mechanism and matrix. It surely is special; we might have to depend on either synthetic biology or observational astrobiology to tell us just how special.


Water, indeed, appears very special at the nanoscale. Denton, however, expands its specialness to the planetary scale and all scales in between, showing a multitude of essential roles for this amazing molecule that makes our planet not only habitable, but enjoyable. There’s nothing mystical about that when considered from a design perspective.

On Demons:The Watchtower Society's commentary.

DEMON

An invisible, wicked, spirit creature having superhuman powers. The common Greek word for demon (daiʹmon) occurs only once in the Christian Greek Scriptures, in Matthew 8:31; elsewhere the word dai·moʹni·on appears. Pneuʹma, the Greek word for “spirit,” at times is applied to wicked spirits, or demons. (Mt 8:16) It also occurs qualified by terms such as “wicked,” “unclean,” “speechless,” and “deaf.”—Lu 7:21; Mt 10:1; Mr 9:17, 25; see SPIRIT (Spirit Persons).

The demons as such were not created by God. The first to make himself one was Satan the Devil (see SATAN), who became the ruler of other angelic sons of God who also made themselves demons. (Mt 12:24, 26) In Noah’s day disobedient angels materialized, married women, fathered a hybrid generation known as Nephilim (see NEPHILIM), and then dematerialized when the Flood came. (Ge 6:1-4) However, upon returning to the spirit realm, they did not regain their lofty original position, for Jude 6 says: “The angels that did not keep their original position but forsook their own proper dwelling place he has reserved with eternal bonds under dense darkness for the judgment of the great day.” (1Pe 3:19, 20) So it is in this condition of dense spiritual darkness that they must now confine their operations. (2Pe 2:4) Though evidently restrained from materializing, they still have great power and influence over the minds and lives of men, even having the ability to enter into and possess humans and animals, and the facts show that they also use inanimate things such as houses, fetishes, and charms.—Mt 12:43-45; Lu 8:27-33; see DEMON POSSESSION.

The purpose of all such demonic activity is to turn people against Jehovah and the pure worship of God. Jehovah’s law, therefore, strictly forbade demonism in any form. (De 18:10-12) However, wayward Israel went so far astray as to sacrifice their sons and daughters to the demons. (Ps 106:37; De 32:17; 2Ch 11:15) When Jesus was on earth demon influence was very prevalent, and some of his greatest miracles consisted of expelling wicked spirits from victimized persons. (Mt 8:31, 32; 9:33, 34; Mr 1:39; 7:26-30; Lu 8:2; 13:32) Jesus gave this same power to his 12 apostles and to the 70 that he sent out, so that in the name of Jesus they too could cast out the demons.—Mt 10:8; Mr 3:14, 15; 6:13; Lu 9:1; 10:17.

Demon influence in human affairs is no less manifest today. It is still true that “the things which the nations sacrifice they sacrifice to demons.” (1Co 10:20) In the last book of the Bible, the “revelation by Jesus Christ, which God gave him, to show his slaves the things that must shortly take place,” prophetic warning is given concerning accelerated demon activity on the earth. (Re 1:1) “Down the great dragon was hurled, the original serpent, the one called Devil and Satan, who is misleading the entire inhabited earth; he was hurled down to the earth, and his angels [demons] were hurled down with him. On this account . . . woe for the earth and for the sea, because the Devil has come down to you, having great anger, knowing he has a short period of time.” (Re 12:9, 12) Unclean, froglike expressions “are, in fact, expressions inspired by demons and perform signs, and they go forth to the kings of the entire inhabited earth, to gather them together to the war of the great day of God the Almighty.”—Re 16:13, 14.

Christians must, therefore, put up a hard fight against these unseen wicked spirits. James, in arguing that belief alone is not sufficient, says: “You believe there is one God, do you? You are doing quite well. And yet the demons believe and shudder.” (Jas 2:19) “In later periods of time,” warned Paul, “some will fall away from the faith, paying attention to misleading inspired utterances and teachings of demons.” (1Ti 4:1) One cannot eat of Jehovah’s table and at the same time feed from the table of demons. (1Co 10:21) The faithful, therefore, must put up a hard fight against the Devil and his demons, “against the world rulers of this darkness, against the wicked spirit forces in the heavenly places.”—Eph 6:12.

To the Greeks to whom Paul preached, what were demons?

This use of the word “demon” is narrow and specific compared with the notions of ancient philosophers and the way the word was used in classical Greek. In this regard the Theological Dictionary of the New Testament, edited by G. Kittel (Vol. II, p. 8) remarks: “The meaning of the adj[ective dai·moʹni·os] brings out most clearly the distinctive features of the G[ree]k conception of demons, for it denotes that which lies outwith human capacity and is thus to be attributed to the intervention of higher powers, whether for good or evil. [To dai·moʹni·on] in pre-Christian writers can be used in the sense of the ‘divine.’” (Translated and edited by G. Bromiley, 1971) When speaking controversially with Paul, some Epicurean and Stoic philosophers concluded: “He seems to be a publisher of foreign deities [Gr., dai·mo·niʹon].”—Ac 17:18.

When speaking to the Athenians, Paul used a compound of the Greek word daiʹmon, saying: “You seem to be more given to the fear of the deities [Gr., dei·si·dai·mo·ne·steʹrous; Latin Vulgate, ‘more superstitious’] than others are.” (Ac 17:22) Commenting on this compound word, F. F. Bruce remarks: “The context must decide whether this word is used in its better or worse sense. It was, in fact, as vague as ‘religious’ in Eng[lish], and here we may best translate ‘very religious’. But AV ‘superstitious’ is not entirely wrong; to Paul their religion was mostly superstition, as it also was, though on other grounds, to the Epicureans.”—The Acts of the Apostles, 1970, p. 335.


When speaking to King Herod Agrippa II, Festus said that the Jews had certain disputes with Paul concerning their “worship of the deity [Gr., dei·si·dai·mo·niʹas; Latin Vulgate, ‘superstition’].” (Ac 25:19) It was noted by F. F. Bruce that this Greek word “might be less politely rendered ‘superstition’ (as in AV). The corresponding adjective appears with the same ambiguity in [Acts] 17:22.”—Commentary on the Book of the Acts, 1971, p. 483.

Stalinism redux? IV

New World Translation Remains Banned in Russia


Today, the Leningrad Regional Court denied our appeal of the Vyborg City Court’s August ruling that banned the Russian-language New World Translation of the Holy Scriptures (NWT), declaring it an extremist publication. Approximately 30 people attended the hearing, including representatives of the embassies of Britain, the Netherlands, Switzerland, and the United States.

Several times during the course of the hearing, the defense presented clear evidence demonstrating the bias and unqualified nature of the court-appointed expert study, which claimed the NWT is not a Bible, paving the way for it to be declared “extremist.”

The so-called experts were resolute in their claim that the NWT is not a Bible because it allegedly does not refer to itself as one. The defense, however, drew attention to page five of the 2007 Russian-language NWT, which clearly states: “This is a new translation of the Bible into Russian.” The defense criticized the so-called experts, who worked for 287 days on their review and yet missed this simple fact in the third paragraph of the NWT’s foreword.

When questioned, one of the court-appointed experts further defended her original claim by stating that the NWT cannot be considered a Bible unless it is marked "by the blessing of the patriarch" or matches word for word with such a translation. The “experts” also objected to the use of God’s personal name, Jehovah, and claimed that the text of the NWT does not support certain church dogma. The judge rejected the defense’s motions for a new, unbiased expert study of the NWT.

After their appeal was rejected today, our brothers have no other remedy available within the Russian legal system and will submit the case to the European Court of Human Rights.

Jehovah’s Witnesses around the world are confident that no human institution can succeed in wiping out God’s Word and that present attempts to prevent its spread will ultimately fail.—Isaiah 40:8.

Saturday, 23 December 2017

Vox populi vox Dei?:Pros and cons.

In search of high quality ignorance?

Unanswered Questions: New York Times Highlights the Benefits of Teaching "Ignorance" in Science
Sarah Chaffee September 4, 2015 12:11 PM 

Concerned that his students thought they now understood the brain after studying the course's 1400+ page textbook, Dr. Stuart Firestein, neuroscientist and chairman of the Department of Biological Sciences at Columbia University, wrote Ignorance: How It Drives Science. He was afraid his students might come away with the idea that science has all the answers. His book takes a more realistic view, describing scientific discovery as "feeling around in dark rooms, bumping into unidentifiable things, looking for barely perceptible phantoms."

In a recent op-ed in the New York Times, Jamie Holmes, author of the forthcoming book Nonsense: The Power of Not Knowing, shared Firestein's story to emphasize the role of ignorance in education. He explains that ignorance can catalyze curiosity and prompt questions in fields from science to business to education:

As [Firestein] argued in his 2012 book "Ignorance: How It Drives Science," many scientific facts simply aren't solid and immutable, but are instead destined to be vigorously challenged and revised by successive generations. ...
Presenting ignorance as less extensive than it is, knowledge as more solid and more stable, and discovery as neater also leads students to misunderstand the interplay between answers and questions.

People tend to think of not knowing as something to be wiped out or overcome, as if ignorance were simply the absence of knowledge. But answers don't merely resolve questions; they provoke new ones...

But giving due emphasis to unknowns, highlighting case studies that illustrate the fertile interplay between questions and answers, and exploring the psychology of ambiguity are essential. Educators should also devote time to the relationship between ignorance and creativity and the strategic manufacturing of uncertainty.

... Our students will be more curious -- and more intelligently so -- if, in addition to facts, they were equipped with theories of ignorance as well as theories of knowledge.

It's encouraging to find a discussion like this in what might seem an unlikely place. At Discovery Institute, we support critical analysis of ideas about evolution and the origin of life precisely because those are issues where many answers remain as yet unknown. Teaching students about issues where there are more questions than answers fosters high-level learning.

The science of the past two centuries has dramatically expanded our knowledge, from the invention of computers and the Internet to making open-heart surgery possible. But there are still many mysteries, and not just at the margins either. Teaching only about our positive scientific knowledge is not enough. Quality science education informs students about areas of certainty and about those where inquiry is ongoing.

Alluding to Thomas Kuhn, Holmes notes that acknowledging ignorance causes us to confront our preconceptions. In The Structure of Scientific Revolutions, Kuhn stated that when faced with an anomaly, a theory's defenders "will devise numerous articulations and ad hoc modifications of their theory in order to eliminate any apparent conflict." But eventually, given enough anomalies, the old theory will be replaced. Confronting unknowns is an essential part of scientific progress.

Chemical evolution -- the development of the first cell -- is clouded with mystery. 2007 Priestley Medalist George M. Whitesides wrote, "Most chemists believe, as do I, that life emerged spontaneously from mixtures of molecules in the prebiotic Earth. How? I have no idea." Similarly, leading molecular biologist Eugene Koonin has noted:

The origin of life is one of the hardest problems in all of science, but it is also one of the most important. Origin-of-life research has evolved into a lively, inter-disciplinary field, but other scientists often view it with skepticism and even derision. This attitude is understandable and, in a sense, perhaps justified, given the "dirty" rarely mentioned secret: Despite many interesting results to its credit, when judged by the straightforward criterion of reaching (or even approaching) the ultimate goal, the origin-of-life field is a failure -- we still do not have even a plausible coherent model, let alone a validated scenario, for the emergence of life on Earth. Certainly, this is due not to a lack of experimental and theoretical effort, but to the extraordinary intrinsic difficulty and complexity of the problem. A succession of exceedingly unlikely steps is essential for the origin of life, from the synthesis and accumulation of nucleotides to the origin of translation; through the multiplication of probabilities, these make the final outcome seem almost like a miracle.
Koonin acknowledges that some progress has been made, but falls back on the controversial multiverse theory to explain how life sprang into existence against all odds.

The enigma of biological origins offers an ideal opportunity for students to learn about a field of persistent scientific uncertainty, instead of simply being spoon-fed "facts." Our Science Education Policy states:

Instead of mandating intelligent design, Discovery Institute seeks to increase the coverage of evolution in textbooks. It believes that evolution should be fully and completely presented to students, and they should learn more about evolutionary theory, including its unresolved issues. In other words, evolution should be taught as a scientific theory that is open to critical scrutiny, not as a sacred dogma that can't be questioned.
Indeed, the sense of mystery has driven some of the very greatest scientists. Isaac Newton put it well when he said, "I do not know what I may appear to the world, but to myself I seem to have been only like a boy playing on the seashore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me."

Learning based on active engagement and critical thinking promotes understanding and excitement. As Holmes writes, "Questions don't give way to answers so much as the two proliferate together. Answers breed questions. Curiosity isn't merely a static disposition but rather a passion of the mind that is ceaselessly earned and nurtured."

Firestein now teaches a class on scientific ignorance. Hoping to acquaint his students with the world of inquiry, he invites scientists from diverse fields to come and lecture -- not about their discoveries, but about what they don't know. Our stance on academic freedom merely recognizes that his beneficial pedagogical philosophy should extend to the teaching of evolution, no less than any other area of study.

The alliance is dead?:Pros and cons.

Vladimir Putin has made Russia great again?:Pros and cons.

On the making of a bomb-thrower II

Why the designer is King of the mountain and not God of the gaps.

Stephen Meyer Debunks the “God of the Gaps” Objection
David Klinghoffer | @d_klinghoffer


 A vacuous yet often heard objection to intelligent design decries ID as a “God of the gaps” argument. Here, Stephen Meyer requires less than three minutes to show that the complaint rests on a failure to understand a basic feature of the theory of ID.
Yet the point comes up again and again. The group BioLogos, for one, in promoting theistic evolution, starts this way in answer to what they call a “common question,” “Are gaps in scientific knowledge evidence for God?”


God-of-the-gaps arguments use gaps in scientific explanation as indicators, or even proof, of God’s action and therefore of God’s existence. Such arguments propose divine acts in place of natural, scientific causes for phenomena that science cannot yet explain. The assumption is that if science cannot explain how something happened, then God must be the explanation. But the danger of using a God-of-the-gaps argument for the action or existence of God is that it lacks the foresight of future scientific discoveries. With the continuing advancement of science, God-of-the-gaps explanations often get replaced by natural mechanisms. Therefore, when such arguments are used as apologetic tools, scientific research can unnecessarily be placed at odds with belief in God. The recent Intelligent Design (ID) movement highlights this problem.
No, that is wrong, as Meyer explains. It’s a topic elaborated, among many others, in the vast yet sprightly new volume, Theistic Evolution: A Scientific, Philosophical, and Theological Critique, of which Dr. Meyer served as one of the editors.

May I add a further observation? If you don’t mind, ID is not an “apologetic tool,” as BioLogos puts it. It’s not a “tool” at all, except in the scientific sense, as a heuristic, a methodology for getting at the truth about life and about nature. It’s not a “tool” to use on people, to keep the kids from getting rowdy and disbelieving what their teacher says in class. This misunderstanding is common to theistic and atheistic evolutionists: they think too much in terms of winning recruits or keeping troops in line, rather than finding out what’s true, wherever the quest may take you.

Please, refrain from using a “tool” on me. I don’t care for that approach in a religious or a scientific context, and I can’t help but think many others, of whatever spiritual community or of none, must likewise find it patronizing.

It’s the difference between being treated as a child, or treated as an adult. That may be ID’s greatest strength — it speaks to us as adults — and one of the biggest turnoffs of theistic evolution.

On Darwinism and the magic kingdom.

Listen: Walt Disney on Evolution…Yes, Evolution
David Klinghoffer | @d_klinghoffer

For many of us, encounters with the imagination of Walt Disney were key influences in childhood, and beyond. When I was growing up in Southern California, visits to Disneyland were a touchstone ritual for me — and so too for our colleague John West, the CSC’s associate director. In a fascinating and, frankly, thoroughly charming new episode of ID the Future, he recalls a first visit at the age of five, and an ongoing fascination thereafter.


Part of the interest here lies in the less-than-obvious question of how Disney shaped young people’s understanding of evolution. Evolution? Yes, that surprised me too when I first heard it. But consider. Dr. West examines theme park and World’s Fair attractions and, above all, the enigmatic Fantasia with its iconic sequence “The Rite of Spring.” The author of Walt Disney and Live Action, among his other books, West shows that Walt Disney had not only a long-standing curiosity about evolution but some startlingly subtle, even contemporary points to make about it. You’ll enjoy listening.

Sunday, 17 December 2017

Using design to disprove design?

The Origin of Life: Dangers of Taking Research Claims at Face Value
Brian Miller

In an article here yesterday, I wrote about philosopher Vincent Torley’s critique of my posts related to the origin of life, and I corrected his errors on thermodynamics. Today, I will correct the errors related to the state of origins research. As a general overview, origin-of-life research falls into two categories. The first is experiments that attempt to accurately model the conditions on the early Earth. The classic example is the Stanley Miller experiment, which started with a combination of reducing gases (methane, ammonia, and hydrogen) then believed to have existed; the researchers applied electrical discharges to the mixture. The resulting reactions produced several amino acids, which was heralded as a major breakthrough.

Unfortunately, scientists later recognized that the early atmosphere was not likely so reducing. Instead, it contained a different combination of gases, including carbon dioxide. All subsequent experiments conducted with more realistic starting ingredients failed to produce the building blocks of life (amino acids, carbohydrates, nucleotides, and lipids) in significant quantities. An additional challenge for all such experiments, including Miller’s, was that they produced other byproducts that would have caused deleterious cross-reactions. Such conditions would have prevented any subsequent stages leading to life. All roads led to dead ends.

The consistent failure of realistic experiments led to a second class of experiments, which do not attempt to model actual conditions on the early Earth. Instead, they follow what is termed prebiotic synthesis. Origins expert Robert Shapiro outlined the typical process used for RNA in his analysis of origin-of-life research. Such experiments involve a long series of highly orchestrated steps, which include purifying desired products, removing unwanted byproducts, changing physical and chemical conditions, adding unrealistically high concentrations of assisting substances, and other interventions to ensure that the target molecules are achieved.

Attempting to relate such research to actual events on the early Earth leads to an almost comical series of dozens of highly improbable events. Various proposed origins scenarios over the years have involved meteorite showers, volcanoes, poisonous gas, and other phenomena, coupled to the precise transportation of lucky molecules through a series of subsequent environments while always passing through the perfect intermediate conditions. Torley actually describes just such a fanciful scenario proposed by Sutherland. As an amusing side note, a friend reviewed origins research, and she was not sure if she was reading about scientific theories or the synopsis of the next Michael Bay natural disaster movie. Ironically, such synthesis experiments actually bolster the design argument by demonstrating that the origin of the building blocks of life and their subsequent assembly require substantial intelligent direction.

My previous article described how two of the major obstacles to the origin of life are overcoming the free-energy barriers and producing the fantastically improbable configurations of atoms associated with life. The synthetic experiments bypass these challenges through intelligent intervention. As an illustration, the origin of complex molecules such as RNA and lipids must start with high free-energy solutions of reactants. However, the abundance of such sets of molecules under natural conditions drops exponentially with their free energy. Researchers overcome this challenge by starting with highly concentrated solutions of the ideal combination of pure chemicals. Highly concentrating the chemicals artificially increases their effective free energies, so reactions are driven in the desired direction.
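The thermodynamic bookkeeping behind that last point is standard textbook chemistry (general relations, not anything specific to these experiments). The free-energy change of a reaction in solution depends on concentrations through

\Delta G = \Delta G^{\circ} + RT \ln Q

where Q is the ratio of product to reactant concentrations (strictly, activities). Loading a flask with concentrated, purified reactants makes Q small and the RT \ln Q term strongly negative, which can push \Delta G below zero and drive a reaction that would not proceed at the trace concentrations plausible on the early Earth. That concentration step is work supplied by the experimenter.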

In reality, many of the proposed starting molecules for origins theories would have quickly reacted on the early Earth with other molecules in the environment, preventing substantial buildup (see The Mystery of Life’s Origin, Ch. 4). This challenge also holds true for the origination of any autocatalytic system of reactions, which is another essential component of life’s origins. The dilemma is similar to that of an entrepreneur who wishes to start a business to generate a profit, but starting it requires a million dollars as an initial investment. Unfortunately, the entrepreneur is destitute and has no credit for borrowing the needed capital. As a result, he has no way to take even the first step.

The configurational challenge relates to the fact that vast numbers of chemical reactions could take place on the early Earth. However, life’s origin requires that only specific ones proceed while other, far more likely ones are blocked. This hurdle relates both to the origin of the building blocks and to the origin of cellular metabolism. In addition, in large molecules the atoms can take on numerous configurations, and the right ones are exceptionally unlikely. Shapiro described how the atoms in RNA could form hundreds of thousands to millions of other stable organic molecules. Researchers overcome this challenge by forcing the atoms into the desired arrangements through tightly controlling the reaction steps. Such constraining of outcomes parallels the role of information in constraining messages in information theory. And the relationship between information and precise causal control in biology was made explicit in the talk by Paul Griffiths at the Royal Society meeting on New Trends in Evolutionary Biology.
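A back-of-the-envelope way to put a number on the configurational point (my own framing, not a calculation from Shapiro): if a set of atoms can settle into N roughly equally accessible stable arrangements and only one of them is the functional one, then specifying that arrangement corresponds to about

I = \log_2 N \ \text{bits}

of information. Shapiro’s “hundreds of thousands to millions” of stable alternatives to RNA would amount to roughly 17 to 20 bits for that single choice, and such costs multiply across every constrained reaction step.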

To summarize, researchers have shown how the origin of life might proceed through intelligent design, not blind processes. Shapiro illustrates this point beautifully in analyzing the experiments of John Sutherland, but his comments relate to all such experiments.

Reviewing Sutherland’s proposed route, Shapiro noted that it resembled a golfer, having played an 18-hole course, claiming that he had shown that the golf ball could have, through some combination of wind, rain, heating, cooling, dehydration, and ultraviolet irradiation, played itself around the course without the golfer’s presence.

In Torley’s article he references several prebiotic synthesis experiments, but he fails to appreciate their irrelevance to the origins problem for the reasons outlined above. For instance, he describes how Sutherland and other researchers used ultraviolet light to help promote reactions leading to life. What Torley missed was that these experiments used a very specific wavelength of light (e.g., 240 nanometers) at the ideal intensity for the optimal amount of time to drive the desired reactions. If the experiments had used light mimicking that from the sun hitting the early Earth, they would have failed, since other wavelengths would have destroyed the target molecules. The difference between the use of light in the experiments and the actual sun parallels the difference between the fire from a blowtorch used by a skilled craftsman and an open fire burning down a building.

Torley also describes how different researchers were able to drive key reactions even when they contained contaminants. For instance, Sutherland included a phosphate at the beginning of his experiments designed to create nucleotides. Similarly, Jack Szostak’s group created vesicles (containers) out of two fatty acids, which could house an RNA enzyme (ribozyme), and added Mg2+, which under other conditions would have prevented the vesicles from forming. However, the relevance of these experiments was greatly exaggerated.

The use of such terms as “contaminant” and “messy” is highly misleading. Phosphate is an essential component of the target nucleotide molecules, and Mg2+ was essential for activating the ribozymes. The researchers were able to include these molecules because the experiments were meticulously designed to ensure they would produce the desired outcomes. If molecules that would have been abundant on the early Earth (true contaminants) had been added, the experiments would have failed. As an analogy, the researchers resemble car owners boasting about how their car engines could function even in the presence of such “contaminants” as gasoline and motor oil. However, if sand and glue were added, the engines would have fared far less well.

Torley mentions one additional class of studies, which use simulations to attempt to address origin-of-life challenges. Specifically, he references Nigel Goldenfeld’s research to solve the homochirality problem — many building blocks of life can come in either a right-handed or a left-handed form, but life requires only one handedness (homochirality). The results from simulation experiments are generally treated with great caution, since they can be designed to model any imaginable conditions and to proceed according to any desired rules.

As a case in point, Goldenfeld’s study is based on an abstract mathematical model and numerical simulations that center on an achiral molecule (one identical to its mirror image) interacting with the right- and left-handed versions (enantiomers) of a chiral molecule to yield another copy of the latter. For instance, the “autocatalytic” reaction could start with one left-handed amino acid and end with two left-handed amino acids. The simulation set the dynamics of the reactions to eventually lead to a pure mixture of one enantiomer.
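For readers curious what such a simulation amounts to in practice, here is a minimal Python sketch of the classic Frank-type autocatalytic scheme, a generic textbook model of this kind and not Goldenfeld’s actual equations: each enantiomer copies itself from a constant achiral feedstock, and the two hands destroy each other on contact, so any tiny initial excess is amplified until one hand dominates.

def frank_model(L0=0.5005, D0=0.5, A=1.0, k=1.0, mu=1.0,
                dt=0.001, steps=15000):
    """Minimal Frank-type autocatalysis sketch (a generic textbook model,
    not Goldenfeld's simulation): each enantiomer replicates from a constant
    achiral feedstock A (rate k*A*X), and the two hands annihilate each
    other on contact (rate mu*L*D)."""
    L, D = L0, D0
    for _ in range(steps):
        dL = (k * A * L - mu * L * D) * dt
        dD = (k * A * D - mu * L * D) * dt
        L, D = L + dL, D + dD
    ee = (L - D) / (L + D)    # enantiomeric excess: 0 = racemic, 1 = pure L
    return L, D, ee

if __name__ == "__main__":
    # The tiny initial excess of L (0.0005) is amplified until D is wiped out;
    # because the feedstock is held constant, the winning hand keeps growing.
    L, D, ee = frank_model()
    print(f"L = {L:.3g}, D = {D:.3g}, enantiomeric excess = {ee:.3f}")

The model produces homochirality only because its rules were written to do so: a self-replication step that no known prebiotic chiral molecule has been shown to perform, plus a built-in mutual antagonism between the two hands.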


The main challenge with these results is that the underlying model is completely unrealistic. No chiral building block of life (e.g. right-handed ribose) has been shown to interact with any substance to self-replicate. On the contrary, in all realistic environments mixtures with a bias of one enantiomer tend toward mixtures of equal percentages of both left-handed and right-handed versions. Goldenfeld “solved” the homochirality problem by creating an artificial world that eliminated all real-world obstacles. All simulations that purport to be breakthroughs in origins problems follow this same pattern. Conditions are created that remove the numerous practical challenges, and the underlying models are biased toward achieving the desired results.

The skilled trades;still the smart choice. III