Sunday 4 December 2016

How the science industrial complex sucks up all the oxygen in the room.

"Under the Banyan Tree Nothing Grows"
David Klinghoffer 

James Le Fanu uses that image, a South Indian proverb, to describe the way Big Science devours billions of dollars a year while the productions of this vast government industry seem startlingly and increasingly barren of significance. We've left behind the era of great discoveries of the century past -- permanently, so it seems -- and now find ourselves awash in outpourings of published research that add up, says Le Fanu, to "surprisingly little." Don't believe him? Just follow the website Science Daily for a week.

Dr. Le Fanu, a peer-reviewed medical scientist, medical historian, and one of my favorite science writers, flew into Seattle from London last week and my wife and I had the pleasure of going to lunch with him. A most charming guy who, with the waitress at the sushi restaurant we took him to on Lake Union, suavely passed over the fact that the crab custard he ordered did not go down well at all.

He pointed out to me that much of the thesis of his wonderful book Why Us? -- which I reviewed here in Parts I, II, III, IV, and V -- is crystallized in an essay he wrote for the British magazine Prospect last year that I somehow missed. I pass the latter along to you now. Of the "banyan tree of Big Science," he writes that it

...threatens to extinguish the true spirit of intellectual inquiry. Its mega projects organized on quasi-industrial lines may be guaranteed to produce results, but they are inimical to fostering those traits that characterize the truly creative scientist: independence of judgment, stubbornness and discontent with prevailing theory. Big Science is intrinsically conservative in its outlook, committed to "more of the same," the results of which are then interpreted to fit in with the prevailing understanding of how things are. Its leading players who dominate the grant-giving bodies will hardly allocate funds to those who might challenge the certainties on which their reputations rest.
He argues that the evident "diminishing returns" we're seeing from super-funded science have something to do with a brick wall that materialism has run up against without acknowledging that it has done so. Thrilling technological breakthroughs of the 1980s promised to get to the heart of the mystery of life -- how it constitutes itself from the genetic code, how it flowers in human consciousness.
Scientists expected that, respectively, mapping the genome and scanning the brain as it works would lay bare these enigmas. The expectations were cruelly disappointed, however.

The genome projects were predicated on the reasonable assumption that spelling out the full sequence of genes would reveal the distinctive genetic instructions that determine the diverse forms of life. Biologists were thus understandably disconcerted to discover that precisely the reverse is the case. Contrary to all expectations, there is a near equivalence of 20,000 genes across the vast spectrum of organismic complexity, from a millimeter-long worm to ourselves. It was no less disconcerting to learn that the human genome is virtually interchangeable with that of both the mouse and our primate cousins, while the same regulatory genes that cause, for example, a fly to be a fly, cause humans to be human. There is in short nothing in the genomes of fly and man to explain why the fly has six legs, a pair of wings and a dot-sized brain and that we should have two arms, two legs and a mind capable of comprehending the history of our universe.
The genetic instructions must be there -- for otherwise the diverse forms of life would not replicate their kind with such fidelity. But we have moved in the very recent past from supposing we might know the principles of genetic inheritance to recognizing we have no conception of what they might be.

It has been a similar story for neuroscientists with their sophisticated scans of the brain "in action." Right from the beginning, it was clear that the brain must work in ways radically different from those supposed. Thus the simplest of tasks, such as associating the noun "chair" with the verb "sit" cause vast tracts of the brain to "light up" -- prompting a sense of bafflement at what the most mundane conversation must entail. Again the sights and sounds of every transient moment, it emerged, are fragmented into a myriad of separate components without the slightest hint of the integrating mechanism that would create the personal experience of living at the centre of a coherent, unified, ever-changing world...

Meanwhile the great conundrum remains unresolved: how the electrical activity of billions of neurons in the brain translate into the experiences of our everyday lives -- where each fleeting moment has its own distinct, intangible feel: where the cadences of a Bach cantata are so utterly different from the taste of bourbon or the lingering memory of that first kiss.

Taking a somewhat David Berlinski-esque stance, Le Fanu is not an intelligent-design advocate. Instead, he honestly confronts readers with the secret, continuously hushed up because so much money is at stake, that materialist science has failed to gain access to its own imagined holy of holies.
Or come to think of it, strike and amend that metaphor. It makes me think of the story of how the Roman general Pompey in 63 BCE led his invaders into the Jerusalem Temple. He expected to enter the actual Holy of Holies and find there some material representation of the Deity. After all, those are the terms in which Pompey, like most scientists today, reflexively thought. Fighting his way past the priests, butchering them as he went, the Roman stepped into the most sacred room and was shocked by what he found.

I imagine his dismay and perhaps, too, his secret fear. For the small room was utterly empty.

Read the rest of Le Fanu for yourself.

Alas for OOL speculation, there was no soup.

Archaean Microfossils and the Implications for Intelligent Design
Casey Luskin

News came this week reporting the discovery of ~3.4-billion-year-old microfossils from Archaean rocks in Western Australia. As Nature suggests, they "could be the oldest microbial fossils yet documented," further quoting a paleobiologist who states:

The authors have demonstrated as robustly as possible, given current techniques and the type of preservation, the biological origin of these microstructures.
Always thinking from a materialistic perspective, the New York Times notes that these microfossils imply life arose "surprisingly soon" after the existence of life became even a possibility:
Their assertion, if sustained, confirms the view that life evolved on earth surprisingly soon after the Late Heavy Bombardment, a reign of destruction in which waves of asteroids slammed into the primitive planet, heating the surface to molten rock and boiling the oceans into an incandescent mist. The bombardment, which ended around 3.85 billion years ago, would have sterilized the earth's surface of any incipient life.
A new paper in Nature Geoscience officially reports the discovery:
Here we report the presence of microstructures from the 3.4-billion-year-old Strelley Pool Formation in Western Australia that are associated with micrometre-sized pyrite crystals. The microstructures we identify exhibit indicators of biological affinity, including hollow cell lumens, carbonaceous cell walls enriched in nitrogen, taphonomic degradation, organization into chains and clusters, and δ13C values.1
Claims of evidence of ancient life from rocks in Western Australia from about this time period are nothing new. In 1980, two papers in Nature reported 3.4 to 3.5 billion-year-old stromatolite fossils from the Warrawoona Group in Western Australia.2 Specific microfossils could not be seen but these overall structures appeared to be similar to bacterial mats known from the present day. Then, in 1987 and 1993, UCLA paleobiologist J. William Schopf published papers in Science reporting actual microfossils from the same group.3 There are even reports of geochemical signatures of life in rocks dating all the way back to 3.8 billion years ago.4
Schopf's findings were later criticized by an Oxford paleobiologist named Martin D. Brasier.5 In a twist of irony, Brasier is a co-author of this new paper which claims to have found microfossils from a different locality of about the same age.

According to an earth scientist quoted by the New York Times, "Schopf still very strongly defends his original claim." Thus, whether we're talking about proponents of Schopf's microfossils or critics, leading scientists on all sides of this question believe that full-fledged cellular life existed on earth by 3.4 billion years ago.


Time Isn't On Their Side
What are the implications of these findings for the debate about intelligent design? Materialists often suggest that blind and unguided chemical reactions -- cheered on by electricity, heat, other forms of energy, and vast eons of time -- spontaneously formed a self-replicating molecule which then evolved through unguided processes into life as we know it. Origin-of-life theorist George Wald captured the spirit of this perspective in a paper written in 1954:

Given so much time, the "impossible" becomes possible, the possible probable, and the probable virtually certain. One only has to wait: Time itself performs the miracles.6
As we've seen, life could not have existed on earth when the earth first formed because the early earth was a hostile place as a result of impacts during the heavy bombardment period. Thus, Stephen Jay Gould explains that, contrary to Wald, the amount of time available for the origin of life is not vast and unending, but extremely limited:
Since the oldest dated rocks, the Isua Supracrustals of West Greenland, are 3.8 billion years old, we are left with very little time between the development of suitable conditions for life on the earth's surface and the origin of life.7
Likewise, origin-of-life theorist Cyril Ponnamperuma stated that "we are now thinking, in geochemical terms, of instant life..."8
The new reports of early microfossils from the Archaean provide more evidence confirming that life existed very soon after the earth became hospitable to life. As Brasier was recently quoted as saying: "This goes some way to resolving the controversy over the existence of life forms very early in Earth's history. The exciting thing is that it makes one optimistic about looking at early life once again." (emphasis added)

This dramatically limits the amount of time, and thus the probabilistic resources, available to those who wish to invoke purely unguided and purposeless material processes to explain the origin of life.

But even if many billions of years had been available for the origin of life on earth, that would still be insufficient time for life to form via blind material causes. To further understand why there are insufficient probabilistic resources to explain many key steps in the origin of life -- particularly in forming the first self-replicating molecules -- see some of these recent articles here on ENV:

The Origin of Life: An RNA World?, by Jonathan M. 
New Scientist Weighs in on the Origin of Life, by Jonathan M.
Presto! The Origin of Life in Four Surprisingly Easy Steps, by Casey Luskin
Probably the most comprehensive treatment of why there are insufficient probabilistic resources to explain the natural unguided chemical origin of life is Stephen C. Meyer's book Signature in the Cell.
As Meyer explains, intelligence is the one known cause that can rapidly generate the kind of high levels of complex and specified information that we observe in life. ID can easily accommodate evidence of rapid appearance of life on earth, whereas this new microfossil evidence pushes materialist explanations even further beyond the available probabilistic resources.

While the NY Times says these microfossils show life existed "surprisingly soon" after the earth became hospitable, ID proponents aren't surprised by evidence for early life. Materialists are surprised because they expected much more time would be needed for the origin of life to take place.

References Cited:
[1.] D. Wacey, M. R. Kilburn, M. Saunders, J. Cliff and M. D. Brasier, "Microfossils of sulphur-metabolizing cells in 3.4-billion-year-old rocks of Western Australia," Nature Geoscience, DOI: 10.1038/NGEO1238 (2011).

[2.] See D. R. Lowe, "Stromatolites 3,400-Myr old from the Archean of Western Australia," Nature, Vol. 284:441-443 (April 3, 1980); M.R. Walter, R. Buick, J.S.R. Dunlop, "Stromatolites 3,400-3,500 Myr old from the North Pole area, Western Australia," Nature, Vol. 284:443-445 (April 3, 1980). See also H. J. Hofmann, K. Grey, A. H. Hickman and R. I. Thorpe, "Origin of 3.45 Ga coniform stromatolites in Warrawoona Group, Western Australia," Geological Society of America Bulletin, Vol. 111:1256-1262 (August, 1999).

[3.] See J. W. Schopf and B. M. Packer, "Early Archean (3.3-Billion to 3.5-Billion-Year-Old) Microfossils from Warrawoona Group, Australia," Science, Vol. 237: 70-73 (July 3, 1987); J. W. Schopf, "Microfossils of the Early Archean Apex Chert: New Evidence of the Antiquity of Life," Science, Vol. 260: 640-646 (April 30, 1993).

[4.] See for example S. J. Mojzsis, G. Arrhenlus, K. D. McKeegan, T. M. Harrisont, A. P. Nutman, and C. R. L Friend, "Evidence for life on Earth before 3,800 million years ago," Nature, Vol. 384:55-59 (November 7, 1996). For a critical view, see: M. A. van Zuilen, A. Lepland, & G. Arrhenius, "Reassessing the evidence for the earliest traces of life," Nature, Vol. 418:627-630 (August 8, 2002).

[5.] See M.D. Brasier, O.R. Green, A.P. Jephcoat, A.K. Kleppe, M.J. Van Kranendonk, J.F. Lindsay, A. Steele, & N.V. Grassineau, "Questioning the evidence for Earth's oldest fossils," Nature Vol. 416:76-81 (2002).

[6.] G. Wald, "The Origin of Life," Scientific American (August 1954).

[7.] S. J. Gould, "An Early Start," Natural History, p. 10 (February, 1978) (emphasis added).


[8.] C. Ponnamperuma, quoted in F. Hoyle and C. Wickramasinghe, Evolution from Space (1981).

When they say evolution...

The Eight Meanings of "Evolution"

David Klinghoffer


I mentioned yesterday that Darwinists have a frustrating way, in public discourse, of failing to say what they mean by evolution. Ann Coulter observes how this can function as a method of intimidation:

Just a year later, at a 2008 Republican presidential candidates' debate, Matthews asked for a show of hands of who believed in evolution. No discussion permitted! That might allow scientific facts, rather than schoolyard taunts, to escape into the world.
Evolution is the only subject that is discussed exclusively as a "Do you believe?" question with yes-or-no answers.

In God and Evolution, Discovery Institute's Jay Richards gives no fewer than eight meanings of the word. Commit these to memory:
Though God is the grandest and most difficult of all subjects, the meaning of the word "evolution" is actually a lot harder to nail down.
In an illuminating article called "The Meanings of Evolution," Stephen Meyer and Michael Keas distinguished six different ways in which "evolution" is commonly used:

1. Change over time; history of nature; any sequence of events in nature.

2. Changes in the frequencies of alleles in the gene pool of a population.

3. Limited common descent: the idea that particular groups of organisms have descended from a common ancestor.

4. The mechanisms responsible for the change required to produce limited descent with modification, chiefly natural selection acting on random variations or mutations.

5. Universal common descent: the idea that all organisms have descended from a single common ancestor.

6. "Blind watchmaker" thesis: the idea that all organisms have descended from common ancestors solely through unguided, unintelligent, purposeless, material processes such as natural selection acting on random variations or mutations; that the mechanisms of natural selection, random variation and mutation, and perhaps other similarly naturalistic mechanisms, are completely sufficient to account for the appearance of design in living organisms.

Meyer and Keas provide many valuable insights in their article, but here we're only concerned with "evolution" insofar as it's relevant to theology.
The first meaning is uncontroversial -- even trivial. The most convinced young earth creationist agrees that things change over time -- that the universe has a history. Populations of animals wax and wane depending on changes in climate and the environment. At one time, certain flora and fauna prosper on the earth, but they later disappear, leaving mere impressions in the rocks to mark their existence for future generations.

Of course, "change over time" isn't limited to biology. There's also cosmic "evolution," the idea that the early universe started in a hot, dense state, and over billions of years, cooled off and spread out, formed stars, galaxies, planets, and so forth. This includes the idea of cosmic nucleosynthesis, which seeks to explain the production of heavy elements (everything heavier than helium) in the universe through a process of star birth, growth, and death. These events involve change over time, but they have to do with the history of the inanimate physical universe rather than with the history of life. While this picture of cosmic evolution may contradict young earth creationism, it does not otherwise pose a theological problem. The generic idea that one form of matter gives rise, under the influence of various natural laws and processes, to other forms of matter, does not contradict theism. Surely God could directly guide such a process in innumerable ways, could set up a series of secondary natural processes that could do the job, or could do some combination of both.

In fact, virtually no one denies the truth of "evolution" in senses 1, 2, or 3. And, pretty much everyone agrees that natural selection and random mutations explain some things in biology (number 4).

What about the fifth sense of evolution, universal common ancestry? This is the claim that all organisms on earth are descended from a single common ancestor that lived sometime in the distant past. Universal common ancestry is distinct from the mechanism of change. In fact, it's compatible with all sorts of different mechanisms or sources for change, though the most popular mechanism is the broadly Darwinian one. It's hard to square universal common descent with some interpretations of biblical texts of course; nevertheless, it's logically compatible with theism. If God could turn dirt into a man, or a man's rib into a woman, then presumably he could, if he so chose, turn a bacterium into a jellyfish, or a dinosaur into a bird. Whatever its exegetical problems, an unbroken evolutionary tree of life guided and intended by God, in which every organism descends from some original organism, sounds like a logical possibility. (So there's logical space where both intelligent design and theistic evolution overlap -- even if ID and theistic evolution often describe people with different positions.)

Besides the six senses mentioned by Meyer and Keas, there is also the metaphorical sense of evolution, in which Darwinian Theory is used as a template to explain things other than nature, like the rise and fall of civilizations or sports careers. In his book The Ascent of Money, for instance, historian Niall Ferguson explains the evolution of the financial system in the West in Darwinian terms. He speaks of "mass extinction events," survival of the fittest banks, a "Cambrian Explosion" of new financial instruments, and so forth. This way of speaking can sometimes be illuminating, even if, at times, it's a stretch. Still, no one doubts that there are examples of the fittest surviving in biology and finance. We might have some sort of "evolution" here, but not in a theologically significant sense.

Finally, there's evolution in the sense of "progress" or "growth." Natural evolution has often been understood in this way, so that cosmic history is interpreted as a movement toward greater perfection, complexity, mind, or spirit. A pre-Darwinian understanding of "evolution" was the idea of a slow unfolding of something that existed in nascent form from the beginning, like an acorn eventually becoming a great oak tree. If anything, this sense of evolution tends toward theism rather than away from it, since it suggests a purposive plan. For that reason, many contemporary evolutionists (such as the late Stephen Jay Gould) explicitly reject the idea that evolution is progressive, and argue instead that cosmic history is not going anywhere in particular.

Lamarck's rehabilitation, or 'twas all a big misunderstanding.

Michael Skinner on Epigenetics: Stage Three Alert

Cornelius Hunter 


On the topic of epigenetics, which I've written about extensively for many years, I was interested to read Washington State University biologist Michael Skinner's recent article for Aeon. Skinner's piece reminds us of the old maxim that truth passes through three stages. First, it is ridiculed. Second, it is violently opposed. Third, it is accepted as self-evident. If we can slightly modify these three stages as follows, then we have the history of how evolution has struggled to oppose the scientific findings we now refer to as epigenetics:

Reject and persecute

Delegitimize and minimize

Rename and incorporate

Skinner's position represents the move, which has been taking place in recent years, into Stage Three (for more, see here).

Skinner's Aeon article provides an excellent rundown of findings, both old and new, that confirm and elucidate what evolutionists have aggressively and violently opposed for a century: that epigenetics is not only real, but significant in causing long-term biological change. Natural selection plays no role in this process.

From 18th-century observations of plants adapting to hotter temperatures, to Conrad Waddington's fruit fly experiments in the 1950s (for more tidbits see here), to more recent observations of a range of species, Skinner provides an accessible summary and draws the inescapable conclusion:

Much as Lamarck suggested, changes in the environment literally alter our biology. And even in the absence of continued exposure, the altered biology, expressed as traits or in the form of disease, is transmitted from one generation to the next.
Much as Lamarck suggested? That is an astonishing admission given how evolutionists have, in the past century, vilified Lamarck and anyone who would dare associate with his ideas. To this day such resistance continues, but it is waning. Hence evolutionists such as Skinner can broach the truth.

Skinner also comes clean on the problem that evolution's basic source of biological variation, DNA mutations, is insufficient:

[T]he rate of random DNA sequence mutation turns out to be too slow to explain many of the changes observed. Scientists, well-aware of the issue, have proposed a variety of genetic mechanisms to compensate: genetic drift, in which small groups of individuals undergo dramatic genetic change; or epistasis, in which one set of genes suppress another, to name just two. Yet even with such mechanisms in play, genetic mutation rates for complex organisms such as humans are dramatically lower than the frequency of change for a host of traits, from adjustments in metabolism to resistance to disease.
Mutations are too slow for evolution? Again, this is an astonishing admission. The last time mathematicians reported this inconvenient truth, they were told by evolutionists that it didn't matter because, after all, we all know that evolution is true. Nothing like contradicting the science. Skinner admits that a paradigm shift is needed.

Unfortunately for Skinner and his readers that is where the light ends and smoke begins. As an evolutionist, Skinner must present this contradictory biology as, somehow, consistent with evolution. The first sign that Skinner will firmly plant himself in Stage Three (Rename and incorporate) comes in his opening sentence:

The unifying theme for much of modern biology is based on Charles Darwin's theory of evolution, the process of natural selection by which nature selects the fittest, best-adapted organisms to reproduce, multiply and survive.
Evolution is the unifying theme for much of modern biology? This not-so-secret handshake is so over-the-top that it hardly seems worthwhile to dignify it with a rebuttal. Given how evolutionists are consistently surprised by biology, one would hope they could at least stop with this particular untruth. But there it is.

Unfortunately it doesn't stop there. Skinner's thesis in his article is that the long-rejected epigenetics will now fit conveniently into evolutionary theory. It was all a big misunderstanding, and rather than rejecting epigenetics, we should see it as merely another component in the increasingly complex theory called evolution.

This is Stage Three: Rename, recast, retool, reimagine, and incorporate into our modern-day Epicureanism.

With enough massaging and storytelling, evolutionists forget the contradictions and convince themselves, along with their audiences, that the fit is perfect and epigenetics is, in fact, yet more proof of evolution.

There's only one problem. This is all absurd.

What Skinner and the evolutionists don't tell you is that in light of their theory, none of this makes sense. With epigenetics the biological variation evolution needs is not natural. It is not the mere consequence of biophysics -- radiation, toxins, or other mishaps causing DNA mutations. Rather, it is a biological control system.

It is not simple mistakes, but complex mechanisms. It is not random, but directed. It is not slow, but rapid. It is not a single mutation that is selected, but simultaneous changes across the population. This is not evolution.

As Skinner inconveniently realizes, such epigenetic mechanisms are found across a wide range of species. They are widely conserved and, for evolution, this is yet more bad news. It means these incredible epigenetic mechanisms must have, somehow, arisen very early in the history of evolution.

What the evolutionists don't admit is that epigenetics contradicts evolutionary theory. Not only must such incredibly complex mechanisms have evolved early on, and not only must they have arisen from chance mutation events -- so that evolution must have created evolution -- but they would have had to persist despite conferring no immediate fitness advantage.

The whole notion of evolution is that natural selection saves the day by directing the blind, chance mutations. Setting aside the silliness of this idea, the problem with epigenetic mechanisms is that if they were to arise from chance (and "oh what a big if"), they would not increase the organism's fitness when they first appeared.

Epigenetic mechanisms are helpful at some future, unknown time when the environmental challenge finally presents itself. They are useless when they initially arise, and so would not be preserved by evolution's mythical natural selection.

Of course evolutionists will contrive yet more complex just-so stories to explain how epigenetic mechanisms arose from pre-existing parts used for other purposes (the ridiculous co-adaptation argument), and about how they just happened to provide some other functions so as to improve fitness.

Skinner's presentation of how to integrate epigenetics with evolution is entirely gratuitous. He has empirical evidence for the former, and dogma for the latter. There is no scientific need to add evolution -- it is an entity multiplied beyond necessity. Yet Skinner needs it.

These are all the usual tales, which will be trotted out as yet more "facts." Evolutionists must tell these stories. Otherwise they would have to move beyond Stage Three, and admit the science contradicts their theory. And that is not going to happen.

Saturday 3 December 2016

Censoring compassion?

France Censors Down Syndrome Ad Over Abortion
Wesley J. Smith





Conscience is a good thing. It is the path to repentance, forgiveness, and healing. Take Project Rachel, the compassionate pro-life project that helps women overcome the grief and guilt some experience from having had an abortion.

But France doesn't want women to feel badly for having aborted a Down syndrome baby. Accordingly, it banned from the air an advertisement showing the positive side of parenting a child with Down syndrome. From the Wall Street Journal:

Abortion is legal in most of Europe, but its proponents are bent on suppressing efforts to change the minds of mothers considering it.

Witness France's ban on a television commercial showing happy children with Down Syndrome (DS). Produced to commemorate World Down Syndrome Day, the commercial showed several cheerful children with DS addressing a mother considering abortion. "Dear future mom," says one, "don't be afraid." "Your child will be able to do many things," says another. "He'll be able to hug you." "He'll be able to run toward you." "He'll be able to speak and tell you he loves you."

France's High Audiovisual Council removed the commercial from air earlier this year, and in November the Council of State, the country's highest administrative court, upheld the ban, since the clip could "disturb the conscience" of French women who had aborted DS fetuses.

So much for free speech. Worse, France is saying that saving the lives of these future children is less important than protecting the feelings of those who aborted their babies.


More broadly, it reflects a rampant view that aborting Down babies is the preferred course. Indeed, this censoring is merely a small part of an effort, easily discernible, to see people with Down disappeared from the face of the earth via eugenic abortion.

On explaining away cosmic fine tuning.

Dr. Strange Introduces the Multiverse to the Masses
Jonathan Witt 

This month's blockbuster Marvel comic book movie Dr. Strange will serve as many people's introduction to the exotic idea of the multiverse, the notion that besides our universe there are a host -- maybe an infinity -- of unseen other universes, some radically different from our own, some highly similar but distinct in crucial ways.

The film is a worthy and thought-provoking entertainment, but an idea that serves as a good plot device for imaginative counterfactual play in the realm of fiction becomes something very different when taken as an article of faith and used as an explanatory tool in science.

You see, there's a big divide running through physics, astronomy, and cosmology, and the idea of a multiverse is at the center of the controversy, serving as a crucial means of explaining away some powerful evidence for intelligent design.

The Fine-Tuning Problem

On one side of the controversy are scientists who see powerful evidence for purpose in the way the laws and constants of physics and chemistry are finely tuned to allow for life -- finely tuned to a mindboggling degree of precision.

Change gravity or the strong nuclear force or any of dozens of other constants even the tiniest bit, and no stars, no planets, no life. Why are the constants just so? Here's what Nobel Laureate Arno Penzias concluded: "Astronomy leads us to a unique event, a universe which was created out of nothing, one with the very delicate balance needed to provide exactly the conditions required to permit life, and one which has an underlying (one might say 'supernatural') plan."

Nobel Laureate George Smoot is another, commenting that "the big bang, the most cataclysmic event we can imagine, on closer inspection appears finely orchestrated." Elsewhere Smoot describes the ripples in the cosmic background radiation as the "fingerprints from the Maker."

On the other side of the divide are those who insist with Harvard's Richard Lewontin that they simply cannot "let a divine foot in the door." In the case of the fine-tuning problem, they keep "the divine foot" out with a pair of curious arguments. Each involves a fallacy, and one of them the idea of a multiverse.

Fine Tuning and the Firing Squad Fallacy

The first of these goes like this: Sure, the universe is fine-tuned for life. What did you expect? If it weren't, we wouldn't be here to register our good fortune.

Think of a prisoner in front of a firing squad. The prisoner shuts his eyes. The shots are fired. The prisoner opens his eyes and finds a perfect bullet pattern outlining his body on the wall behind him. "Hey," the guard at his shoulder exclaims, "it looks like the firing squad had orders to miss!" The prisoner demurs. "No, the bullet pattern is just blind luck. You see, if they hadn't missed, I wouldn't be around to notice my good fortune."

The prisoner's mistaken reasoning is the same mistaken reasoning used to explain away the fine-tuning pattern in physics and cosmology. Reasonable Question: "What has the ability to produce the fine-tuning pattern we find in chemistry and physics?" Unreasonable Answer: "We wouldn't exist to observe the fine-tuning pattern if the pattern didn't exist."

The unreasonable answer points to a necessary condition for observing X when what's called for is a sufficient cause for X. Instead of providing a sufficient cause for the fine-tuning pattern, intelligent design opponents change the subject.

Fine Tuning and the Naïve Gambler's Fallacy

A second tactic for countering the fine-tuning argument to design runs like this: Our universe is just one of untold trillions of universes. Ours is just one of the lucky ones with the right parameters for life. True, we can't see or otherwise detect these other universes, but they must be out there because that solves the fine-tuning problem.

Consider an analogy. A naïve gambler at a casino, seeing a crowd forming around a poker table across the room, goes over to investigate. He squeezes through the crowd and, whispering to another onlooker, learns that the mob boss at the table lost a couple of poker hands and gave the dealer a look that could kill; on the next two hands the mobster laid down royal flushes, each time without exchanging any cards. Keep in mind that the odds of drawing even one royal flush in this way are about one chance in 650,000. The odds of it happening twice in a row are about 1 chance in 650,000 x 650,000.

At this point, a few of the other poker players at the table prudently compliment the mobster on his good fortune, cash in their chips and leave. The naïve gambler misses all of these clues, and a look of wonder blossoms across his face. On the next hand the mob boss lays down a third royal flush. The naïve gambler pulls up a calculator on his phone and punches in some numbers. "Wow!" he cries. "The odds of that happening three times in a row are worse than 1 chance in 274 thousand trillion! Imagine how much poker playing there must have been going on -- maybe is going on right now all over the world -- to make that run of luck possible!"
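To check the gambler's arithmetic, here is a minimal Python sketch, assuming the standard count of four royal flushes among the 2,598,960 possible five-card hands (an illustration added for readers, not part of the casino story):

    from math import comb

    hands = comb(52, 5)        # 2,598,960 possible five-card poker hands
    royal = hands / 4          # one royal flush per suit: ~649,740

    print(f"one:   1 in {royal:,.0f}")     # the "1 in 650,000" above
    print(f"two:   1 in {royal**2:.1e}")   # ~4.2e+11
    print(f"three: 1 in {royal**3:.1e}")   # ~2.7e+17 -- "274 thousand trillion"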

The naïve gambler hasn't explained the mobster's "run of luck." All he's done is overlook one reasonable explanation: intelligent design.

The naïve gambler's error is the same error committed by those who appeal to multiple, undetectable universes to explain the "luck" that gave us a universe fine-tuned to allow for intelligent observers.

A Forest Walker and a Lucky Bullet

Take another illustration, this one articulated by philosopher John Leslie to argue against inferring design from fine-tuning, but taken up by Roger White of MIT and cashed out in a very different way. White writes:

You are alone in the forest when a gun is fired from far away and you are hit. If at first you assume that there is no one out to get you, this would be surprising. But now suppose you were not in fact alone but instead part of a large crowd. Now it seems there is less reason for surprise at being shot. After all, someone in the crowd was bound to be shot, and it might as well have been you. [John] Leslie suggests this as an analogy for our situation with respect to the universe. Ironically, it seems that Leslie's story supports my case, against his. For it seems that while knowing that you are part of a crowd makes your being shot less surprising, being shot gives you no reason at all to suppose that you are part of a crowd. Suppose it is pitch dark and you have no idea if you are alone or part of a crowd. The bullet hits you. Do you really have any reason at all now to suppose that there are others around you?

So there in the dark forest the walker gets shot and thinks, "Gosh, I guess I'm really surrounded by lots and lots of other people even though I haven't heard a peep from any of them. That explains me getting shot by chance. A hunter's bullet accidentally found this crowd, and I'm just the unlucky fellow the bullet found." The reasoning is so defective you have to wonder if the walker got shot in the head and his powers of rational thought were blasted clean out of him.

The Lucky Bullet Fallacies Miss the Mark

In the firing squad analogy, the prisoner infers a lucky bullet pattern (rather than an intentional one) based on the fact that if he hadn't been fortunate enough not to get shot, he wouldn't be there to observe the interesting bullet pattern. In the forest analogy, the walker mistakenly invokes many walkers on his way to deciding that a lucky bullet unluckily struck him.

The opponents of intelligent design in physics and cosmology often make a great show of being too rational to even consider intelligent design, but they attempt to shoot down the fine-tuning evidence of design by appealing to these irrational arguments. Both arguments go well wide of the mark.


There's an irony here. The universe is exquisitely fine-tuned to allow for intelligent designers, creatures able to see, hear, and reason, and to design things like telescopes and microscopes that allow us to uncover just how amazingly fine-tuned the universe is. Fine-tuning allows for intelligent designers such as ourselves, but atheists insist we cannot consider an intelligent designer as the cause for this fine-tuning. Fortunately for us, reason is prior to atheism.

Friday 2 December 2016

The origin of life and the design debate.

Paul Nelson Is Headed to Florida
Evolution News & Views 

CSC Senior Fellow and philosopher of science Paul Nelson has several interesting events in Florida coming up, starting tomorrow. Take a look at his schedule below for information and locations.

Dr. Nelson will give an introduction and Q&A at two showings of Illustra's newest documentary, Origin: Design, Chance and the First Life on Earth. The first will be Saturday, December 3, at First United Church of Tarpon Springs (501 E. Tarpon Ave., Tarpon Springs, FL), starting at 7 pm. The second showing will be Monday, December 5, at the University of South Florida, Gibbons Alumni Center (4202 E. Fowler Ave., Tampa, FL), also starting at 7 pm. Parking is available at the Sun Dome parking lots.

Dr. Nelson will speak at the Crossroads Church (7975 River Ridge Blvd., New Port Richey, FL), on Sunday, December 4 at 10:45 am.


Finally, the C.S. Lewis Society Coastal Holiday Luncheon will host Dr. Nelson as a special guest on Monday, December 5 at noon at the Rusty Pelican (2425 N. Rocky Point Dr., Tampa, FL) with a reception starting at 11:45 am. His topic: "Design in Focus." There is no charge, but you can reserve a place by emailing Tom Woodward at twoodward@trinitycollege.edu.

Taking the case for design on the road.

For Your Commute: Stephen Meyer's Darwin's Doubt and Signature in the Cell on Audiobook!
Evolution News & Views 

Here in Seattle, traffic can be more than a little tricky and commutes by road or rail seem to get longer all the time. That's why we are especially excited to announce that Stephen Meyer's New York Times bestseller Darwin's Doubt as well as Signature in the Cell will be released as audiobooks in December and are available for pre-order now!

Make your travel time enjoyable and productive next year with Meyer's books, read by Derek Shetterly. With Christmas around the bend, it's a great gift for those hard-to-buy-for professionals who have long commutes or travel regularly.


If you prefer to store audiobooks on your shelf with the rest of your library, check out the CD versions of Darwin's Doubt or Signature in the Cell. For those more digitally inclined, for a limited time, Amazon is offering Audible versions of the audiobooks for free with an Audible trial, so don't wait for the official release date. Pre-order the Audible versions of Darwin's Doubt and Signature in the Cell today!

Seeking the edge of Darwinism.

Best of Behe: Waiting Longer for Two Mutations
Michael Behe

Editor's note: In celebration of the 20th anniversary of biochemist Michael Behe's pathbreaking book Darwin's Black Box and the release of the new documentary Revolutionary: Michael Behe and the Mystery of Molecular Machines, we are highlighting some of Behe's "greatest hits." The following was published by Discovery Institute on March 20, 2009. Remember to get your copy of Revolutionary now! See the trailer here.


An interesting paper appeared in a 2008 issue of the journal Genetics, "Waiting for Two Mutations: With Applications to Regulatory Sequence Evolution and the Limits of Darwinian Evolution" (Durrett, R & Schmidt, D. 2008. Genetics 180: 1501-1509). As the title implies, it concerns the time one would have to wait for Darwinian processes to produce some helpful biological feature (here, regulatory sequences in DNA) if two mutations are required instead of just one. It is a theoretical paper, which uses models, math, and computer simulations to reach conclusions, rather than empirical data from field or lab experiments, as my book The Edge of Evolution does. The authors declare in the abstract of their manuscript that they aim "to expose flaws in some of Michael Behe's arguments concerning mathematical limits to Darwinian evolution." Unsurprisingly (bless their hearts), they pretty much do the exact opposite.

Since the journal Genetics publishes letters to the editors (most journals don't), I sent a reply to the journal. The original paper by Durrett and Schmidt can be found here, my response here, and their reply here.

In their paper, as I write in my reply:

They develop a population genetics model to estimate the waiting time for the occurrence of two mutations, one of which is premised to damage an existing transcription-factor-binding site, and the other of which creates a second, new binding site within the nearby region from a sequence that is already a near match with a binding site sequence (for example, 9 of 10 nucleotides already match).
The most novel point of their model is that, under some conditions, the number of organisms needed to get two mutations is proportional not to the inverse of the square of the point mutation rate (as it would be if both mutations had to appear simultaneously in the same organism), but to the inverse of the point mutation rate times the square root of the point mutation rate (because the first mutation would spread in the population before the second appeared, increasing the odds of getting a double mutation). To see what that means, consider that the point mutation rate is roughly one in a hundred million (1 in 10^8). So if two specific mutations had to occur at once, that would be an event of likelihood about 1 in 10^16. On the other hand, under some conditions they modeled, the likelihood would be about 1 in 10^12, ten thousand times more likely than the first situation. Durrett and Schmidt (2008) compare the number they got in their model to my literature citation1 that the probability of the development of chloroquine resistance in the malarial parasite is an event of order 1 in 10^20, and they remark that it "is 5 million times larger than the calculation we have just given." The implied conclusion is that I have greatly overstated the difficulty of getting two necessary mutations. Below I show that they are incorrect.
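To make the scaling concrete, here is a minimal Python sketch of the two regimes just described, assuming a point mutation rate of 1 in 10^8:

    mu = 1e-8                      # point mutation rate

    simultaneous = 1 / mu**2       # both mutations must arise at once in one organism
    sequential = 1 / mu**1.5       # the first mutation spreads before the second occurs

    print(f"{simultaneous:.0e}")   # 1e+16
    print(f"{sequential:.0e}")     # 1e+12 -- ten thousand times more likely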

Serious Problems

Interesting as it is, there are some pretty serious problems in the way they applied their model to my arguments, some of which they owned up to in their reply, and some of which they didn't. When the problems are fixed, however, the resulting number is remarkably close to the empirical value of 1 in 10^20. I will go through the difficulties in turn.

The first problem was a simple oversight. They were modeling the mutation of a ten-nucleotide-long binding site for a regulatory protein in DNA, so they used a value for the mutation rate that was ten times larger than the point mutation rate. However, in the chloroquine-resistance protein discussed in The Edge of Evolution, since particular amino acids have to be changed, the correct rate to use is the point mutation rate. That leads to an underestimate of a factor of about 30 in applying their model to the protein. As they wrote in their reply, "Behe is right on this point." I appreciate their agreement here.

The second problem has to do with their choice of model. In their original paper they actually developed models for two situations -- for when the first mutation is neutral, and for when it is deleterious. When they applied it to the chloroquine-resistance protein, they unfortunately decided to use the neutral model. However, it is very likely that the first protein mutation is deleterious. As I wrote discussing a hypothetical case in Chapter 6 of The Edge:

Suppose, however, that the first mutation wasn't a net plus; it was harmful. Only when both mutations occurred together was it beneficial. Then on average a person born with the mutation would leave fewer offspring than otherwise. The mutation would not increase in the population, and evolution would have to skip a step for it to take hold, because nature would need both necessary mutations at once.... The Darwinian magic works well only when intermediate steps are each better ('more fit') than preceding steps, so that the mutant gene increases in number in the population as natural selection favors the offspring of people who have it. Yet its usefulness quickly declines when intermediate steps are worse than earlier steps, and is pretty much worthless if several required intervening steps aren't improvements.
If the first mutation is indeed deleterious, then Durrett and Schmidt (2008) applied the wrong model to the chloroquine-resistance protein. In fact, if the parasite with the first mutation is only 10 percent as fit as the unmutated parasite, then the population-spreading effect they calculate for neutral mutations is pretty much eliminated, as their own model for deleterious mutations shows. What do the authors say in their response about this possibility? "We leave it to biologists to debate whether the first PfCRT mutation is that strongly deleterious." In other words, they don't know; it is outside their interest as mathematicians. (Again, I appreciate their candor in saying so.) Assuming that the first mutation is seriously deleterious, then their calculation is off by a factor of 10^4. In conjunction with the first mistake of 30-fold, their calculation so far is off by five-and-a-half orders of magnitude.

Making a String of Ones

The third problem also concerns the biology of the system. I'm at a bit of a loss here, because the problem is not hard to see, and yet in their reply they stoutly deny the mistake. In fact, they confidently assert it is I who am mistaken. I had written in my letter, "... their model is incomplete on its own terms because it does not take into account the probability of one of the nine matching nucleotides in the region that is envisioned to become the new transcription-factor-binding site mutating to an incorrect nucleotide before the 10th mismatched codon mutates to the correct one." They retort, "This conclusion is simply wrong since it assumes that there is only one individual in the population with the first mutation." That's incorrect. Let me explain the problem in more detail.

Consider a string of ten digits, either 0 or 1. We start with a string that has nine 1's, and just one 0. We want to convert the single 0 to a 1 without switching any of the 1's to a zero. Suppose that the switch rate for each digit is one per hundred copies of the string. That is, we copy the string repeatedly, and, if we focus on a particular digit, about every hundredth copy or so that digit has changed. Okay, now cover all of the numbers of the string except the 0, and let a random, automated procedure copy the string, with a digit-mutation rate of one in a hundred. After, say, 79 copies, we see that the visible 0 has just changed to a 1. Now we uncover the rest of the digits. What is the likelihood that one of them has changed in the meantime? Since all the digits have the same mutation rate, there is a nine in ten chance that one of the other digits has already changed from a 1 to a 0, and our mutated string still does not match the target of all 1's. In fact, only about one time out of ten will we uncover the string and find that no other digits have changed except the visible digit. Thus the effective mutation rate for transforming the string with nine matches out of ten to a string with ten matches out of ten will be only one tenth of the basic digit-mutation rate. If the string is a hundred digits long, the effective mutation rate will be one-hundredth the basic rate, and so on. (This is very similar to the problem of mutating a duplicate gene to a new selectable function before it suffers a degradative mutation, which has been investigated by Lynch and co-workers.2)
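This reasoning can be checked with a minimal Monte Carlo sketch in Python, assuming the same setup as above: ten digits, a per-digit switch rate of one per hundred copies, and repeated copying until the lone 0 flips:

    import random

    TRIALS = 20_000
    RATE = 0.01               # per-digit switch chance on each copy of the string

    clean = 0                 # runs where the 0 flipped before any of the nine 1's
    for _ in range(TRIALS):
        spoiled = False
        while True:
            # copy the string: each digit independently switches with prob RATE
            if any(random.random() < RATE for _ in range(9)):
                spoiled = True            # one of the nine 1's switched to a 0
            if random.random() < RATE:    # the lone 0 switched to a 1
                break
        if not spoiled:
            clean += 1

    print(clean / TRIALS)     # ~0.10: a tenth of the per-digit rate, as argued above

Copies on which a 1 and the 0 switch together count as failures, which is why the estimate comes out just under one in ten.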

So, despite their self-assured tone, in fact on this point Durrett and Schmidt are "simply wrong." And, as I write in my letter, since the gene for the chloroquine resistance protein has on the order of a thousand nucleotides, rather than just the ten of Durrett and Schmidt's postulated regulatory sequence, the effective rate for the second mutation is several orders of magnitude less than they thought. Thus with the, say, two orders of magnitude mistake here, the factor of 30 error for the initial mutation rate, and the four orders of magnitude for mistakenly using a neutral model instead of a deleterious model, Durrett and Schmidt's calculation is a cumulative seven and a half orders of magnitude off. Since they had pointed out that their calculation was about five million-fold (about six and a half orders of magnitude) lower than the empirical result I cited, when their errors are corrected the calculation agrees pretty well with the empirical data.
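The cumulative correction can be tallied in a few lines, assuming the three factors just described (30-fold for the mutation rate, 10^4 for the deleterious model, and roughly 10^2 for the effective second-mutation rate):

    import math

    factor_rate = 30       # point mutation rate, not the ten-fold binding-site rate
    factor_model = 1e4     # deleterious, not neutral, first mutation
    factor_string = 1e2    # ~1,000-nucleotide gene, not a 10-nucleotide site

    orders = math.log10(factor_rate * factor_model * factor_string)
    print(round(orders, 1))  # 7.5 -- versus the ~6.5-order (five million-fold) gap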

An Irrelevant Example

Now I'd like to turn to a couple of other points in Durrett and Schmidt's reply that aren't mistakes with their model, but which do reflect conceptual errors. As I quote above, they state in their reply, "This conclusion is simply wrong since it assumes that there is only one individual in the population with the first mutation." I have shown above that, despite their assertion, my conclusion is right. But where do they get the idea that "it assumes that there is only one individual in the population with the first mutation"? I wrote no such thing in my letter about "one individual." Furthermore, I "assumed" nothing. I merely cited empirical results from the literature. The figure of 1 in 10^20 is a citation from the literature on chloroquine resistance of malaria. Unlike their model, it is not a calculation on my part.

Right after this, in their reply Durrett and Schmidt say that the "mistake" I made is a common one, and they go on to illustrate "my" mistake with an example about a lottery winner. Yet their own example shows they are seriously confused about what is going on. They write:

When Evelyn Adams won the New Jersey lottery on October 23, 1985, and again on February 13, 1986, newspapers quoted odds of 17.1 trillion to 1. That assumes that the winning person and the two lottery dates are specified in advance, but at any point in time there is a population of individuals who have won the lottery and have a chance to win again, and there are many possible pairs of dates on which this event can happen.... The probability that it happens in one lottery 1 year is ~1 in 200.
No kidding. If one has millions of players, and any of the millions could win twice on any two dates, then the odds are certainly much better that somebody will win on some two dates than that Evelyn Adams would win on October 23, 1985, and February 13, 1986. But that has absolutely nothing to do with the question of changing a correct nucleotide to an incorrect one before changing an incorrect one to a correct one, which is the context in which this odd digression appears. What's more, it is not the type of situation that Durrett and Schmidt themselves modeled. They asked the question: given a particular ten-base-pair regulatory sequence, and a particular sequence that is matched in nine of ten sites to the regulatory sequence, how long will it take to mutate the particular regulatory sequence, destroying it, and then mutate the particular near-match sequence to a perfect-match sequence? What's even more, it is not the situation that pertains in chloroquine resistance in malaria. There several particular amino acid residues in a particular protein (PfCRT) have to mutate to yield effective resistance. It seems to me that the lottery example must be a favorite of Durrett and Schmidt's, and that they were determined to use it whether it fit the situation or not.

Multiplying Resources

The final conceptual error that Durrett and Schmidt commit is the gratuitous multiplication of probabilistic resources. In their original paper they calculated that a particular double mutation in humans would have an expected time of appearance of 216 million years, if one were considering a one kilobase region of the genome. Since the evolution of humans from other primates took much less time than that, Durrett and Schmidt observed that if the DNA "neighborhood" were a thousand times larger, then lots of correct regulatory sites would already be expected to be there. But, then, exactly what is the model? And if the relevant neighborhood is much larger, why did they model a smaller neighborhood? Is there some biological fact they neglected to cite that justified the thousand-fold expansion of what constitutes a "neighborhood," or were they just trying to squeeze their results post-hoc into what a priori was thought to be a reasonable time frame?

When I pointed this out in my letter, Durrett and Schmidt did not address the problem. Rather, they upped the stakes. They write in their reply, "there are at least 20,000 genes in the human genome and for each gene tens if not hundreds of pairs of mutations that can occur in each one." The implication is that there are very, very many ways to get two mutations. Well, if that were indeed the case, why did they model a situation where two particular mutations -- not just any two -- were needed? Why didn't they model the situation where any two mutations in any of 20,000 genes would suffice? In fact, since that would give a very much shorter time span, why did the journal Genetics and the reviewers of the paper let them get away with such a miscalculation?

The answer of course is that in almost any particular situation, almost all possible double mutations (and single mutations and triple mutations and so on) will be useless. Consider the chloroquine-resistance mutation in malaria. There are about 10^6 possible single amino acid mutations in malarial parasite proteins, and 10^12 possible double amino acid mutations (where the changes could be in any two proteins). Yet only a handful are known to be useful to the parasite in fending off the drug, and only one is very effective -- the multiple changes in PfCRT. It would be silly to think that just any two mutations would help. The vast majority are completely ineffective. Nonetheless, it is a common conceptual mistake to naively multiply postulated "helpful mutations" when the numbers initially show too few.
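The step from 10^6 single mutations to 10^12 double mutations is simple combinatorics; a quick Python check, assuming the rough 10^6 figure for possible single changes:

    from math import comb

    singles = 10**6                # rough count of possible single amino acid changes
    doubles = comb(singles, 2)     # unordered pairs of single changes
    print(f"{doubles:.0e}")        # 5e+11, i.e., on the order of 10^12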

A Very Important Point

Here's a final important point. Genetics is an excellent journal; its editors and reviewers are top notch; and Durrett and Schmidt themselves are fine researchers. Yet, as I show above, when simple mistakes in the application of their model to malaria are corrected, it agrees closely with empirical results reported from the field that I cited. This is very strong support that the central contention of The Edge of Evolution is correct: that it is an extremely difficult evolutionary task for multiple required mutations to occur through Darwinian means, especially if one of the mutations is deleterious. And, as I argue in the book, reasonable application of this point to the protein machinery of the cell makes it very unlikely that life developed through a Darwinian mechanism.

References:

(1) White, N. J., 2004. Antimalarial drug resistance. J. Clin. Invest. 113: 1084-1092.


(2) Lynch, M. and Conery, J.S. 2000. The evolutionary fate and consequences of duplicate genes. Science 290: 1151-1155.


Sniping from the dark?

The Evolutionary Argument from Ignorance

Cornelius Hunter 


Yesterday I looked at the enormous problems that the DNA, or genetic, code poses for evolutionary theory. Here, previously noted at Evolution News, is a paper that seems to have come to the same conclusion. The authors argue that the underlying patterns of the genetic code are not likely to be due to "chance coupled with presumable evolutionary pathways" (P-value < 10^-13), and conclude that they are "essentially irreducible to any natural origin."

A common response from evolutionists, when presented with evidence such as this, is that we still don't understand biology very well. This argument from ignorance goes all the way back to Darwin. He used it in Chapter 6 of the Origin to dismiss the problem of evolving the electric organs in fish, such as the electric eel (which isn't actually an eel). The Sage from Kent agreed that it is "impossible to conceive by what steps these wondrous organs" evolved, but that was OK, because "we do not even know of what use they are."

Setting aside the fact that Darwin's argument from ignorance was a non-scientific fallacy, it was also a setup for failure. For now, a century and a half later, we do know "what use they are." And it has just gotten worse for evolution.

It is another demonstration that arguments from ignorance, aside from being terrible arguments, are not good science. The truth is, when evolutionists today claim that the many problems with their chance theory are due to a lack of knowledge, they are throwing up a smoke screen.