
Saturday, 3 December 2016

Censoring compassion?

France Censors Down Syndrome Ad Over Abortion
Wesley J. Smith

Conscience is a good thing. It is the path to repentance, forgiveness, and healing. Take Project Rachel, the compassionate pro-life project that helps women overcome the grief and guilt some experience from having had an abortion.

But France doesn't want women to feel bad about having aborted a Down syndrome baby. Accordingly, it censored an advertisement showing the positive side of parenting a child with Down. From the Wall Street Journal:

Abortion is legal in most of Europe, but its proponents are bent on suppressing efforts to change the minds of mothers considering it.

Witness France's ban on a television commercial showing happy children with Down Syndrome (DS). Produced to commemorate World Down Syndrome Day, the commercial showed several cheerful children with DS addressing a mother considering abortion. "Dear future mom," says one, "don't be afraid." "Your child will be able to do many things," says another. "He'll be able to hug you." "He'll be able to run toward you." "He'll be able to speak and tell you he loves you."

France's High Audiovisual Council removed the commercial from air earlier this year, and in November the Council of State, the country's highest administrative court, upheld the ban, since the clip could "disturb the conscience" of French women who had aborted DS fetuses.

So much for free speech. Worse, France is saying that saving the lives of these future children is less important than protecting the feelings of those who aborted their babies.


More broadly, it reflects a rampant view that aborting Down babies is the preferred course. Indeed, this censoring is merely a small part of an effort, easily discernible, to see people with Down disappeared from the face of the earth via eugenic abortion.

On explaining away cosmic fine tuning.

Dr. Strange Introduces the Multiverse to the Masses
Jonathan Witt 

This month's blockbuster Marvel comic book movie Dr. Strange will serve as many people's introduction to the exotic idea of the multiverse, the notion that besides our universe there are a host -- maybe an infinity -- of unseen other universes, some radically different from our own, some highly similar but distinct in crucial ways.

The film is a worthy and thought-provoking entertainment, but an idea that serves as a good plot device for imaginative counterfactual play in the realm of fiction becomes something very different when taken as an article of faith and used as an explanatory tool in science.

You see, there's a big divide running through physics, astronomy, and cosmology, and the idea of a multiverse is at the center of the controversy, serving as a crucial means of explaining away some powerful evidence for intelligent design.

The Fine-Tuning Problem

On one side of the controversy are scientists who see powerful evidence for purpose in the way the laws and constants of physics and chemistry are finely tuned to allow for life -- finely tuned to a mindboggling degree of precision.

Change gravity or the strong nuclear force or any of dozens of other constants even the tiniest bit, and no stars, no planets, no life. Why are the constants just so? Here's what Nobel Laureate Arno Penzias concluded: "Astronomy leads us to a unique event, a universe which was created out of nothing, one with the very delicate balance needed to provide exactly the conditions required to permit life, and one which has an underlying (one might say 'supernatural') plan."

Nobel Laureate George Smoot is another, commenting that "the big bang, the most cataclysmic event we can imagine, on closer inspection appears finely orchestrated." Elsewhere Smoot describes the ripples in the cosmic background radiation as the "fingerprints from the Maker."

On the other side of the divide are those who insist with Harvard's Richard Lewontin that they simply cannot "let a divine foot in the door." In the case of the fine-tuning problem, they keep "the divine foot" out with a pair of curious arguments. Each involves a fallacy, and one of them invokes the idea of a multiverse.

Fine Tuning and the Firing Squad Fallacy

The first of these goes like this: Sure, the universe is fine-tuned for life. What did you expect? If it weren't, we wouldn't be here to register our good fortune.

Think of a prisoner in front of a firing squad. The prisoner shuts his eyes. The shots are fired. The prisoner opens his eyes and finds a perfect bullet pattern outlining his body on the wall behind him. "Hey," the guard at his shoulder exclaims, "it looks like the firing squad had orders to miss!" The prisoner demurs. "No, the bullet pattern is just blind luck. You see, if they hadn't missed, I wouldn't be around to notice my good fortune."

The prisoner's mistaken reasoning is the same mistaken reasoning used to explain away the fine-tuning pattern in physics and cosmology. Reasonable Question: "What has the ability to produce the fine-tuning pattern we find in chemistry and physics?" Unreasonable Answer: "We wouldn't exist to observe the fine-tuning pattern if the pattern didn't exist."

The unreasonable answer points to a necessary condition for observing X when what's called for is a sufficient cause for X. Instead of providing a sufficient cause for the fine-tuning pattern, intelligent design opponents change the subject.

Fine Tuning and the Naïve Gambler's Fallacy

A second tactic for countering the fine-tuning argument to design runs like this: Our universe is just one of untold trillions of universes. Ours is just one of the lucky ones with the right parameters for life. True, we can't see or otherwise detect these other universes, but they must be out there because that solves the fine-tuning problem.

Consider an analogy. A naïve gambler is at a casino and, seeing a crowd forming around a poker table across the room, he goes over to investigate. He squeezes through the crowd and, whispering to another onlooker, learns that the mob boss there at the table lost a couple of poker hands and gave the dealer a look that could kill; on the next two hands the mobster laid down royal flushes, each time without exchanging any cards. Keep in mind that the odds of drawing even one royal flush in this way are about one chance in 650,000. The odds of it happening twice in a row are about 1 chance in 650,000 x 650,000.

At this point, a few of the other poker players at the table prudently compliment the mobster on his good fortune, cash in their chips and leave. The naïve gambler misses all of these clues, and a look of wonder blossoms across his face. On the next hand the mob boss lays down a third royal flush. The naïve gambler pulls up a calculator on his phone and punches in some numbers. "Wow!" he cries. "The odds of that happening three times in a row are worse than 1 chance in 274 thousand trillion! Imagine how much poker playing there must have been going on -- maybe is going on right now all over the world -- to make that run of luck possible!"
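
The story's arithmetic checks out. Here is a minimal sketch in Python for anyone who wants to verify it (assuming, as the story does, a standard 52-card deck and pat five-card hands):

```python
from math import comb

# Four possible royal flushes out of C(52, 5) five-card hands,
# dealt pat (no cards exchanged), as in the story.
hands = comb(52, 5)          # 2,598,960 possible hands
p = 4 / hands                # 1 in 649,740 -- "about one chance in 650,000"

print(f"one royal flush: 1 in {hands // 4:,}")
print(f"two in a row:    1 in {round(1 / p**2):,}")
print(f"three in a row:  1 in {round(1 / p**3):,}")  # ~2.7e17, i.e. worse than
                                                     # 1 in 274 thousand trillion
```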

The naïve gambler hasn't explained the mobster's "run of luck." All he's done is overlook one reasonable explanation: intelligent design.

The naïve gambler's error is the same error committed by those who appeal to multiple, undetectable universes to explain the "luck" that gave us a universe fine-tuned to allow for intelligent observers.

A Forest Walker and a Lucky Bullet

Take another illustration, this one articulated by philosopher John Leslie to argue against inferring design from fine-tuning, but taken up by Roger White of MIT and cashed out in a very different way. White writes:

You are alone in the forest when a gun is fired from far away and you are hit. If at first you assume that there is no one out to get you, this would be surprising. But now suppose you were not in fact alone but instead part of a large crowd. Now it seems there is less reason for surprise at being shot. After all, someone in the crowd was bound to be shot, and it might as well have been you. [John] Leslie suggests this as an analogy for our situation with respect to the universe. Ironically, it seems that Leslie's story supports my case, against his. For it seems that while knowing that you are part of a crowd makes your being shot less surprising, being shot gives you no reason at all to suppose that you are part of a crowd. Suppose it is pitch dark and you have no idea if you are alone or part of a crowd. The bullet hits you. Do you really have any reason at all now to suppose that there are others around you?

So there in the dark forest the walker gets shot and thinks, "Gosh, I guess I'm really surrounded by lots and lots of other people even though I haven't heard a peep from any of them. That explains me getting shot by chance. A hunter's bullet accidentally found this crowd, and I'm just the unlucky fellow the bullet found." The reasoning is so defective you have to wonder if the walker got shot in the head and his powers of rational thought were blasted clean out of him.

The Lucky Bullet Fallacies Miss the Mark

In the firing squad analogy, the prisoner infers a lucky bullet pattern (rather than an intentional one) based on the fact that if he hadn't been fortunate enough not to get shot, he wouldn't be there to observe the interesting bullet pattern. In the forest analogy, the walker mistakenly invokes many walkers on his way to deciding that a lucky bullet unluckily struck him.

The opponents of intelligent design in physics and cosmology often make a great show of being too rational to even consider intelligent design, but they attempt to shoot down the fine-tuning evidence of design by appealing to these irrational arguments. Both arguments go well wide of the mark.


There's an irony here. The universe is exquisitely fine-tuned to allow for intelligent designers, creatures able to see, hear, and reason, and to design things like telescopes and microscopes that allow us to uncover just how amazingly fine-tuned the universe is. Fine-tuning allows for intelligent designers such as ourselves, but atheists insist we cannot consider an intelligent designer as the cause for this fine-tuning. Fortunately for us, reason is prior to atheism.

Friday, 2 December 2016

The origin of life and the design debate.

Paul Nelson Is Headed to Florida
Evolution News & Views 

CSC Senior Fellow and philosopher of science Paul Nelson has several interesting events in Florida coming up, starting tomorrow. Take a look at his schedule below for information and locations.

Dr. Nelson will give an introduction and Q&A at two showings of Illustra's newest documentary, Origin: Design, Chance and the First Life on Earth. The first will be Saturday, December 3, at First United Church of Tarpon Springs (501 E. Tarpon Ave., Tarpon Springs, FL), starting at 7 pm. The second showing will be Monday, December 5, at the University of South Florida, Gibbons Alumni Center (4202 E. Fowler Ave., Tampa, FL), also starting at 7 pm. Parking is available at the Sun Dome parking lots.

Dr. Nelson will speak at the Crossroads Church (7975 River Ridge Blvd., New Port Richey, FL), on Sunday, December 4 at 10:45 am.


Finally, the C.S. Lewis Society Coastal Holiday Luncheon will host Dr. Nelson as a special guest on Monday, December 5 at noon at the Rusty Pelican (2425 N. Rocky Point Dr., Tampa, FL) with a reception starting at 11:45 am. His topic: "Design in Focus." There is no charge, but you can reserve a place by emailing Tom Woodward at twoodward@trinitycollege.edu.

Taking the case for design on the road.

For Your Commute: Stephen Meyer's Darwin's Doubt and Signature in the Cell on Audiobook!
Evolution News & Views 

Here in Seattle, traffic can be more than a little tricky and commutes by road or rail seem to get longer all the time. That's why we are especially excited to announce that Stephen Meyer's New York Times bestseller Darwin's Doubt as well as Signature in the Cell will be released as audiobooks in December and are available for pre-order now!

Make your travel time enjoyable and productive next year with Meyer's books, read by Derek Shetterly. With Christmas around the bend, they make a great gift for those hard-to-buy-for professionals who have long commutes or travel regularly.


If you prefer to store audiobooks on your shelf with the rest of your library, check out the CD versions of Darwin's Doubt or Signature in the Cell. For those more digitally inclined, for a limited time, Amazon is offering Audible versions of the audiobooks for free with an Audible trial, so don't wait for the official release date. Pre-order the Audible versions of Darwin's Doubt and Signature in the Cell today!

Seeking the edge of Darwinism.

Best of Behe: Waiting Longer for Two Mutations
Michael Behe

Editor's note: In celebration of the 20th anniversary of biochemist Michael Behe's pathbreaking book Darwin's Black Box and the release of the new documentary Revolutionary: Michael Behe and the Mystery of Molecular Machines, we are highlighting some of Behe's "greatest hits." The following was published by Discovery Institute on March 20, 2009. Remember to get your copy of Revolutionary now! See the trailer here.


An interesting paper appeared in a 2008 issue of the journal Genetics, "Waiting for Two Mutations: With Applications to Regulatory Sequence Evolution and the Limits of Darwinian Evolution" (Durrett, R & Schmidt, D. 2008. Genetics 180: 1501-1509). As the title implies, it concerns the time one would have to wait for Darwinian processes to produce some helpful biological feature (here, regulatory sequences in DNA) if two mutations are required instead of just one. It is a theoretical paper, which uses models, math, and computer simulations to reach conclusions, rather than empirical data from field or lab experiments, as my book The Edge of Evolution does. The authors declare in the abstract of their manuscript that they aim "to expose flaws in some of Michael Behe's arguments concerning mathematical limits to Darwinian evolution." Unsurprisingly (bless their hearts), they pretty much do the exact opposite.

Since the journal Genetics publishes letters to the editors (most journals don't), I sent a reply to the journal. The original paper by Durrett and Schmidt can be found here, my response here, and their reply here.

In their paper, as I write in my reply:

They develop a population genetics model to estimate the waiting time for the occurrence of two mutations, one of which is premised to damage an existing transcription-factor-binding site, and the other of which creates a second, new binding site within the nearby region from a sequence that is already a near match with a binding site sequence (for example, 9 of 10 nucleotides already match).
The most novel point of their model is that, under some conditions, the number of organisms needed to get two mutations is proportional not to the inverse of the square of the point mutation rate (as it would be if both mutations had to appear simultaneously in the same organism), but to the inverse of the point mutation rate times the square root of the point mutation rate (because the first mutation would spread in the population before the second appeared, increasing the odds of getting a double mutation). To see what that means, consider that the point mutation rate is roughly one in a hundred million (1 in 10^8). So if two specific mutations had to occur at once, that would be an event of likelihood about 1 in 10^16. On the other hand, under some conditions they modeled, the likelihood would be about 1 in 10^12, ten thousand times more likely than the first situation. Durrett and Schmidt (2008) compare the number they got in their model to my literature citation1 that the probability of the development of chloroquine resistance in the malarial parasite is an event of order 1 in 10^20, and they remark that it "is 5 million times larger than the calculation we have just given." The implied conclusion is that I have greatly overstated the difficulty of getting two necessary mutations. Below I show that they are incorrect.
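
To see where the figures of 1 in 10^16 and 1 in 10^12 come from, here is a back-of-the-envelope check (a sketch built only from the round numbers quoted above, not from Durrett and Schmidt's full model):

```python
mu = 1e-8                    # rough point mutation rate per site per organism

simultaneous = 1 / mu**2     # both mutations at once in one organism: ~1e16
sequential   = 1 / mu**1.5   # first mutation spreads before the second: ~1e12

print(f"both at once:      ~1 in {simultaneous:.0e}")
print(f"mu^(-3/2) scaling: ~1 in {sequential:.0e}")
print(f"ratio: {simultaneous / sequential:.0e} times more likely")
```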

Serious Problems

Interesting as the paper is, there are some pretty serious problems in the way they applied their model to my arguments, some of which they owned up to in their reply, and some of which they didn't. When the problems are fixed, however, the resulting number is remarkably close to the empirical value of 1 in 10^20. I will go through the difficulties in turn.

The first problem was a simple oversight. They were modeling the mutation of a ten-nucleotide-long binding site for a regulatory protein in DNA, so they used a value for the mutation rate that was ten times larger than the point mutation rate. However, in the chloroquine-resistance protein discussed in The Edge of Evolution, since particular amino acids have to be changed, the correct rate to use is the point mutation rate. Because the waiting time in their model scales as the inverse three-halves power of the mutation rate, a tenfold overestimate of the rate yields roughly a 10^1.5, or about 30-fold, underestimate in applying their model to the protein. As they wrote in their reply, "Behe is right on this point." I appreciate their agreement here.

The second problem has to do with their choice of model. In their original paper they actually developed models for two situations -- for when the first mutation is neutral, and for when it is deleterious. When they applied it to the chloroquine-resistance protein, they unfortunately decided to use the neutral model. However, it is very likely that the first protein mutation is deleterious. As I wrote discussing a hypothetical case in Chapter 6 of The Edge:

Suppose, however, that the first mutation wasn't a net plus; it was harmful. Only when both mutations occurred together was it beneficial. Then on average a person born with the mutation would leave fewer offspring than otherwise. The mutation would not increase in the population, and evolution would have to skip a step for it to take hold, because nature would need both necessary mutations at once.... The Darwinian magic works well only when intermediate steps are each better ('more fit') than preceding steps, so that the mutant gene increases in number in the population as natural selection favors the offspring of people who have it. Yet its usefulness quickly declines when intermediate steps are worse than earlier steps, and is pretty much worthless if several required intervening steps aren't improvements.
If the first mutation is indeed deleterious, then Durrett and Schmidt (2008) applied the wrong model to the chloroquine-resistance protein. In fact, if the parasite with the first mutation is only 10 percent as fit as the unmutated parasite, then the population-spreading effect they calculate for neutral mutations is pretty much eliminated, as their own model for deleterious mutations shows. What do the authors say in their response about this possibility? "We leave it to biologists to debate whether the first PfCRT mutation is that strongly deleterious." In other words, they don't know; it is outside their interest as mathematicians. (Again, I appreciate their candor in saying so.) Assuming that the first mutation is seriously deleterious, their calculation is off by a factor of 10^4. In conjunction with the first mistake of 30-fold, their calculation so far is off by five-and-a-half orders of magnitude.

Making a String of Ones

The third problem also concerns the biology of the system. I'm at a bit of a loss here, because the problem is not hard to see, and yet in their reply they stoutly deny the mistake. In fact, they confidently assert it is I who am mistaken. I had written in my letter, ''... their model is incomplete on its own terms because it does not take into account the probability of one of the nine matching nucleotides in the region that is envisioned to become the new transcription-factor-binding site mutating to an incorrect nucleotide before the 10th mismatched codon mutates to the correct one.'' They retort, "This conclusion is simply wrong since it assumes that there is only one individual in the population with the first mutation." That's incorrect. Let me explain the problem in more detail.

Consider a string of ten digits, either 0 or 1. We start with a string that has nine 1's and just one 0. We want to convert the single 0 to a 1 without switching any of the 1's to a 0. Suppose that the switch rate for each digit is one per hundred copies of the string. That is, we copy the string repeatedly, and, if we focus on a particular digit, about every hundredth copy or so that digit has changed. Okay, now cover all of the numbers of the string except the 0, and let a random, automated procedure copy the string, with a digit-mutation rate of one in a hundred. After, say, 79 copies, we see that the visible 0 has just changed to a 1. Now we uncover the rest of the digits. What is the likelihood that one of them has changed in the meantime? Since all the digits have the same mutation rate, there is a nine in ten chance that one of the other digits has already changed from a 1 to a 0, and our mutated string still does not match the target of all 1's. In fact, only about one time out of ten will we uncover the string and find that no other digits have changed except the visible digit. Thus the effective mutation rate for transforming the string with nine matches out of ten to a string with ten matches out of ten will be only one tenth of the basic digit-mutation rate. If the string is a hundred long, the effective mutation rate will be one-hundredth the basic rate, and so on. (This is very similar to the problem of mutating a duplicate gene to a new selectable function before it suffers a degradative mutation, which has been investigated by Lynch and co-workers.2)
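
The one-in-ten figure follows from symmetry: all ten digits flip at the same rate, so the covered 0 is the first of the ten to flip only about one time in ten. A small Monte Carlo sketch (my own construction, not from the letter) bears this out:

```python
import random

def target_flips_first(n_digits=10, rate=0.01, trials=50_000):
    """Estimate the chance that the lone mismatched digit flips before
    any of the matched digits does. Each digit's time to first flip is
    drawn as an exponential with the same per-copy rate, a tie-free
    approximation of the geometric copying process for small rates."""
    wins = 0
    for _ in range(trials):
        times = [random.expovariate(rate) for _ in range(n_digits)]
        if times[0] == min(times):   # digit 0 is the single mismatch
            wins += 1
    return wins / trials

# ~1/10 for a ten-digit string: the effective rate for completing the
# match is about a tenth of the per-digit rate...
print(target_flips_first())                # ~0.10
# ...and ~1/100 for a hundred-digit string, as argued above.
print(target_flips_first(n_digits=100))    # ~0.01
```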

So, despite their self-assured tone, in fact on this point Durrett and Schmidt are "simply wrong." And, as I write in my letter, since the gene for the chloroquine resistance protein has on the order of a thousand nucleotides, rather than just the ten of Durrett and Schmidt's postulated regulatory sequence, the effective rate for the second mutation is several orders of magnitude less than they thought. Thus with the, say, two orders of magnitude mistake here, the factor of 30 error for the initial mutation rate, and the four orders of magnitude for mistakenly using a neutral model instead of a deleterious model, Durrett and Schmidt's calculation is a cumulative seven and a half orders of magnitude off. Since they had pointed out that their calculation was about five million-fold (about six and a half orders of magnitude) lower than the empirical result I cited, when their errors are corrected the calculation agrees pretty well with the empirical data.
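
The bookkeeping in that paragraph can be tallied in a few lines (a sketch using the factors exactly as stated in the text):

```python
import math

rate_fix   = math.log10(30)   # ~1.5 orders: point mutation rate, not 10x larger
model_fix  = 4.0              # deleterious first mutation, not the neutral model
string_fix = 2.0              # the "say, two orders" for the second mutation

total_fix = rate_fix + model_fix + string_fix   # ~7.5 orders of magnitude
shortfall = math.log10(5e6)                     # D&S were ~6.7 orders below 1 in 10^20

print(f"correction ~{total_fix:.1f} orders vs. shortfall ~{shortfall:.1f} orders")
```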

An Irrelevant Example

Now I'd like to turn to a couple of other points in Durrett and Schmidt's reply that aren't mistakes with their model, but which do reflect conceptual errors. As I quote above, they state in their reply, "This conclusion is simply wrong since it assumes that there is only one individual in the population with the first mutation." I have shown above that, despite their assertion, my conclusion is right. But where do they get the idea that "it assumes that there is only one individual in the population with the first mutation"? I wrote no such thing in my letter about "one individual." Furthermore, I "assumed" nothing. I merely cited empirical results from the literature. The figure of 1 in 10^20 is a citation from the literature on chloroquine resistance of malaria. Unlike their model, it is not a calculation on my part.

Right after this, in their reply Durrett and Schmidt say that the "mistake" I made is a common one, and they go on to illustrate "my" mistake with an example about a lottery winner. Yet their own example shows they are seriously confused about what is going on. They write:

When Evelyn Adams won the New Jersey lottery on October 23, 1985, and again on February 13, 1986, newspapers quoted odds of 17.1 trillion to 1. That assumes that the winning person and the two lottery dates are specified in advance, but at any point in time there is a population of individuals who have won the lottery and have a chance to win again, and there are many possible pairs of dates on which this event can happen.... The probability that it happens in one lottery 1 year is ~1 in 200.
No kidding. If one has millions of players, and any of the millions could win twice on any two dates, then the odds are certainly much better that somebody will win on some two dates than that Evelyn Adams would win on October 23, 1985, and February 13, 1986. But that has absolutely nothing to do with the question of changing a correct nucleotide to an incorrect one before changing an incorrect one to a correct one, which is the context in which this odd digression appears. What's more, it is not the type of situation that Durrett and Schmidt themselves modeled. They asked the question: given a particular ten-base-pair regulatory sequence, and a particular sequence that is matched in nine of ten sites to the regulatory sequence, how long will it take to mutate the particular regulatory sequence, destroying it, and then mutate the particular near-match sequence to a perfect-match sequence? What's even more to the point, it is not the situation that pertains in chloroquine resistance in malaria. There, several particular amino acid residues in a particular protein (PfCRT) have to mutate to yield effective resistance. It seems to me that the lottery example must be a favorite of Durrett and Schmidt's, and that they were determined to use it whether it fit the situation or not.

Multiplying Resources

The final conceptual error that Durrett and Schmidt commit is the gratuitous multiplication of probabilistic resources. In their original paper they calculated that the appearance of a particular double mutation in humans would have an expected time of appearance of 216 million years, if one were considering a one kilobase region of the genome. Since the evolution of humans from other primates took much less time than that, Durrett and Schmidt observed that if the DNA "neighborhood" were a thousand times larger, then lots of correct regulatory sites would already be expected to be there. But, then, exactly what is the model? And if the relevant neighborhood is much larger, why did they model a smaller neighborhood? Is there some biological fact they neglected to cite that justified the thousand-fold expansion of what constitutes a "neighborhood," or were they just trying to squeeze their results post-hoc into what a priori was thought to be a reasonable time frame?

When I pointed this out in my letter, Durrett and Schmidt did not address the problem. Rather, they upped the stakes. They write in their reply, "there are at least 20,000 genes in the human genome and for each gene tens if not hundreds of pairs of mutations that can occur in each one." The implication is that there are very, very many ways to get two mutations. Well, if that were indeed the case, why did they model a situation where two particular mutations -- not just any two -- were needed? Why didn't they model the situation where any two mutations in any of 20,000 genes would suffice? In fact, since that would give a very much shorter time span, why did the journal Genetics and the reviewers of the paper let them get away with such a miscalculation?

The answer of course is that in almost any particular situation, almost all possible double mutations (and single mutations and triple mutations and so on) will be useless. Consider the chloroquine-resistance mutation in malaria. There are about 10^6 possible single amino acid mutations in malarial parasite proteins, and 10^12 possible double amino acid mutations (where the changes could be in any two proteins). Yet only a handful are known to be useful to the parasite in fending off the drug, and only one is very effective -- the multiple changes in PfCRT. It would be silly to think that just any two mutations would help. The vast majority are completely ineffective. Nonetheless, it is a common conceptual mistake to naively multiply postulated "helpful mutations" when the numbers initially show too few.
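
The 10^12 figure is just the number of ways to pair up two of the roughly 10^6 possible single changes; a quick check (using the rough count quoted above):

```python
from math import comb

singles = 10**6               # rough count of possible single amino acid mutations
doubles = comb(singles, 2)    # unordered pairs of distinct single changes

print(f"possible double mutations: ~{doubles:.1e}")   # ~5.0e11, order 10^12
```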

A Very Important Point

Here's a final important point. Genetics is an excellent journal; its editors and reviewers are top notch; and Durrett and Schmidt themselves are fine researchers. Yet, as I show above, when simple mistakes in the application of their model to malaria are corrected, it agrees closely with empirical results reported from the field that I cited. This is very strong support that the central contention of The Edge of Evolution is correct: that it is an extremely difficult evolutionary task for multiple required mutations to occur through Darwinian means, especially if one of the mutations is deleterious. And, as I argue in the book, reasonable application of this point to the protein machinery of the cell makes it very unlikely that life developed through a Darwinian mechanism.

References:

(1) White, N. J. 2004. Antimalarial drug resistance. J. Clin. Invest. 113: 1084-1092.


(2) Lynch, M. and Conery, J.S. 2000. The evolutionary fate and consequences of duplicate genes. Science 290: 1151-1155.


Sniping from the dark?

The Evolutionary Argument from Ignorance

Cornelius Hunter 


Yesterday I looked at the enormous problems that the DNA, or genetic, code poses for evolutionary theory. Here, previously noted at Evolution News, is a paper that seems to have come to the same conclusion. The authors argue that the underlying patterns of the genetic code are not likely to be due to "chance coupled with presumable evolutionary pathways" (P-value < 10^-13), and conclude that they are "essentially irreducible to any natural origin."

A common response from evolutionists, when presented with evidence such as this, is that we still don't understand biology very well. This argument from ignorance goes all the way back to Darwin. He used it in Chapter 6 of the Origin to dismiss the problem of evolving the electric organs in fish, such as the electric eel (which isn't actually an eel). The Sage from Kent agreed that it is "impossible to conceive by what steps these wondrous organs" evolved, but that was OK, because "we do not even know of what use they are."

Setting aside the fact that Darwin's argument from ignorance was a non-scientific fallacy, it also was a setup for failure. For now, a century and a half later, we do know "what use they are." And it has just gotten worse for evolution.

It is another demonstration that arguments from ignorance, aside from being terrible arguments, are not good science. The truth is, when evolutionists today claim that the many problems with their chance theory are due to a lack of knowledge, they are throwing up a smoke screen.

Wednesday, 30 November 2016

A Whale of a problem for Darwinism II

Using I.D. to disprove I.D.

"What about evolution is random and what is not?"
Robert Crowther

Here's another one for my "you can't make this stuff up" file. I kid you not, this is a news story about a new peer-reviewed paper in PLoS Biology by Brian Paegel and Gerald Joyce of The Scripps Research Institute which explains that (all emphasis from here on is mine)

they have produced a computer-controlled system that can drive the evolution of improved RNA enzymes.
I couldn't write a funnier script if I tried. Sadly, these guys just don't get the joke.

The evolution of molecules via scientific experiment is not new. The first RNA enzymes to be "evolved" in the lab were generated in the 1990s. But what is exciting about this work is that the process has been made automatic. Thus evolution is directed by a machine without requiring human intervention -- other than providing the initial ingredients and switching the machine on.
But wait, it gets better.
Throughout the process, the evolution-machine can propagate the reaction itself, because whenever the enzyme population size reaches a predetermined level, the machine removes a fraction of the population and replaces the starting chemicals needed for the reaction to continue.
What? Predetermined? Predetermined by whom or by what? Oh, the evolution machine, which itself is a result of intelligent agency.
The authors sum it all up very nicely.


This beautifully illustrates what about evolution is random and what is not.

Missing links v. Darwin.

Billions of Missing Links: Hen's Eggs

Geoffrey Simmons 


Note: This is one of a series of posts excerpted from my book, Billions of Missing Links: A Rational Look at the Mysteries Evolution Can't Explain.

When it comes to citing examples of purposeful design, nearly every author likes to point out the hen's egg. It's really quite remarkable. Despite having a shell that is a mere 0.35 mm thick, an egg doesn't break when a parent sits on it. According to Dr. Knut Schmidt-Nielsen,

A bird egg is a mechanical structure strong enough to hold a chick securely during development, yet weak enough to break out of. The shell must let oxygen in and carbon dioxide out, yet be sufficiently impermeable to water to keep the contents from drying out.
Under microscopy, one can see the shell is a foamlike structure that resists cracking. Gases and water pass through 10,000 pores that average 17 micrometers in diameter. Ultimately, 6 liters of oxygen will have been taken in and 4.5 liters of carbon dioxide given off. The yolk is its food. All life support systems are self-contained, like a space shuttle.
All hen's eggs are ready to hatch on the twenty-first day. Every day is precisely preprogrammed. The heart starts beating on the sixth day. On the nineteenth day the embryo uses its egg tooth to puncture the air sac (beneath the flat end) and then takes two days to crack through the shell.

Giving natural selection a hand?

The Rest of the Story -- Eugenics, Racism, Darwinism

Sarah Chaffee 



According to its most ardent proponents, a widespread embrace of evolutionary theory is a big win-win not only for science but for culture and ethics. Our recent report "Darwin's Corrosive Idea" handily dispels that rosy picture as it pertains to the present day. As for history, Jason Jones and John Zmirak writing at The Stream helpfully remind readers of the link between eugenics, racism, and Darwinism.

Their specific topic is Margaret Sanger and the documentary Maafa 21: Black Genocide. Here's what they say about Darwin and how his arguments were used to justify eugenics:

The eugenicists' arrogant certainty that, because they had inherited money and power, they were genetically superior to the rest of the human race, found in Charles Darwin's theories an ideal pretext and a program: to take the survival of the fittest and make it happen faster, by stopping the "unfit" from breeding. The goal, in Margaret Sanger's own words, was "More Children from the Fit, Fewer from the Unfit." Instead of seeing the poor as victims of injustice or targets for Christian charity, the materialism these elitists took from Darwin assured them that the poor were themselves the problem -- that they were inferior, deficient and dangerous down to the marrow of their bones.

The authors note that the eugenics movement itself was undergirded by racism. The video Maafa 21, they note, links the rise of eugenics to white anxiety about the "negro problem" following the end of the Civil War.

In his book Darwin Day in America, Center for Science & Culture associate director John West has written extensively about the social damage linked to Darwinism.

Jones and Zmirak bring up some harrowing examples, among them the observation that Sanger's friend Lothrop Stoddard was a leader in the Massachusetts Ku Klux Klan and wrote a book Hitler called his "bible." A speaker Sanger invited to a population conference, Eugen Fischer, had operated a concentration camp in Africa imprisoning natives. Jones and Zmirak note, "It was Fischer's book on eugenics, which Hitler had read in prison, that convinced Hitler of its central importance." For more historical background, read historian Richard Weikart's books including his most recent, Hitler's Religion.

They say that history is written by the victors. With evolutionary theory holding sway in the media and academia, it's little wonder we rarely hear about these connections and events.

Life's machine code v. Darwin.

My Dear Watson: Four Observations on the DNA Code and Evolution

Cornelius Hunter 


The DNA code is used in cells to translate a sequence of nucleotides into a sequence of amino acids, which then make up a protein. In the past fifty years we have learned four important things about the code:

1. The DNA code is universal. There are minor variations scattered about, but the same canonical code is found across the species.

2. The DNA code is special. It is not just some random, off-the-shelf code. It has unique properties that, for example, make the translation process more robust to mutations. The code has been called "one in a million," but it probably is even more special than that. One study found that the code optimizes "a combination of several different functions simultaneously."

3. Some of the special properties of the DNA code only rarely confer benefit. Many of the code's special properties deal with rare mutation events. If such properties could arise via random mutation in an individual organism, their benefit would not be common.

4. The DNA code's fitness landscape has dependencies on the DNA coding sequences and so favors stasis. Changes in the DNA code may well wreak havoc as the DNA coding sequences are suddenly not interpreted correctly. So the fitness landscape, at any given location in the code design space, is not only rugged but often puts the code at a local optimum, thus freezing evolution at that code.

Observation #1 above, according to evolutionary theory, means that the code is the ultimate homology and must have been present in the last universal common ancestor (LUCA). There was essentially zero evolution of the code allowed over the course of billions of years.

This code stasis can be understood, from an evolutionary perspective, using Observation #4. Given the many dependencies on the DNA coding sequences, the code can be understood to be at a local optimum and so impossible to evolve.

Hence Francis Crick's characterization, and subsequent promotion by later evolutionists, of the code as a "frozen accident." Somehow the code arose, but was then strongly maintained and unevolvable.

But then there is Observation #2. The code has been found to be not mundane, but special. This falsified the "frozen accident" characterization, as the code is clearly not an accident. It also caused a monumental problem. While evolutionists could understand Observation #1, the universality of the code, as a consequence of the code being at a local fitness optimum, Observation #2 tells us that the code would not have just luckily been constructed at its present design.

If evolution somehow created a code to begin with, it would be at some random starting point. Evolution would have no a priori knowledge of the fitness landscape. There is a large number of possible codes, so it would be incredibly lucky for evolution's starting point to be anywhere near the special, canonical code we observe today. There would be an enormous evolutionary distance to travel between an initial random starting point, and the code we observe.

And yet there is not even so much as a trace of such a monumental evolutionary process. This would be an incredible convergence. In biology, when we see convergence, we usually also see variety. The mammalian and cephalopod eyes are considered to be convergent, but they also have fundamental differences. And in other species, there are all kinds of different vision systems. The idea that the universal DNA code is the result of convergence would be very suspect. Why are there no other canonical codes found? Why are there not more variants of the code? To have that much evolutionary distance covered, and to converge with that level of precision, would be very strange.

And of course, in addition to this strange absence of any evidence of such a monumental evolutionary process, there is the problem described above with evolving the code to begin with. The code's fitness landscape is rugged and loaded with many local optima. Making much progress at all in evolving the code would be difficult.
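
The freezing claim is at bottom a claim about search on a rugged landscape, and a toy simulation makes it vivid (the landscape here is entirely hypothetical, assumed for illustration only, not a model of the actual code):

```python
import random

random.seed(1)

# Give every 12-bit string a random fitness, then climb uphill by
# single-bit changes. On a rugged landscape almost every climb
# freezes at a nearby local optimum, far short of the global one.
N = 12
fitness = [random.random() for _ in range(2**N)]

def hill_climb(x):
    while True:
        neighbors = [x ^ (1 << b) for b in range(N)]   # one-change variants
        best = max(neighbors, key=lambda n: fitness[n])
        if fitness[best] <= fitness[x]:
            return x    # no uphill neighbor: frozen at a local optimum
        x = best

peak = max(range(2**N), key=lambda n: fitness[n])      # the global optimum
runs = [hill_climb(random.randrange(2**N)) for _ in range(20)]
stuck = sum(1 for x in runs if x != peak)
print(f"{stuck}/20 climbs froze at a sub-global local optimum")
```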

But then there is Observation #3. Not only do we not see traces of the required monumental process of evolving the code across a great distance, and not only would this process be almost immediately halted by the many local optima in the fitness landscape, but what fitness improvements could actually be realized would not likely be selected for, because said improvements rarely actually confer their benefit.

While these problems are obviously daunting, we have so far taken yet another tremendous problem for granted: the creation of the initial code, as a starting point.

We have discussed above the many problems with evolving today's canonical code from some starting point, all the while allowing for such a starting point simply to magically appear. But that, alone, is a big problem for evolution. The evolution of any code, even a simple code, from no code, is a tremendous problem.

Finally, a possible explanation for these several and significant problems with the evolution of the DNA code is the hypothesis that the code did not actually evolve so much as construct itself. Just as the right sequence of amino acids will inevitably fold into a functional protein, so too perhaps the DNA code simply is the consequence of biochemical interactions and reactions. In this sense the code would not evolve from random mutations, but rather would be inevitable. In that case, there would be no lengthy evolutionary pathway to traverse.

Now I don't want to give the impression that this hypothesis is mature or fleshed out. It is extremely speculative. But there is another, more significant, problem with it: It is not evolution.

If true, this hypothesis would confirm design. In other words, a chemically determined pathway, which as such is written into the very fabric of matter and nature's laws, would not only be profound but teleological. The DNA code would be built into biochemistry.

And given Observation #2, it is a very special, unique, detailed, code that would be built into biochemistry. It would not merely be a mundane code that happened to be enabled or determined by biochemistry, but essentially an optimized code. Long live Aristotle.

The problem is there simply is no free lunch. Evolutionists can try to avoid the science, but there it is.

Nature's world wide web v. Darwin.

Evolutionist Recommends "Listening to Other Arguments," Except When It Comes to Evolution
David Klinghoffer


We may be on the third wave of a scientific revolution in biology. It may be so big that the story "no doubt has Ernst Mayr hyperventilating in his grave," thinks evolutionary biologist Nora Besansky of the University of Notre Dame. Mayr influenced a generation of evolutionists. Is one of his core Darwinian concepts unraveling? In Science Magazine, Elizabeth Pennisi sets the stage:

Most of those who studied animals had instead bought into the argument by the famous mid-20th century evolutionary biologist Ernst Mayr that the formation of a new species requires reproductive isolation. Mayr and his contemporaries thought that the offspring of any hybrids would be less fit or even infertile, and would not persist. To be sure, captive animals could be interbred: Breeders crossed the African serval cat with domestic cats to produce the Savannah cat, and the Asian leopard cat with domestic breeds to produce the Bengal cat. There's even a "liger," the result of a zoo mating of a tiger and a lion. But like male mules, male ligers are sterile, supporting the notion that in nature, hybridization is mostly a dead end. [Emphasis added.]
Indeed, the biological concept of "species" practically requires reproductive isolation. Hybridization, while known since ancient civilizations bred mules, seems unnatural and rare. It played little role in classical Darwinian theory, which relies on emergent variation and selection for the origin of species. According to hybridization specialist Eugene M. McCarthy in "Darwin's Assessment of Hybridization," "Darwin did come to attribute more significance to hybridization in his later years," but it never gained significant traction in any edition of the Origin, his most widely read book. "Certainly such ideas were never canonized among the dogmas of neo-Darwinian theory."

For Darwin's branching tree-of-life diagram to work, innovations must be passed along in ancestor-descendent relationships, moving vertically up the branches over time by inheritance of chance mutations. Hybrids interfere with this picture by allowing branches to share genetic information horizontally all at once. And if the branches can re-join by back-crossing, the tree metaphor becomes more like a net. Pennisi understands the challenge to Darwinism in her title, "Shaking Up the Tree of Life," when she says, "Species were once thought to keep to themselves. Now, hybrids are turning up everywhere, challenging evolutionary theory."

The revolution has come in three waves. The first involved microbes, when horizontal gene transfer (HGT), sometimes called lateral gene transfer (LGT), was found to be common (see Denyse O'Leary's article last year, "Horizontal Gene Transfer: Sorry, Darwin, It's Not Your Evolution Any More"). HGT doesn't just complicate efforts to construct phylogenetic trees, she says; "because where HGT is in play, there just isn't a tree of life." In another Evolution News article, Paul Nelson cites Woese, Koonin and other evolutionists going out on a limb to dispute the existence of a universal tree of life -- at least when it comes to the origin of the three kingdoms of microbes.

The second wave involved plants. As far back as 1949, Pennisi says, it was a radical idea to suggest that plant species shared genes via hybridization. Botanists grew to accept the idea, but zoologists resisted it:

In 1949, botanist Edgar Anderson suggested that plants could take on genes from other species through hybridization and back crosses, where the hybrid mates with the parent species. He based this then-radical proposal on genetic crosses and morphological studies of flowering plants and ferns suggesting mixtures of genes from different species in individual genomes. Five years later, with fellow botanist G. Ledyard Stebbins, he argued such gene exchange could lead to new plant species. Their ideas quickly hit home with other plant researchers, but not with zoologists. "There was a very different conventional view in botany than in zoology," Rieseberg says.
Now, the third wave is encompassing the rest of biology: animals. (This wave hits close to home, involving as it does the human lineage.) Starting in the 1990s, zoologists began seeing hybridization as more than a breeder's trick. Pennisi gives three examples of the growing realization that natural hybridization contributes to speciation in animals, too.

Darwin's finches: Peter and Rosemary Grant witnessed a hybrid finch establishing its own population, with its own phenotype, in its own ecological niche. Pennisi tells the story of "Big Bird" in a separate Science Magazine article.

Butterflies: James Mallet's work on Ecuadorian butterflies a decade ago, building on earlier work by Larry Gilbert, proved that more than 30% of Heliconius species formed hybrids, "swapping wing patterns and sometimes generating entirely new ones."

Neandertals: "In 2010, a comparison between the genomes of a Neandertal and people today settled what anthropologists and geneticists had debated for decades: Our ancestors had indeed mated with their archaic cousins, producing hybrid children," Pennisi says in the lead story. "They, in turn, had mated with other modern humans, leaving their distant descendants -- us -- with a permanent Neandertal legacy. Not long afterward, DNA from another archaic human population, the Denisovans, also showed up in the modern human genome, telling a similar story."

Finding hybridization in the human lineage "created a shock wave," Pennisi says. She quotes Michael Arnold, whose imagination was captured by this important but long overlooked aspect of inheritance: "That genomic information overturned the assumption that everyone had." Pennisi helps us consider the implications for evolutionary theory:

The techniques that revealed the Neandertal and Denisovan legacy in our own genome are now making it possible to peer into the genomic histories of many organisms to check for interbreeding. The result: "Almost every genome study where people use sensitive techniques for detecting hybridization, we find [it] -- we are finding hybridization events where no one expected them," says Loren Rieseberg, an evolutionary biologist at the University of British Columbia in Vancouver, Canada.
All these data belie the common idea that animal species can't hybridize or, if they do, will produce inferior or infertile offspring -- think mules. Such reproductive isolation is part of the classic definition of a species. But many animals, it is now clear, violate that rule: Not only do they mate with related species, but hybrid descendants are fertile enough to contribute DNA back to a parental species -- a process called introgression.

The revolution was slow in coming till rapid genomic sequencing techniques became available. Now, with a plenitude of sequences published, what biologists had come to accept in microbes is forcing them to reconsider what they thought they knew about evolution for the entire tree of life. Pennisi all but announces the revolution:

Biologists long ago accepted that microbes can swap DNA, and they are now coming to terms with rampant gene flow among more complex creatures. "A large percent of the genome is free to move around," notes Chris Jiggins, an evolutionary biologist at the University of Cambridge in the United Kingdom. This "really challenges our concept of what a species is." As a result, where biologists once envisioned a tree of life, its branches forever distinct, many now see an interconnected web.
Hybridization, says Mallet, "has become big news and there's no escaping it."

The tree metaphor is being replaced with a net or web. That's the point where Pennisi describes Ernst Mayr, Darwin's paramount tree gardener, hyperventilating in his grave. In a new world of rampant hybridization and introgression, what is to become of neo-Darwinism? Pennisi gives a glimpse of the implications, hinting at a revolutionary new view of the origin of species. Putting a happy face on the revolution, she ends this way:

The Grants believe that complete reproductive isolation is outdated as a definition of a species. They have speculated that when a species is no longer capable of exchanging genes with any other species, it loses evolutionary potential and may become more prone to extinction.
This idea has yet to be proven, and even Mallet concedes that biologists don't fully understand how hybridization and introgression drive evolution -- or how to reconcile these processes with the traditional picture of species diversifying and diverging over time. Yet for him and for others, these are heady times. "It's the world of hybrids," Rieseberg says. "And that's wonderful."

It will certainly be wonderful for intelligent design theorists, but it's hard to see how Darwinians will cope with the revolution. Why? Because HGT and hybridization involve the shuffling of pre-existing genetic information, not the origin of new genetic information. Information isn't emerging by accidental mutations; it is being shared in a biological World Wide Web! Pennisi suggests this may be advantageous:

As examples of hybridization have multiplied, so has evidence that, at least in nature, swapping DNA has its advantages. When one toxic butterfly species acquires a gene for warning coloration from another toxic species, both species benefit, as a single encounter with either species is now enough to teach predators to avoid both. Among canids, interbreeding with domestic dogs has given wolves in North America a variant of the gene for an immune protein called β-defensin. The variant gives wolf-dog hybrids and their descendants a distinctive black pelt and better resistance to canine distemper, Wayne says. In Asia, wolf-dog matings may have helped Tibetan mastiffs cope with the thin air at high altitudes. And interspecies gene flow has apparently allowed insecticide resistance to spread among malaria-carrying mosquitoes and the black flies that transmit river blindness.
In each case, the beneficial genetic changes unfolded faster than they would have by the normal process of mutation, which often changes DNA just one base at a time. Given the ability of hybridization and introgression to speed adaptive changes, says Baird, "closing that door [with reproductive isolation] is not necessarily going to be a good thing for your long-term survival."

Think of the possibilities for design theorists. We can see strategies for robustness with information sharing, allowing animals to survive environmental perturbations or recharge damaged genomes, for instance. Indeed, all kinds of "wonderful" possibilities open up for exploring design when information sharing is available in the explanatory toolkit. New vistas for explaining symbioses, ecosystems, and variability emerge. Could some apparent "innovations" be loans from other species? How can the information-sharing biosphere inform practical applications for medicine?


In the wonderful new "world of hybrids," ID advocates can take the lead, breathing new life into biological explanations, while the neo-Darwinists hyperventilate to delay the inevitable.

Monday, 28 November 2016

On Russia's war on religious liberty III: The Watchtower Society's commentary.

International Experts Discredit Russia’s “Expert Analysis” in Identifying “Extremism”

This is Part 3 of a three-part series based on exclusive interviews with noted scholars of religion, politics, and sociology, as well as experts in Soviet and post-Soviet studies.

ST. PETERSBURG, Russia—Jehovah’s Witnesses and their literature have been subject to court-appointed analysis by the Center for Sociocultural Expert Studies in Moscow. One study was completed in August 2015 and was used as the basis for an ongoing case against the Witnesses’ New World Translation of the Holy Scriptures, while another study is pending.
  
Highly regarded experts inside and outside of Russia debunk these studies. One such scholar, Dr. Mark R. Elliott, founding editor of the East-West Church and Ministry Report, observes: “State-approved ‘expert’ witnesses on religious questions, including those who disapproved Jehovah’s Witnesses’ scriptures, typically lack expertise and credibility as they issue ill-founded ‘opinions’ on matters of faith.”
   
  Specifically addressing the Center for Sociocultural Expert Studies, Dr. Roman Lunkin, head of the Center for Religion and Society at the Institute of Europe, Russian Academy of Sciences in Moscow, notes that “not one of the experts has a degree in religious studies and they are not even familiar with the writings of Jehovah’s Witnesses. Their analysis included quotes that were taken from information provided by the Irenaeus of Lyon Centre, a radical Orthodox anti-cult organization known for opposing Jehovah’s Witnesses, as well as many other religions and denominations.”

  “Unfortunately, I would have to agree with Dr. Lunkin,” states Dr. Ekaterina Elbakyan, professor of sociology and management of social processes at the Moscow Academy of Labor and Social Relations. “It is true that in Russia today religious expert studies are often performed by people who are not specialists, and are made-to-order, so to speak, where an expert is not free to state his true findings.”

 Dr. Elbakyan, who participated in two trials in Taganrog and was present as a specialist-expert in the appellate court in Rostov-on-Don, further explains: “I saw with my own eyes the video material on the basis of which Jehovah’s Witnesses were charged with extremism. Twice I gave a detailed commentary in court explaining that this was a typical Christian religious service and had nothing to do with extremism, but the court did not take the expert opinion into consideration. It is impossible not to see this as a clear and systematic trend toward religious discrimination. As long as this trend continues, there are, of course, no guarantees that believers will cease to be classified as ‘extremists’ because of their beliefs.”

Media contacts:

International: David A. Semonian, Office of Public Information, 1-718-560-5000

Russia: Yaroslav Sivulskiy, 7-812-702-2691

Sub-optimal design or sub-optimal analysis?

Shoddy Engineering or Intelligent Design? Case of the Mouse's Eye

Richard Sternberg


We often hear from Darwinians that the biological world is replete with examples of shoddy engineering, or, as they prefer to put it, bad design. One such case of really poor construction is the inverted retina of the vertebrate eye. As we all know, the retina of our eyes is configured all wrong because the cells that gather photons, the rod photoreceptors, are behind two other tissue layers. Light first strikes the ganglion cells and then passes by or through the bipolar cells before reaching the rod photoreceptors. Surely, a child could have arranged the system better -- so they tell us.

The problem with this story of supposed unintelligent design is that it is long on anthropomorphisms and short on evidence. Consider nocturnal mammals. Night vision for, say, a mouse is no small feat. Light intensities at night can be a million times lower than those of the day, so the rod cells must be optimized -- yes, optimized -- to capture even the few stray photons that strike them. Given the backwards organization of the mouse's retina, how is this scavenging of light accomplished? Part of the solution is that the ganglion and bipolar cell layers are thinner in nocturnal mammals. But other optimizations must also occur. Enter the cell nucleus and "junk" DNA.

Only around 1.5 percent of mammalian DNA encodes proteins. Since it has become lore to equate protein-coding regions of the genome with "genes" and "information," the remaining 98.5 percent or so of DNA has been dismissed as junk. Yet, for what is purported to be mere genetic gibberish, it is strikingly ordered along the length of the chromosome. Like the barcodes on consumer items that we are all familiar with, each chromosome has a particular banding pattern. This pattern reflects how different types of DNA sequences are linearly distributed. The "core" of a mammalian chromosome, the centromere, and the genomic segments that frame it largely consist of long tracts of species-specific repetitive elements -- these areas give rise to "C-bands" after a chemical stain has been applied. Then, alternating along the chromosome arms are two other kinds of bands that appear after different staining procedures. One, called "R-bands," is rich in protein-coding genes and in a particular class of retrotransposon, SINEs (Short Interspersed Nuclear Elements); SINE sequence families are restricted to certain taxonomic groups. The other, termed "G-bands," has a high concentration of another class of retrotransposon, LINEs (Long Interspersed Nuclear Elements), which can likewise be used to distinguish between species. Finally, the ends of the chromosome, the telomeres, are composed of a completely different set of repetitive DNA sequences.

In general, C-bands and G-bands are complexed with proteins and RNAs to give a more compact organization called heterochromatin, whereas R-bands have a more open conformation referred to as euchromatin.

Why bother with such details? Well, each of these chromosome bands has a preferred location in the cell nucleus. Open any good textbook on mammalian anatomy and you will note that cell types can often be distinguished by the shape and size of the nucleus, as well as the positions of euchromatin and heterochromatin in that organelle. Nevertheless, most cell nuclei follow a general rule where euchromatin is located in the interior, in various compartments that are dense with transcription factories, RNA processing machinery, and many other components. Heterochromatin, on the other hand, is found mainly around the periphery of the nucleus. A striking exception to this principle is found in the nuclei of rod cells in nocturnal mammals.

Reporting in the journal Cell, Irina Solovei and coworkers discovered that, in contrast to the nuclear organization seen in ganglion and bipolar cells of the retina, a remarkable inversion of chromosome band positions occurs in the rod photoreceptors of mammals with night vision (Solovei I, Kreysing M, Lanctôt C, Kösem S, Peichl L, Cremer T, Guck J, Joffe B. 2009. "Nuclear Architecture of Rod Photoreceptor Cells Adapts to Vision in Mammalian Evolution." Cell 137(2): 356-368). First, the C-bands of all the chromosomes, including the centromeres, coalesce in the center of the nucleus to produce a dense chromocenter. Keep in mind that the DNA backbone of this chromocenter in different mammals is repetitive and highly species-specific. Second, a shell of LINE-rich G-band sequences surrounds the C-bands. Finally, the R-bands, including all examined protein-coding genes, are placed next to the nuclear envelope. The nucleus of this cell type is also smaller, making the pattern more compact. This ordered movement of billions of base pairs according to their "barcode status" begins in the rod photoreceptor cells at birth, at least in the mouse, and continues for weeks and months.

Why the elaborate repositioning of so much "junk" DNA in the rod cells of nocturnal mammals? The answer is optics. A central cluster of chromocenters surrounded by a layer of LINE-dense heterochromatin enables the nucleus to act as a converging lens, so that photons can pass without hindrance to the rod outer segments that sense light. In other words, the genome regions with the highest refractive index -- undoubtedly enhanced by the proteins bound to the repetitive DNA -- are concentrated in the interior, followed by the sequences with the next highest refractivity, guarding against the scattering of light. The nuclear genome is thus transformed into an optical device designed to assist in the capture of photons. This chromatin-based convex (focusing) lens is so well constructed that it still works even when the normally ordered lattice of rod cells is experimentally disordered. Normal cell nuclei, by contrast, actually scatter light.
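To see why this layering should matter optically, here is a minimal paraxial ray-optics sketch -- my own illustration, not an analysis from the Solovei et al. paper. It models a nucleus as a two-layer concentric sphere and computes its optical power with standard ABCD transfer matrices. The refractive indices (cytoplasm ~1.35, euchromatin ~1.36, dense heterochromatin ~1.41) and the radii are assumed round numbers, chosen only to make the comparison concrete.

```python
import numpy as np

def refraction(n1, n2, R):
    # Paraxial refraction at a spherical surface (reduced-angle convention).
    # R > 0 when the centre of curvature lies beyond the surface.
    return np.array([[1.0, 0.0], [-(n2 - n1) / R, 1.0]])

def translation(d, n):
    # Free propagation over axial distance d inside a medium of index n.
    return np.array([[1.0, d / n], [0.0, 1.0]])

def two_layer_ball_power(n_out, n_shell, n_core, R1, R2):
    # Optical power of a concentric two-layer sphere: outer shell of
    # radius R1, core of radius R2. Positive = converging, negative = diverging.
    elements = [
        refraction(n_out, n_shell, +R1),   # enter outer shell
        translation(R1 - R2, n_shell),
        refraction(n_shell, n_core, +R2),  # enter core
        translation(2 * R2, n_core),
        refraction(n_core, n_shell, -R2),  # leave core
        translation(R1 - R2, n_shell),
        refraction(n_shell, n_out, -R1),   # leave outer shell
    ]
    M = np.eye(2)
    for el in elements:                    # ABCD matrices compose right to left
        M = el @ M
    return -M[1, 0]

# Illustrative values only, not measurements from the paper:
n_cyto, n_eu, n_het = 1.35, 1.36, 1.41    # refractive indices
R_nucleus, R_core = 2.4, 1.2              # radii in microns

for label, p in [
    ("inverted (rod) nucleus, dense core",
     two_layer_ball_power(n_cyto, n_eu, n_het, R_nucleus, R_core)),
    ("conventional nucleus, dense shell",
     two_layer_ball_power(n_cyto, n_het, n_eu, R_nucleus, R_core)),
]:
    kind = "converging" if p > 0 else "diverging"
    print(f"{label}: power = {p:+.4f} per um ({kind})")
```

With these toy numbers, the inverted arrangement (dense, high-index chromatin in the core) comes out with positive power -- a converging lens -- while the conventional arrangement (dense chromatin at the periphery) comes out with negative power, which fits the report that ordinary nuclei scatter rather than focus light.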

So the next time someone tells you that it "strains credulity" to think that more than a few pieces of "junk DNA" could be functional in the cell -- that the data only point to the lack of design and suboptimality -- remind them of the rod cell nuclei of the humble mouse.

Trying to reduce the irreducible?

Refuting Behe's Critics, Meyer Gives Four Reasons the Flagellum Predates the Type III Secretory System

David Klinghoffer 




Michael Behe's signature argument in Darwin's Black Box would be seriously bruised if it turned out the bacterial flagellar motor had a simpler evolutionary antecedent. Critics of intelligent design thought they had identified such a precursor in the form of the Type III Secretory System, found in some bacteria.

Behe and others have since shown why it's far likelier that the flagellum is the precursor, thus leaving Dr. Behe's argument intact. In response, the critics either simply repeat their claim as if it hadn't been refuted, or they go silent -- an implicit admission they were wrong, and Behe was right.

How exactly do we know the flagellum came first? In a 12-minute video discussion, Stephen Meyer explains that we know this for four good and independent reasons. Watch for yourself, and if you're still not convinced, let me know why not.

Mike Behe's case for ID from irreducible complexity has withstood two decades of fire from scientists and others whose picture of reality depends on denying that biology bears evidence of design. We are celebrating the 20th anniversary of Darwin's Black Box with a new hour-long documentary written and directed by John West, Revolutionary: Michael Behe and the Mystery of Molecular Machines. Get your copy of Revolutionary, on DVD or Blu-ray, today.