Friday, 2 December 2016

Seeking the edge of Darwinism.

Best of Behe: Waiting Longer for Two Mutations
Michael Behe

Editor's note: In celebration of the 20th anniversary of biochemist Michael Behe's pathbreaking book Darwin's Black Box and the release of the new documentary Revolutionary: Michael Behe and the Mystery of Molecular Machines, we are highlighting some of Behe's "greatest hits." The following was published by Discovery Institute on March 20, 2009. Remember to get your copy of Revolutionary now! See the trailer here.


An interesting paper appeared in a 2008 issue of the journal Genetics, "Waiting for Two Mutations: With Applications to Regulatory Sequence Evolution and the Limits of Darwinian Evolution" (Durrett, R & Schmidt, D. 2008. Genetics 180: 1501-1509). As the title implies, it concerns the time one would have to wait for Darwinian processes to produce some helpful biological feature (here, regulatory sequences in DNA) if two mutations are required instead of just one. It is a theoretical paper, which uses models, math, and computer simulations to reach conclusions, rather than empirical data from field or lab experiments, as my book The Edge of Evolution does. The authors declare in the abstract of their manuscript that they aim "to expose flaws in some of Michael Behe's arguments concerning mathematical limits to Darwinian evolution." Unsurprisingly (bless their hearts), they pretty much do the exact opposite.

Since the journal Genetics publishes letters to the editors (most journals don't), I sent a reply to the journal. The original paper by Durrett and Schmidt can be found here, my response here, and their reply here.

In their paper, as I write in my reply:

They develop a population genetics model to estimate the waiting time for the occurrence of two mutations, one of which is premised to damage an existing transcription-factor-binding site, and the other of which creates a second, new binding site within the nearby region from a sequence that is already a near match with a binding site sequence (for example, 9 of 10 nucleotides already match).
The most novel point of their model is that, under some conditions, the number of organisms needed to get two mutations is proportional not to the inverse of the square of the point mutation rate (as it would be if both mutations had to appear simultaneously in the same organism), but to the inverse of the point mutation rate times the square root of the point mutation rate (because the first mutation would spread in the population before the second appeared, increasing the odds of getting a double mutation). To see what that means, consider that the point mutation rate is roughly one in a hundred million (1 in 10^8). So if two specific mutations had to occur at once, that would be an event of likelihood about 1 in 10^16. On the other hand, under some conditions they modeled, the likelihood would be about 1 in 10^12, ten thousand times more likely than the first situation. Durrett and Schmidt (2008) compare the number they got in their model to my literature citation (1) that the probability of the development of chloroquine resistance in the malarial parasite is an event of order 1 in 10^20, and they remark that it "is 5 million times larger than the calculation we have just given." The implied conclusion is that I have greatly overstated the difficulty of getting two necessary mutations. Below I show that they are incorrect.
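The scaling comparison above is easy to check with a few lines of arithmetic. This is only a sketch of the round figures quoted in the text (the 10^-8 point mutation rate, and the two scaling laws), not the authors' actual population-genetics model:

```python
# Round figures from the text: point mutation rate ~1 in 10^8 per site.
mu = 1e-8

# Both mutations appearing at once in the same organism: ~mu^2.
simultaneous = mu ** 2        # about 1 in 10^16

# Durrett and Schmidt's sequential model: ~mu * sqrt(mu) = mu^1.5,
# because the first (neutral) mutation spreads before the second appears.
sequential = mu ** 1.5        # about 1 in 10^12

# The sequential route comes out ten thousand times more likely.
advantage = sequential / simultaneous
print(advantage)              # approximately 10000
```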

Serious Problems

Interesting as their model is, there are some pretty serious problems in the way they applied it to my arguments, some of which they owned up to in their reply, and some of which they didn't. When the problems are fixed, however, the resulting number is remarkably close to the empirical value of 1 in 10^20. I will go through the difficulties in turn.

The first problem was a simple oversight. They were modeling the mutation of a ten-nucleotide-long binding site for a regulatory protein in DNA, so they used a value for the mutation rate that was ten times larger than the point mutation rate. However, in the chloroquine-resistance protein discussed in The Edge of Evolution, since particular amino acids have to be changed, the correct rate to use is the point mutation rate. That leads to an underestimate of a factor of about 30 in applying their model to the protein. As they wrote in their reply, "Behe is right on this point." I appreciate their agreement here.

The second problem has to do with their choice of model. In their original paper they actually developed models for two situations -- for when the first mutation is neutral, and for when it is deleterious. When they applied it to the chloroquine-resistance protein, they unfortunately decided to use the neutral model. However, it is very likely that the first protein mutation is deleterious. As I wrote discussing a hypothetical case in Chapter 6 of The Edge:

Suppose, however, that the first mutation wasn't a net plus; it was harmful. Only when both mutations occurred together was it beneficial. Then on average a person born with the mutation would leave fewer offspring than otherwise. The mutation would not increase in the population, and evolution would have to skip a step for it to take hold, because nature would need both necessary mutations at once.... The Darwinian magic works well only when intermediate steps are each better ('more fit') than preceding steps, so that the mutant gene increases in number in the population as natural selection favors the offspring of people who have it. Yet its usefulness quickly declines when intermediate steps are worse than earlier steps, and is pretty much worthless if several required intervening steps aren't improvements.
If the first mutation is indeed deleterious, then Durrett and Schmidt (2008) applied the wrong model to the chloroquine-resistance protein. In fact, if the parasite with the first mutation is only 10 percent as fit as the unmutated parasite, then the population-spreading effect they calculate for neutral mutations is pretty much eliminated, as their own model for deleterious mutations shows. What do the authors say in their response about this possibility? "We leave it to biologists to debate whether the first PfCRT mutation is that strongly deleterious." In other words, they don't know; it is outside their interest as mathematicians. (Again, I appreciate their candor in saying so.) Assuming that the first mutation is seriously deleterious, then their calculation is off by a factor of 10^4. In conjunction with the first mistake of 30-fold, their calculation so far is off by five-and-a-half orders of magnitude.

Making a String of Ones

The third problem also concerns the biology of the system. I'm at a bit of a loss here, because the problem is not hard to see, and yet in their reply they stoutly deny the mistake. In fact, they confidently assert it is I who am mistaken. I had written in my letter, "... their model is incomplete on its own terms because it does not take into account the probability of one of the nine matching nucleotides in the region that is envisioned to become the new transcription-factor-binding site mutating to an incorrect nucleotide before the 10th mismatched codon mutates to the correct one." They retort, "This conclusion is simply wrong since it assumes that there is only one individual in the population with the first mutation." That's incorrect. Let me explain the problem in more detail.

Consider a string of ten digits, either 0 or 1. We start with a string that has nine 1's, and just one 0. We want to convert the single 0 to a 1 without switching any of the 1's to a 0. Suppose that the switch rate for each digit is one per hundred copies of the string. That is, we copy the string repeatedly, and, if we focus on a particular digit, about every hundredth copy or so that digit has changed. Okay, now cover all of the numbers of the string except the 0, and let a random, automated procedure copy the string, with a digit-mutation rate of one in a hundred. After, say, 79 copies, we see that the visible 0 has just changed to a 1. Now we uncover the rest of the digits. What is the likelihood that one of them has changed in the meantime? Since all the digits have the same mutation rate, then there is a nine in ten chance that one of the other digits has already changed from a 1 to a 0, and our mutated string still does not match the target of all 1's. In fact, only about one time out of ten will we uncover the string and find that no other digits have changed except the visible digit. Thus the effective mutation rate for transforming the string with nine matches out of ten to a string with ten matches out of ten will be only one tenth of the basic digit-mutation rate. If the string is a hundred long, the effective mutation rate will be one-hundredth the basic rate, and so on. (This is very similar to the problem of mutating a duplicate gene to a new selectable function before it suffers a degradative mutation, which has been investigated by Lynch and co-workers (2).)
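The covered-string thought experiment above can be simulated directly. The sketch below is my own illustration, not from the letter: it repeatedly "copies" a ten-digit string in which digit 0 is the lone mismatch, flipping each digit independently at an arbitrary small rate, and counts how often the mismatch corrects itself before any matching digit is lost. The answer should come out near the one-in-ten figure derived in the text:

```python
import random

def target_flips_first(n_digits=10, p=0.01, rng=random):
    """Copy a string whose digit 0 is the lone mismatch. On each copy,
    every digit flips independently with probability p. Return True if,
    on the first copy where anything flips, only the target (index 0)
    flipped -- i.e., no matching digit was lost first."""
    while True:
        flipped = [i for i in range(n_digits) if rng.random() < p]
        if flipped:
            return flipped == [0]

random.seed(0)
trials = 100_000
hits = sum(target_flips_first() for _ in range(trials))
print(hits / trials)   # close to 0.1: only about 1 time in 10 does the
                       # mismatch correct itself before a match is lost
```

With a hundred-digit string (`n_digits=100`) the same simulation gives a figure near one in a hundred, matching the text's scaling argument.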

So, despite their self-assured tone, in fact on this point Durrett and Schmidt are "simply wrong." And, as I write in my letter, since the gene for the chloroquine resistance protein has on the order of a thousand nucleotides, rather than just the ten of Durrett and Schmidt's postulated regulatory sequence, the effective rate for the second mutation is several orders of magnitude less than they thought. Thus with the, say, two orders of magnitude mistake here, the factor of 30 error for the initial mutation rate, and the four orders of magnitude for mistakenly using a neutral model instead of a deleterious model, Durrett and Schmidt's calculation is a cumulative seven and a half orders of magnitude off. Since they had pointed out that their calculation was about five million-fold (about six and a half orders of magnitude) lower than the empirical result I cited, when their errors are corrected the calculation agrees pretty well with the empirical data.
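The bookkeeping in the paragraph above can be tallied explicitly. This is only a sketch of the arithmetic; the three correction factors are the ones argued for in the text, not independently derived:

```python
from math import log10

dands_estimate = 1e12  # Durrett and Schmidt's figure: odds of ~1 in 10^12

corrections = {
    "use point mutation rate, not 10x site rate": 30,    # ~1.5 orders
    "first mutation deleterious, not neutral":   1e4,    # 4 orders
    "second mutation within a ~1000-nt gene":    1e2,    # ~2 orders
}

corrected = dands_estimate
for factor in corrections.values():
    corrected *= factor

shift = log10(corrected / dands_estimate)
print(f"corrected odds: about 1 in {corrected:.0e}")         # ~1 in 3e19
print(f"cumulative shift: {shift:.1f} orders of magnitude")  # ~7.5
```

The corrected figure of roughly 1 in 3 x 10^19 is what the text means by agreement with the 1 in 10^20 empirical value.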

An Irrelevant Example

Now I'd like to turn to a couple of other points in Durrett and Schmidt's reply that aren't mistakes with their model, but which do reflect conceptual errors. As I quote above, they state in their reply, "This conclusion is simply wrong since it assumes that there is only one individual in the population with the first mutation." I have shown above that, despite their assertion, my conclusion is right. But where do they get the idea that "it assumes that there is only one individual in the population with the first mutation"? I wrote no such thing in my letter about "one individual." Furthermore, I "assumed" nothing. I merely cited empirical results from the literature. The figure of 1 in 10^20 is a citation from the literature on chloroquine resistance of malaria. Unlike their model, it is not a calculation on my part.

Right after this, in their reply Durrett and Schmidt say that the "mistake" I made is a common one, and they go on to illustrate "my" mistake with an example about a lottery winner. Yet their own example shows they are seriously confused about what is going on. They write:

When Evelyn Adams won the New Jersey lottery on October 23, 1985, and again on February 13, 1986, newspapers quoted odds of 17.1 trillion to 1. That assumes that the winning person and the two lottery dates are specified in advance, but at any point in time there is a population of individuals who have won the lottery and have a chance to win again, and there are many possible pairs of dates on which this event can happen.... The probability that it happens in one lottery in 1 year is ~1 in 200.
No kidding. If one has millions of players, and any of the millions could win twice on any two dates, then the odds are certainly much better that somebody will win on some two dates than that Evelyn Adams would win on October 23, 1985, and February 13, 1986. But that has absolutely nothing to do with the question of changing a correct nucleotide to an incorrect one before changing an incorrect one to a correct one, which is the context in which this odd digression appears. What's more, it is not the type of situation that Durrett and Schmidt themselves modeled. They asked the question, given a particular ten-base-pair regulatory sequence, and a particular sequence that is matched in nine of ten sites to the regulatory sequence, how long will it take to mutate the particular regulatory sequence, destroying it, and then mutate the particular near-match sequence to a perfect-match sequence? What's even more, it is not the situation that pertains in chloroquine resistance in malaria. There, several particular amino acid residues in a particular protein (PfCRT) have to mutate to yield effective resistance. It seems to me that the lottery example must be a favorite of Durrett and Schmidt's, and that they were determined to use it whether it fit the situation or not.

Multiplying Resources

The final conceptual error that Durrett and Schmidt commit is the gratuitous multiplication of probabilistic resources. In their original paper they calculated that the appearance of a particular double mutation in humans would have an expected time of appearance of 216 million years, if one were considering a one kilobase region of the genome. Since the evolution of humans from other primates took much less time than that, Durrett and Schmidt observed that if the DNA "neighborhood" were a thousand times larger, then lots of correct regulatory sites would already be expected to be there. But, then, exactly what is the model? And if the relevant neighborhood is much larger, why did they model a smaller neighborhood? Is there some biological fact they neglected to cite that justified the thousand-fold expansion of what constitutes a "neighborhood," or were they just trying to squeeze their results post-hoc into what a priori was thought to be a reasonable time frame?

When I pointed this out in my letter, Durrett and Schmidt did not address the problem. Rather, they upped the stakes. They write in their reply, "there are at least 20,000 genes in the human genome and for each gene tens if not hundreds of pairs of mutations that can occur in each one." The implication is that there are very, very many ways to get two mutations. Well, if that were indeed the case, why did they model a situation where two particular mutations -- not just any two -- were needed? Why didn't they model the situation where any two mutations in any of 20,000 genes would suffice? In fact, since that would give a very much shorter time span, why did the journal Genetics and the reviewers of the paper let them get away with such a miscalculation?

The answer of course is that in almost any particular situation, almost all possible double mutations (and single mutations and triple mutations and so on) will be useless. Consider the chloroquine-resistance mutation in malaria. There are about 10^6 possible single amino acid mutations in malarial parasite proteins, and 10^12 possible double amino acid mutations (where the changes could be in any two proteins). Yet only a handful are known to be useful to the parasite in fending off the drug, and only one is very effective -- the multiple changes in PfCRT. It would be silly to think that just any two mutations would help. The vast majority are completely ineffective. Nonetheless, it is a common conceptual mistake to naively multiply postulated "helpful mutations" when the numbers initially show too few.
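The combinatorics behind the 10^6 and 10^12 figures above can be sketched in a couple of lines. The ~10^6 count of possible single amino-acid changes is the text's round number, not an independent estimate:

```python
from math import comb, log10

singles = 10**6              # possible single amino-acid changes (text's figure)
doubles = comb(singles, 2)   # unordered pairs of two distinct single changes

print(f"double mutations: ~10^{log10(doubles):.1f}")  # ~10^11.7, order 10^12
```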

A Very Important Point

Here's a final important point. Genetics is an excellent journal; its editors and reviewers are top notch; and Durrett and Schmidt themselves are fine researchers. Yet, as I show above, when simple mistakes in the application of their model to malaria are corrected, it agrees closely with empirical results reported from the field that I cited. This is very strong support that the central contention of The Edge of Evolution is correct: that it is an extremely difficult evolutionary task for multiple required mutations to occur through Darwinian means, especially if one of the mutations is deleterious. And, as I argue in the book, reasonable application of this point to the protein machinery of the cell makes it very unlikely that life developed through a Darwinian mechanism.

References:

(1) White, N. J. 2004. Antimalarial drug resistance. J. Clin. Invest. 113: 1084-1092.


(2) Lynch, M. and Conery, J.S. 2000. The evolutionary fate and consequences of duplicate genes. Science 290: 1151-1155.


Sniping from the dark?

The Evolutionary Argument from Ignorance

Cornelius Hunter 


Yesterday I looked at the enormous problems that the DNA, or genetic, code poses for evolutionary theory. Here, previously noted at Evolution News, is a paper that seems to have come to the same conclusion. The authors argue that the underlying patterns of the genetic code are not likely to be due to "chance coupled with presumable evolutionary pathways" (P-value < 10^-13), and conclude that they are "essentially irreducible to any natural origin."

A common response from evolutionists, when presented with evidence such as this, is that we still don't understand biology very well. This argument from ignorance goes all the way back to Darwin. He used it in Chapter 6 of the Origin to discard the problem of evolving the electric organs in fish, such as the electric eel (which isn't actually an eel). The Sage from Kent agreed that it is "impossible to conceive by what steps these wondrous organs" evolved, but that was OK, because "we do not even know of what use they are."

Setting aside the fact that Darwin's argument from ignorance was a non-scientific fallacy, it also was a setup for failure. For now, a century and a half later, we do know "what use they are." And it has just gotten worse for evolution.

It is another demonstration that arguments from ignorance, aside from being terrible arguments, are not good science. The truth is, when evolutionists today claim that the many problems with their chance theory are due to a lack of knowledge, they are throwing up a smoke screen.

Wednesday, 30 November 2016

A Whale of a problem for Darwinism II

Using I.D to disprove I.D.

"What about evolution is random and what is not?"
Robert Crowther

Here's another one for my "you can't make this stuff up" file. I kid you not, this is a news story about a new peer-reviewed paper in PLoS Biology by Brian Paegel and Gerald Joyce of The Scripps Research Institute which explains that (all emphasis from here on is mine)

they have produced a computer-controlled system that can drive the evolution of improved RNA enzymes.
I couldn't write a funnier script if I tried. Sadly, these guys just don't get the joke.

The evolution of molecules via scientific experiment is not new. The first RNA enzymes to be "evolved" in the lab were generated in the 1990s. But what is exciting about this work is that the process has been made automatic. Thus evolution is directed by a machine without requiring human intervention, other than providing the initial ingredients and switching the machine on.
But wait, it gets better.
Throughout the process, the evolution-machine can propagate the reaction itself, because whenever the enzyme population size reaches a predetermined level, the machine removes a fraction of the population and replaces the starting chemicals needed for the reaction to continue.
What? Predetermined? Predetermined by whom or by what? Oh, the evolution machine, which itself is a result of intelligent agency.
The authors sum it all up very nicely.


This beautifully illustrates what about evolution is random and what is not.

Missing links v. Darwin.

Billions of Missing Links: Hen's Eggs

Geoffrey Simmons 


Note: This is one of a series of posts excerpted from my book, Billions of Missing Links: A Rational Look at the Mysteries Evolution Can't Explain.

When it comes to citing examples of purposeful design, nearly every author likes to point out the hen's egg. It's really quite remarkable. Despite having a shell that is a mere 0.35 mm thick, hen's eggs don't break when a parent sits on them. According to Dr. Knut Schmidt-Nielsen,

A bird egg is a mechanical structure strong enough to hold a chick securely during development, yet weak enough to break out of. The shell must let oxygen in and carbon dioxide out, yet be sufficiently impermeable to water to keep the contents from drying out.
Under microscopy, one can see the shell is a foamlike structure that resists cracking. Gases and water pass through 10,000 pores that average 17 micrometers in diameter. Ultimately, 6 liters of oxygen will have been taken in and 4.5 liters of carbon dioxide given off. The yolk is its food. All life support systems are self-contained, like a space shuttle.
All hen's eggs are ready to hatch on the twenty-first day. Every day is precisely preprogrammed. The heart starts beating on the sixth day. On the nineteenth day the embryo uses its egg tooth to puncture the air sac (beneath the flat end) and then takes two days to crack through the shell.

Giving natural selection a hand?

The Rest of the Story -- Eugenics, Racism, Darwinism

Sarah Chaffee 



According to its most ardent proponents, a widespread embrace of evolutionary theory is a big win-win not only for science but for culture and ethics. Our recent report "Darwin's Corrosive Idea" handily dispels that rosy picture as it pertains to the present day. As for history, Jason Jones and John Zmirak writing at The Stream helpfully remind readers of the link between eugenics, racism, and Darwinism.

Their specific topic is Margaret Sanger and the documentary Maafa 21: Black Genocide. Here's what they say about Darwin and how his arguments were used to justify eugenics:

The eugenicists' arrogant certainty that, because they had inherited money and power, they were genetically superior to the rest of the human race, found in Charles Darwin's theories an ideal pretext and a program: to take the survival of the fittest and make it happen faster, by stopping the "unfit" from breeding. The goal, in Margaret Sanger's own words, was "More Children from the Fit, Fewer from the Unfit." Instead of seeing the poor as victims of injustice or targets for Christian charity, the materialism these elitists took from Darwin assured them that the poor were themselves the problem -- that they were inferior, deficient and dangerous down to the marrow of their bones.

The authors note that the eugenics movement itself was undergirded by racism. The video Maafa 21, they note, links the rise of eugenics to white anxiety about the "negro problem" following the end of the Civil War.

In his book Darwin Day in America, Center for Science & Culture associate director John West has written extensively about the social damage linked to Darwinism.

Jones and Zmirak bring up some harrowing examples, among them the observation that Sanger's friend Lothrop Stoddard was a leader in the Massachusetts Ku Klux Klan and wrote a book Hitler called his "bible." A speaker Sanger invited to a population conference, Eugen Fischer, had operated a concentration camp in Africa imprisoning natives. Jones and Zmirak note, "It was Fischer's book on eugenics, which Hitler had read in prison, that convinced Hitler of its central importance." For more historical background, read historian Richard Weikart's books including his most recent, Hitler's Religion.

They say that history is written by the victors. With evolutionary theory holding sway in the media and academia, it's little wonder we rarely hear about these connections and events.

Life's machine code v. Darwin.

My Dear Watson: Four Observations on the DNA Code and Evolution

Cornelius Hunter 


The DNA code is used in cells to translate a sequence of nucleotides into a sequence of amino acids, which then make up a protein. In the past fifty years we have learned four important things about the code:

1. The DNA code is universal. There are minor variations scattered about, but the same canonical code is found across the species.

2. The DNA code is special. The DNA code is not just some random, off-the-shelf code. It has unique properties that, for example, make the translation process more robust to mutations. The code has been called "one in a million," but it probably is even more special than that. One study found that the code optimizes "a combination of several different functions simultaneously."

3. Some of the special properties of the DNA code only rarely confer benefit. Many of the code's special properties deal with rare mutation events. If such properties could arise via random mutation in an individual organism, their benefit would not be common.

4. The DNA code's fitness landscape has dependencies on the DNA coding sequences and so favors stasis. Changes in the DNA code may well wreak havoc as the DNA coding sequences are suddenly not interpreted correctly. So the fitness landscape, at any given location in the code design space, is not only rugged but often is a local minimum, thus freezing evolution at that code.

Observation #1 above, according to evolutionary theory, means that the code is the ultimate homology and must have been present in the last universal common ancestor (LUCA). There was essentially zero evolution of the code allowed over the course of billions of years.

This code stasis can be understood, from an evolutionary perspective, using Observation #4. Given the many dependencies on the DNA coding sequences, the code can be understood to be at a local minimum and so impossible to evolve.

Hence Francis Crick's characterization, and subsequent promotion by later evolutionists, of the code as a "frozen accident." Somehow the code arose, but was then strongly maintained and unevolvable.

But then there is Observation #2. The code has been found to be not mundane, but special. This falsified the "frozen accident" characterization, as the code is clearly not an accident. It also caused a monumental problem. While evolutionists could understand Observation #1, the universality of the code, as a consequence of the code being at a fitness local minimum, Observation #2 tells us that the code would not have just luckily been constructed at its present design.

If evolution somehow created a code to begin with, it would be at some random starting point. Evolution would have no a priori knowledge of the fitness landscape. There is a large number of possible codes, so it would be incredibly lucky for evolution's starting point to be anywhere near the special, canonical code we observe today. There would be an enormous evolutionary distance to travel between an initial random starting point, and the code we observe.

And yet there is not even so much as a trace of such a monumental evolutionary process. This would be an incredible convergence. In biology, when we see convergence, we usually also see variety. The mammalian and cephalopod eyes are considered to be convergent, but they also have fundamental differences. And in other species, there are all kinds of different vision systems. The idea that the universal DNA code is the result of convergence would be very suspect. Why are there no other canonical codes found? Why are there not more variants of the code? To have that much evolutionary distance covered, and converge with that level of precision, would be very strange.

And of course, in addition to this strange absence of any evidence of such a monumental evolutionary process, there is the problem described above with evolving the code to begin with. The code's fitness landscape is rugged and loaded with many local minima. Making much progress at all in evolving the code would be difficult.

But then there is Observation #3. Not only do we not see traces of the required monumental process of evolving the code across a great distance, and not only would this process be almost immediately halted by the many local minima in the fitness landscape, but what fitness improvements could actually be realized would not likely be selected for because said improvements rarely actually confer their benefit.

While these problems are obviously daunting, we have so far taken yet another tremendous problem for granted: the creation of the initial code, as a starting point.

We have discussed above the many problems with evolving today's canonical code from some starting point, all the while allowing for such a starting point simply to magically appear. But that, alone, is a big problem for evolution. The evolution of any code, even a simple code, from no code, is a tremendous problem.

Finally, a possible explanation for these several and significant problems to the evolution of the DNA code is the hypothesis that the code did not actually evolve so much as construct. Just as the right sequence of amino acids will inevitably fold into a functional protein, so too perhaps the DNA code simply is the consequence of biochemical interactions and reactions. In this sense the code would not evolve from random mutations, but rather would be inevitable. In that case, there would be no lengthy evolutionary pathway to traverse.

Now I don't want to give the impression that this hypothesis is mature or fleshed out. It is extremely speculative. But there is another, more significant, problem with it: It is not evolution.

If true, this hypothesis would confirm design. In other words, a chemically determined pathway, which as such is written into the very fabric of matter and nature's laws, would not only be profound but teleological. The DNA code would be built into biochemistry.

And given Observation #2, it is a very special, unique, detailed code that would be built into biochemistry. It would not merely be a mundane code that happened to be enabled or determined by biochemistry, but essentially an optimized code. Long live Aristotle.

The problem is there simply is no free lunch. Evolutionists can try to avoid the science, but there it is.

Nature's world wide web v. Darwin.

Evolutionist Recommends "Listening to Other Arguments," Except When It Comes to Evolution
David Klinghoffer


We may be on the third wave of a scientific revolution in biology. It may be so big, the story "no doubt has Ernst Mayr hyperventilating in his grave," thinks evolutionary biologist Nora Besansky of the University of Notre Dame. Mayr influenced a generation of evolutionists. Is one of his core Darwinian concepts unraveling? In Science Magazine, Elizabeth Pennisi sets the stage:

Most of those who studied animals had instead bought into the argument by the famous mid-20th century evolutionary biologist Ernst Mayr that the formation of a new species requires reproductive isolation. Mayr and his contemporaries thought that the offspring of any hybrids would be less fit or even infertile, and would not persist. To be sure, captive animals could be interbred: Breeders crossed the African serval cat with domestic cats to produce the Savannah cat, and the Asian leopard cat with domestic breeds to produce the Bengal cat. There's even a "liger," the result of a zoo mating of a tiger and a lion. But like male mules, male ligers are sterile, supporting the notion that in nature, hybridization is mostly a dead end. [Emphasis added.]
Indeed, the biological concept of "species" practically requires reproductive isolation. Hybridization, while known since ancient civilizations bred mules, seems unnatural and rare. It played little role in classical Darwinian theory, which relies on emergent variation and selection for the origin of species. According to hybridization specialist Eugene M. McCarthy in "Darwin's Assessment of Hybridization," "Darwin did come to attribute more significance to hybridization in his later years," but it never gained significant traction in any edition of the Origin, his most widely read book. "Certainly such ideas were never canonized among the dogmas of neo-Darwinian theory."

For Darwin's branching tree-of-life diagram to work, innovations must be passed along in ancestor-descendant relationships, moving vertically up the branches over time by inheritance of chance mutations. Hybrids interfere with this picture by allowing branches to share genetic information horizontally all at once. And if the branches can re-join by back-crossing, the tree metaphor becomes more like a net. Pennisi understands the challenge to Darwinism in her title, "Shaking Up the Tree of Life," when she says, "Species were once thought to keep to themselves. Now, hybrids are turning up everywhere, challenging evolutionary theory."

The revolution has come in three waves. The first involved microbes, when horizontal gene transfer (HGT), sometimes called lateral gene transfer (LGT), was found to be common (see Denyse O'Leary's article last year, "Horizontal Gene Transfer: Sorry, Darwin, It's Not Your Evolution Any More"). HGT doesn't just complicate efforts to construct phylogenetic trees, she says; "because where HGT is in play, there just isn't a tree of life." In another Evolution News article, Paul Nelson cites Woese, Koonin and other evolutionists going out on a limb to dispute the existence of a universal tree of life -- at least when it comes to the origin of the three kingdoms of microbes.

The second wave involved plants. As far back as 1949, Pennisi says, it was a radical idea to suggest that plant species shared genes via hybridization. Botanists grew to accept the idea, but zoologists resisted it:

In 1949, botanist Edgar Anderson suggested that plants could take on genes from other species through hybridization and back crosses, where the hybrid mates with the parent species. He based this then-radical proposal on genetic crosses and morphological studies of flowering plants and ferns suggesting mixtures of genes from different species in individual genomes. Five years later, with fellow botanist G. Ledyard Stebbins, he argued such gene exchange could lead to new plant species. Their ideas quickly hit home with other plant researchers, but not with zoologists. "There was a very different conventional view in botany than in zoology," Rieseberg says.
Now, the third wave is encompassing the rest of biology: animals. (This wave hits close to home, involving as it does the human lineage.) Starting in the 1990s, zoologists began seeing hybridization as more than a breeder's trick. Pennisi gives three examples of the growing realization that natural hybridization contributes to speciation in animals, too.

Darwin's finches: Peter and Rosemary Grant witnessed a hybrid finch establishing its own population, with its own phenotype, in its own ecological niche. Pennisi tells the story of "Big Bird" in a separate Science Magazine article.

Butterflies: James Mallet's work on Ecuadorian butterflies a decade ago, building on earlier work by Larry Gilbert, proved that more than 30% of Heliconius species formed hybrids, "swapping wing patterns and sometimes generating entirely new ones."

Neandertals: "In 2010, a comparison between the genomes of a Neandertal and people today settled what anthropologists and geneticists had debated for decades: Our ancestors had indeed mated with their archaic cousins, producing hybrid children," Pennisi says in the lead story. "They, in turn, had mated with other modern humans, leaving their distant descendants -- us -- with a permanent Neandertal legacy. Not long afterward, DNA from another archaic human population, the Denisovans, also showed up in the modern human genome, telling a similar story."

Finding hybridization in the human lineage "created a shock wave," Pennisi says. She quotes Malcolm Arnold, whose imagination was captured by this important but long overlooked aspect of inheritance: "That genomic information overturned the assumption that everyone had." Pennisi helps us consider the implications for evolutionary theory:

The techniques that revealed the Neandertal and Denisovan legacy in our own genome are now making it possible to peer into the genomic histories of many organisms to check for interbreeding. The result: "Almost every genome study where people use sensitive techniques for detecting hybridization, we find [it] -- we are finding hybridization events where no one expected them," says Loren Rieseberg, an evolutionary biologist at the University of British Columbia in Vancouver, Canada.
All these data belie the common idea that animal species can't hybridize or, if they do, will produce inferior or infertile offspring -- think mules. Such reproductive isolation is part of the classic definition of a species. But many animals, it is now clear, violate that rule: Not only do they mate with related species, but hybrid descendants are fertile enough to contribute DNA back to a parental species -- a process called introgression.

The revolution was slow in coming till rapid genomic sequencing techniques became available. Now, with a plenitude of sequences published, what biologists had come to accept in microbes is forcing them to reconsider what they thought they knew about evolution for the entire tree of life. Pennisi all but announces the revolution:

Biologists long ago accepted that microbes can swap DNA, and they are now coming to terms with rampant gene flow among more complex creatures. "A large percent of the genome is free to move around," notes Chris Jiggins, an evolutionary biologist at the University of Cambridge in the United Kingdom. This "really challenges our concept of what a species is." As a result, where biologists once envisioned a tree of life, its branches forever distinct, many now see an interconnected web.
Hybridization, says Mallet, "has become big news and there's no escaping it."

The tree metaphor is being replaced with a net or web. That's the point where Pennisi describes Ernst Mayr, Darwin's paramount tree gardener, hyperventilating in his grave. In a new world of rampant hybridization and introgression, what is to become of neo-Darwinism? Pennisi gives a glimpse of the implications, hinting at a revolutionary new view of the origin of species. Putting a happy face on the revolution, she ends this way:

The Grants believe that complete reproductive isolation is outdated as a definition of a species. They have speculated that when a species is no longer capable of exchanging genes with any other species, it loses evolutionary potential and may become more prone to extinction.
This idea has yet to be proven, and even Mallet concedes that biologists don't fully understand how hybridization and introgression drive evolution -- or how to reconcile these processes with the traditional picture of species diversifying and diverging over time. Yet for him and for others, these are heady times. "It's the world of hybrids," Rieseberg says. "And that's wonderful."

It will certainly be wonderful for intelligent design theorists, but it's hard to see how Darwinians will cope with the revolution. Why? Because HGT and hybridization involve the shuffling of pre-existing genetic information, not the origin of new genetic information. Information isn't emerging by accidental mutations; it is being shared in a biological World Wide Web! Pennisi suggests this may be advantageous:

As examples of hybridization have multiplied, so has evidence that, at least in nature, swapping DNA has its advantages. When one toxic butterfly species acquires a gene for warning coloration from another toxic species, both species benefit, as a single encounter with either species is now enough to teach predators to avoid both. Among canids, interbreeding with domestic dogs has given wolves in North America a variant of the gene for an immune protein called β-defensin. The variant gives wolf-dog hybrids and their descendants a distinctive black pelt and better resistance to canine distemper, Wayne says. In Asia, wolf-dog matings may have helped Tibetan mastiffs cope with the thin air at high altitudes. And interspecies gene flow has apparently allowed insecticide resistance to spread among malaria-carrying mosquitoes and the black flies that transmit river blindness.
In each case, the beneficial genetic changes unfolded faster than they would have by the normal process of mutation, which often changes DNA just one base at a time. Given the ability of hybridization and introgression to speed adaptive changes, says Baird, "closing that door [with reproductive isolation] is not necessarily going to be a good thing for your long-term survival."

Think of the possibilities for design theorists. We can see strategies for robustness with information sharing, allowing animals to survive environmental perturbations or recharge damaged genomes, for instance. Indeed, all kinds of "wonderful" possibilities open up for exploring design when information sharing is available in the explanatory toolkit. New vistas for explaining symbioses, ecosystems, and variability emerge. Could some apparent "innovations" be loans from other species? How can the information-sharing biosphere inform practical applications for medicine?


In the wonderful new "world of hybrids," ID advocates can take the lead, breathing new life into biological explanations, while the neo-Darwinists hyperventilate to delay the inevitable.

Monday, 28 November 2016

On Russia's war on religious liberty III: The Watchtower Society's commentary.

International Experts Discredit Russia’s “Expert Analysis” in Identifying “Extremism”

This is Part 3 of a three-part series based on exclusive interviews with noted scholars of religion, politics, and sociology, as well as experts in Soviet and post-Soviet studies.

ST. PETERSBURG, Russia—Jehovah’s Witnesses and their literature have been subject to court-appointed analysis by the Center for Sociocultural Expert Studies in Moscow. One study was completed in August 2015 and was used as the basis for an ongoing case against the Witnesses’ New World Translation of the Holy Scriptures, while another study is pending.
  
Highly regarded experts inside and outside of Russia debunk these studies. One such scholar, Dr. Mark R. Elliott, founding editor of the East-West Church and Ministry Report, observes: “State-approved ‘expert’ witnesses on religious questions, including those who disapproved Jehovah’s Witnesses’ scriptures, typically lack expertise and credibility as they issue ill-founded ‘opinions’ on matters of faith.”
   
  Specifically addressing the Center for Sociocultural Expert Studies, Dr. Roman Lunkin, head of the Center for Religion and Society at the Institute of Europe, Russian Academy of Sciences in Moscow, notes that “not one of the experts has a degree in religious studies and they are not even familiar with the writings of Jehovah’s Witnesses. Their analysis included quotes that were taken from information provided by the Irenaeus of Lyon Centre, a radical Orthodox anti-cult organization known for opposing Jehovah’s Witnesses, as well as many other religions and denominations.”

  “Unfortunately, I would have to agree with Dr. Lunkin,” states Dr. Ekaterina Elbakyan, professor of sociology and management of social processes at the Moscow Academy of Labor and Social Relations. “It is true that in Russia today religious expert studies are often performed by people who are not specialists, and are made-to-order, so to speak, where an expert is not free to state his true findings.”

 Dr. Elbakyan, who participated in two trials in Taganrog and was present as a specialist-expert in the appellate court in Rostov-on-Don, further explains: “I saw with my own eyes the video material on the basis of which Jehovah’s Witnesses were charged with extremism. Twice I gave a detailed commentary in court explaining that this was a typical Christian religious service and had nothing to do with extremism, but the court did not take the expert opinion into consideration. It is impossible not to see this as a clear and systematic trend toward religious discrimination. As long as this trend continues, there are, of course, no guarantees that believers will cease to be classified as ‘extremists’ because of their beliefs.”

  International: David A. Semonian, Office of Public Information, 1-718-560-5000

Russia: Yaroslav Sivulskiy, 7-812-702-2691

Sub-optimal design or sub-optimal analysis?

Shoddy Engineering or Intelligent Design? Case of the Mouse's Eye

Richard Sternberg


We often hear from Darwinians that the biological world is replete with examples of shoddy engineering, or, as they prefer to put it, bad design. One such case of really poor construction is the inverted retina of the vertebrate eye. As we all know, the retina of our eyes is configured all wrong because the cells that gather photons, the rod photoreceptors, are behind two other tissue layers. Light first strikes the ganglion cells and then passes by or through the bipolar cells before reaching the rod photoreceptors. Surely, a child could have arranged the system better -- so they tell us.

The problem with this story of supposed unintelligent design is that it is long on anthropomorphisms and short on evidence. Consider nocturnal mammals. Night vision for, say, a mouse is no small feat. Light intensities during night can be a million times less than those of the day, so the rod cells must be optimized -- yes, optimized -- to capture even the few stray photons that strike them. Given the backwards organization of the mouse's retina, how is this scavenging of light accomplished? Part of the solution is that the ganglion and bipolar cell layers are thinner in mammals that are nocturnal. But other optimizations must also occur. Enter the cell nucleus and "junk" DNA.

Only around 1.5 percent of mammalian DNA encodes proteins. Since it has become lore to equate protein-coding regions of the genome with "genes" and "information," the remaining approximately 98.5 percent of DNA has been dismissed as junk. Yet, for what is purported to be mere genetic gibberish, it is strikingly ordered along the length of the chromosome. Like the barcodes on consumer items we are all familiar with, each chromosome has a particular banding pattern. This pattern reflects how different types of DNA sequences are linearly distributed. The "core" of a mammalian chromosome, the centromere, and the genomic segments that frame it largely consist of long tracts of species-specific repetitive elements -- these areas give rise to "C-bands" after a chemical stain has been applied. Then, alternating along the chromosome arms are two other kinds of bands that appear after different staining procedures. One kind, called "R-bands," is rich in protein-coding genes and in a particular class of retrotransposon called SINEs (Short Interspersed Nuclear Elements); SINE sequence families are restricted to certain taxonomic groups. The other kind, termed "G-bands," has a high concentration of another class of retrotransposon called LINEs (Long Interspersed Nuclear Elements), which can likewise be used to distinguish between species. Finally, the ends of the chromosome, the telomeres, are composed of a completely different set of repetitive DNA sequences.

In general, C-bands and G-bands are complexed with proteins and RNAs to give a more compact organization called heterochromatin, whereas R-bands have a more open conformation referred to as euchromatin.

Why bother with such details? Well, each of these chromosome bands has a preferred location in the cell nucleus. Open any good textbook on mammalian anatomy and you will note that cell types can often be distinguished by the shape and size of the nucleus, as well as the positions of euchromatin and heterochromatin in that organelle. Nevertheless, most cell nuclei follow a general rule where euchromatin is located in the interior, in various compartments that are dense with transcription factories, RNA processing machinery, and many other components. Heterochromatin, on the other hand, is found mainly around the periphery of the nucleus. A striking exception to this principle is found in the nuclei of rod cells in nocturnal mammals.

Reporting in the journal Cell, Irina Solovei and coworkers have just discovered that, in contrast to the nucleus organization seen in ganglion and bipolar cells of the retina, a remarkable inversion of chromosome band localities occurs in the rod photoreceptors of mammals with night vision (Solovei I, Kreysing M, Lanctôt C, Kösem S, Peichl L, Cremer T, Guck J, Joffe B. 2009. "Nuclear Architecture of Rod Photoreceptor Cells Adapts to Vision in Mammalian Evolution." Cell 137(2): 356-368). First, the C-bands of all the chromosomes including the centromere coalesce in the center of the nucleus to produce a dense chromocenter. Keep in mind that the DNA backbone of this chromocenter in different mammals is repetitive and highly species-specific. Second, a shell of LINE-rich G-band sequences surrounds the C-bands. Finally, the R-bands including all examined protein-coding genes are placed next to the nuclear envelope. The nucleus of this cell type is also smaller so as to make the pattern more compact. This ordered movement of billions of basepairs according to their "barcode status" begins in the rod photoreceptor cells at birth, at least in the mouse, and continues for weeks and months.

Why the elaborate repositioning of so much "junk" DNA in the rod cells of nocturnal mammals? The answer is optics. A central cluster of chromocenters surrounded by a layer of LINE-dense heterochromatin enables the nucleus to act as a converging lens for photons, so that they can pass without hindrance to the rod outer segments that sense light. In other words, the genome regions with the highest refractive index -- undoubtedly enhanced by the proteins bound to the repetitive DNA -- are concentrated in the interior, followed by the sequences with the next highest level of refractivity, to minimize the scattering of light. The nuclear genome is thus transformed into an optical device designed to assist in the capture of photons. This chromatin-based convex (focusing) lens is so well constructed that it still works even when lattices of rod cells are made to be disordered. Normal cell nuclei, by contrast, actually scatter light.

So the next time someone tells you that it "strains credulity" to think that more than a few pieces of "junk DNA" could be functional in the cell -- that the data only point to the lack of design and suboptimality -- remind them of the rod cell nuclei of the humble mouse.

Trying to reduce the irreducible?

Refuting Behe's Critics, Meyer Gives Four Reasons the Flagellum Predates the Type III Secretory System

David Klinghoffer 




Michael Behe's signature argument in Darwin's Black Box would be seriously bruised if it turned out the bacterial flagellar motor had a simpler evolutionary antecedent. Critics of intelligent design thought they had identified such a precursor in the form of the Type III Secretory System, found in some bacteria.

Behe and others have since shown why it's far likelier that the flagellum is the precursor, thus leaving Dr. Behe's argument intact. In response, the critics either simply repeat their claim as if it hadn't been refuted, or they go silent -- an implicit admission they were wrong, and Behe was right.

How exactly do we know the flagellum came first? In a 12-minute video discussion, Stephen Meyer explains that we know this for four good and independent reasons. Watch for yourself, and if you're still not convinced, let me know why not. (Reach me by clicking on the orange EMAIL US button at the top of this page.)

Mike Behe's case for ID from irreducible complexity has stood the test of fire by scientists and others whose picture of reality depends on denying that biology bears evidence of design. We are celebrating the 20th anniversary of Darwin's Black Box with a new hour-long documentary written and directed by John West, Revolutionary: Michael Behe and the Mystery of Molecular Machines. Get your copy of Revolutionary, on DVD or Blu-ray, today.

Crossing the floor?

Peppered Moth: How Evolution's Poster Child Became the Rebuttal

Cornelius Hunter


It has been called one of the best examples of evolution observed in the wild -- light colored peppered moths (Biston betularia) became dark colored in response to 19th century industrial pollution darkening the birch trees in their environment. Evolving a darker color helped camouflage the moths, and keep them hidden from predatory birds. And more recently, air pollution reductions lightened the environment and with it, the moths also began to revert to their lighter color.

Proof of evolution, case closed, right? From popular presentations and museum exhibits, to textbooks and scientific papers, evolutionists have relentlessly pounded home the peppered moth as an undeniable confirmation of Darwin's theory in action. There's only one problem: All of this ignores the science.

There are two main problems with the peppered moth story. First, a change in color is hardly a pathway to the kinds of massive biological change evolution requires. It is not as though a change in peppered moth coloration is any kind of evidence for how the moths themselves evolved, or how any other species, for that matter, could have evolved.

In fact changing the color of a moth not only fails to show how species could evolve, it also fails to show how any biological design could evolve. The peppered moth case doesn't show how metabolism, the central nervous system, bones, red blood cells, or any other biological wonder could have arisen by evolution's random mutations coupled with natural selection.

The moths were already there. Their wings were already there. Different colors were already there. The changing of color in moth populations, while certainly a good thing for the moths, is hardly an example of evolution.

Second, research strongly suggests that the cause of the darkening, at the molecular level, is an enormous genetic insertion. In other words, rather than one nucleotide in a gene mutating to one of the other three nucleotides, as you learned in your high school biology class, what has been found is an insertion of a stretch of more than 20,000 nucleotides. That long inserted segment consists of a shorter segment (about 9,000 nucleotides) repeated about two and one-third times.
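As a quick arithmetic check on those figures (a sketch added here, using only the approximate numbers quoted above), a roughly 9,000-nucleotide segment repeated about two and one-third times does indeed come to more than 20,000 nucleotides:

```python
# Rough check of the quoted insertion size (approximate figures from the text).
repeat_len = 9_000            # length of the repeated segment, in nucleotides
copies = 2 + 1 / 3            # "repeated about two and one-third times"
insertion_len = repeat_len * copies
print(round(insertion_len))   # -> 21000, consistent with "more than 20,000"
```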

Also, the insertion point is not in a DNA coding sequence but in an intervening region (an intron) -- a class of sequence that in the past has been dismissed as "junk DNA."

This observed mutation (the insertion of a long stretch of DNA into an intron) is much more complicated than a single point mutation. First, there is no change in the gene's protein product. Yet mutating the protein sequence was the whole idea behind evolution: DNA mutations that change a protein can produce a phenotype change with a fitness improvement, which would then be subject to natural selection.

That is not what we are seeing in the much-celebrated peppered moth example. The DNA mutation is much more complicated (~20,000 nucleotides inserted), and the fact that it was inserted into an intron suggests that additional molecular and cellular mechanisms are required for the coloration change to occur.

None of this fits evolutionary theory.

For example, evolutionary theory requires that the needed random DNA mutational change be reasonably likely to occur. Given the moth's effective population size, its generation time, and the complexity of the mutation, the needed mutation is not likely to occur. Evolution would have to be inserting segments of DNA with (i) different sequences at (ii) different locations within the moth genome. This is an enormous space of mutational possibilities to search through.

It doesn't add up. Evolution does not have the resources in terms of time and effective population size to come anywhere close to searching this astronomical mutational space. It's not going to happen.
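To see why that space is described as astronomical, here is a toy count (an illustrative simplification added here, not from the article; the genome size is a hypothetical round figure, and real insertions do not arise as independent random draws): the number of distinct sequence-plus-location possibilities for an insertion of length L into a genome with G candidate sites grows as G × 4^L.

```python
# Toy illustration (an added simplification): counting distinct
# (sequence, location) possibilities for a random insertion.
# Assumes any of the 4 bases at each of L positions, landing at any
# of G genome sites; G is a hypothetical round figure for a moth genome.
import math

L = 20_000                # insertion length from the moth example
G = 400_000_000           # assumed genome size in base pairs (hypothetical)
log10_space = L * math.log10(4) + math.log10(G)
print(f"~10^{log10_space:.0f} possible (sequence, site) insertions")
# prints: ~10^12050 possible (sequence, site) insertions
```

Even granting generous shortcuts (most of those sequences being equivalent, insertion hotspots, and so on), a space of this order of magnitude dwarfs any realistic number of moth generations, which is the article's point.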

A much more likely explanation, and one that has been found to be true in so many other cases of adaptation (in spite of evolutionary pushback), is that the peppered moth coloration change was directed. The environmental change and challenge somehow caused the peppered moth to modify its color. This suggests there are preprogrammed, directed adaptation mechanisms already in place, ready to respond to future, potential environmental changes that might never occur.

Far from being evidence for evolution, this is evidence against evolution.

So there are at least two major problems with what is celebrated as key evidence for evolution in action. First, it comes nowhere close to the type of change evolution needs; second, the details of the change demonstrate that it is not evolutionary to begin with.

Saturday, 26 November 2016

The Watchtower Society's commentary on the passover.

PASSOVER

Passover (Heb., peʹsach; Gr., paʹskha) was instituted the evening preceding the Exodus from Egypt. The first Passover was observed about the time of full moon, on the 14th day of Abib (later called Nisan) in the year 1513 B.C.E. This was thereafter to be celebrated annually. (Ex 12:17-20, 24-27) Abib (Nisan) falls within the months March-April of the Gregorian calendar. Passover was followed by seven days of the Festival of Unfermented Cakes, Nisan 15-21. Passover commemorates the deliverance of the Israelites from Egypt and the ‘passing over’ of their firstborn when Jehovah destroyed the firstborn of Egypt. Seasonally, it fell at the beginning of the barley harvest.—Ex 12:14, 24-47; Le 23:10.

Passover was a memorial celebration; therefore the Scriptural command was: “And it must occur that when your sons say to you, ‘What does this service mean to you?’ then you must say, ‘It is the sacrifice of the passover to Jehovah, who passed over the houses of the sons of Israel in Egypt when he plagued the Egyptians, but he delivered our houses.’”—Ex 12:26, 27.

Since the Jews reckoned the day as starting after sundown and ending the next day at sundown, Nisan 14 would begin after sundown. It would be in the evening after Nisan 13 concluded that the Passover would be observed. Since the Bible definitely states that Christ is the Passover sacrifice (1Co 5:7) and that he observed the Passover meal the evening before he was put to death, the date of his death would be Nisan 14, not Nisan 15, in order to fulfill accurately the time feature of the type, or shadow, provided in the Law.—Heb 10:1.

Laws Governing Its Observance. Each household was to choose a male sheep or goat that was sound and a year old. It was taken into the house on the 10th day of the month Abib and kept until the 14th, and then it was slaughtered and its blood was splashed with a bunch of hyssop on the doorposts and the upper part of the doorway of the dwelling in which they were to eat it (not on the threshold where the blood would be trampled on).

The lamb (or goat) was slaughtered, skinned, its interior parts cleansed and replaced, and it was roasted whole, well-done, with no bones broken. (2Ch 35:11; Nu 9:12) If the household was too small to consume the whole animal, then it was to be shared with a neighbor household and eaten that same night. Anything left over was to be burned before morning. (Ex 12:10; 34:25) It was eaten with unfermented cakes, “the bread of affliction,” and with bitter greens, for their life had been bitter under slavery.—Ex 1:14; 12:1-11, 29, 34; De 16:3.

What is meant by the expression “between the two evenings”?

The Israelites measured their day from sundown to sundown. So Passover day would begin at sundown at the end of the 13th day of Abib (Nisan). The animal was to be slaughtered “between the two evenings.” (Ex 12:6) There are differences of opinion as to the exact time meant. According to some scholars, as well as the Karaite Jews and Samaritans, this is the time between sunset and deep twilight. On the other hand, the Pharisees and the Rabbinists considered the first evening to be when the sun began to descend and the second evening to be the real sunset. Due to this latter view the rabbis hold that the lamb was slaughtered in the latter part of the 14th, not at its start, and therefore that the Passover meal was actually eaten on Nisan 15.

On this point Professors Keil and Delitzsch say: “Different opinions have prevailed among the Jews from a very early date as to the precise time intended. Aben Ezra agrees with the Caraites and Samaritans in taking the first evening to be the time when the sun sinks below the horizon, and the second the time of total darkness; in which case, ‘between the two evenings’ would be from 6 o’clock to 7.20. . . . According to the rabbinical idea, the time when the sun began to descend, viz. from 3 to 5 o’clock, was the first evening, and sunset the second; so that ‘between the two evenings’ was from 3 to 6 o’clock. Modern expositors have very properly decided in favour of the view held by Aben Ezra and the custom adopted by the Caraites and Samaritans.”—Commentary on the Old Testament, 1973, Vol. I, The Second Book of Moses, p. 12; see DAY.

From the foregoing, and particularly in view of such texts as Exodus 12:17, 18, Leviticus 23:5-7, and Deuteronomy 16:6, 7, the weight of evidence points to the application of the expression “between the two evenings” to the time between sunset and dark. This would mean that the Passover meal was eaten well after sundown on Nisan 14, for it took considerable time to slaughter, skin, and roast the animal thoroughly. Deuteronomy 16:6 commands: “You should sacrifice the passover in the evening as soon as the sun sets.” Jesus and his apostles observed the Passover meal “after evening had fallen.” (Mr 14:17; Mt 26:20) Judas went out immediately after the Passover observance, “And it was night.” (Joh 13:30) When Jesus observed the Passover with his 12 apostles, there must have been no little conversation; then, too, some time would have been occupied by Jesus in washing the apostles’ feet. (Joh 13:2-5) Hence, the institution of the Lord’s Evening Meal certainly took place quite late in the evening.—See LORD’S EVENING MEAL.

At the Passover in Egypt, the head of the family was responsible for the slaying of the lamb (or goat) at each home, and all were to stay inside the house to avoid being slain by the angel. The partakers ate in a standing position, their hips girded, staff in hand, sandals on so as to be ready for a long journey over rough ground (whereas they often did their daily work barefoot). At midnight all the firstborn of the Egyptians were slain, but the angel passed over the houses on which the blood had been spattered. (Ex 12:11, 23) Every Egyptian household in which there was a firstborn male was affected, from the house of Pharaoh himself to the firstborn of the prisoner. It was not the head of the house, even though he may have been a firstborn, but was any male firstborn in the household under the head, as well as the male firstborn of animals, that was slain.—Ex 12:29, 30; see FIRSTBORN, FIRSTLING.

The Ten Plagues upon Egypt all proved to be a judgment against the gods of Egypt, especially the tenth, the death of the firstborn. (Ex 12:12) The ram (male sheep) was sacred to the god Ra, so splashing the blood of the Passover lamb on the doorways would be blasphemy in the eyes of the Egyptians. Also, the bull was sacred, and the destruction of the firstborn of the bulls would be a blow to the god Osiris. Pharaoh himself was venerated as a son of Ra. The death of Pharaoh's own firstborn would thus show the impotence of both Ra and Pharaoh.

In the Wilderness and the Promised Land. Only one Passover celebration in the wilderness is mentioned. (Nu 9:1-14) The keeping of the Passover during the wilderness journey likely was limited, for two reasons: (1) Jehovah’s original instructions were that it must be kept when they reached the Promised Land. (Ex 12:25; 13:5) (2) Those born in the wilderness had not been circumcised (Jos 5:5), whereas all male partakers of Passover had to be circumcised.—Ex 12:45-49.

Record of Passovers Observed. The Hebrew Scriptures give direct accounts of the Passover (1) in Egypt (Ex 12); (2) in the wilderness at Sinai, Nisan 14, 1512 B.C.E. (Nu 9); (3) when they reached the Promised Land, at Gilgal and after the circumcision of the males, 1473 B.C.E. (Jos 5); (4) at the time that Hezekiah restored true worship (2Ch 30); (5) the Passover of Josiah (2Ch 35); and (6) the celebration by Israel after the return from Babylonian exile (Ezr 6). (Also, mention is made of Passovers held in Samuel’s day and during the days of the kings, at 2Ch 35:18.) After the Israelites were settled in the land, the Passover festival was observed “in the place that Jehovah will choose to have his name reside,” instead of in each home or in the various cities. In time, the chosen place came to be Jerusalem.—De 16:1-8.

Accretions. After Israel had settled in the Promised Land, certain changes were made and various accretions came about in observing the Passover. They no longer partook of the feast in a standing position, or equipped for a journey, for they were then in the land that God had given them. The first-century celebrants customarily ate it while lying on their left side, with the head resting on the left hand. This explains how one of Jesus’ disciples could be “reclining in front of Jesus’ bosom.” (Joh 13:23) Wine was not used at the Passover in Egypt nor was there any command given by Jehovah for its use with the festival. This practice was introduced later on. Jesus did not condemn the use of wine with the meal, but he drank wine with his apostles and afterward offered a cup for them to drink as he introduced the Lord’s Evening Meal, the Memorial.—Lu 22:15-18, 20.

According to traditional Jewish sources, red wine was used and four cups were handed around, although the service was not restricted to four cups. Psalms 113 to 118 were sung during the meal, concluding with Psalm 118. It is likely that it was one of these psalms that Jesus and his apostles sang in concluding the Lord’s Evening Meal.—Mt 26:30.

Customs at Passover Time. Great preparations were made in Jerusalem when the festival was due, as it was a requirement of the Law that every male Israelite and every male of the circumcised alien residents observe the Passover. (Nu 9:9-14) This meant that vast numbers would be making the journey to the city for some days in advance. They would come before the Passover in order to cleanse themselves ceremonially. (Joh 11:55) It is said that men were sent out about a month early to prepare the bridges and put the roads in good order for the convenience of the pilgrims. Since contact with a dead body rendered a person unclean, special precautions were taken to protect the traveler. As it was a practice to bury persons in the open field if they died there, the graves were made conspicuous by being whitened a month ahead. (The Temple, by A. Edersheim, 1874, pp. 184, 185) This supplies background for Jesus’ words to the scribes and Pharisees, that they resembled “whitewashed graves.”—Mt 23:27.

Accommodations were made available in the homes for those coming to Jerusalem for Passover observance. In an Oriental home all the rooms could be slept in, and several persons could be accommodated in one room. Also, the flat roof of the house could be used. Added to this is the fact that numbers of the celebrants obtained accommodations outside the city walls, especially at Bethphage and Bethany, two villages on the slopes of the Mount of Olives.—Mr 11:1; 14:3.

Questions as to Time Order. It was a question of defilement that gave rise to the words: “They themselves did not enter into the governor’s palace, that they might not get defiled but might eat the passover.” (Joh 18:28) These Jews considered it a defilement to enter into a Gentile dwelling. (Ac 10:28) This statement was made, however, “early in the day,” hence after the Passover meal had taken place. It is to be noted that at this time the entire period, including Passover day and the Festival of Unfermented Cakes that followed, was at times referred to as “Passover.” In the light of this fact, Alfred Edersheim offers the following explanation: A voluntary peace offering was made on Passover and another, a compulsory one, on the next day, Nisan 15, the first day of the Festival of Unfermented Cakes. It was this second offering that the Jews were afraid they might not be able to eat if they contracted defilement in the judgment hall of Pilate.—The Temple, 1874, pp. 186, 187.

“The first day of the unfermented cakes.” A question also arises in connection with the statement at Matthew 26:17: “On the first day of the unfermented cakes the disciples came up to Jesus, saying: ‘Where do you want us to prepare for you to eat the passover?’”

The expression “the first day” here could be rendered “the day before.” Concerning the use of the Greek word here translated “first,” a footnote on Matthew 26:17 in the New World Translation says: “Or, ‘On the day before.’ This rendering of the Gr. word [proʹtos] followed by the genitive case of the next word agrees with the sense and rendering of a like construction in Joh 1:15, 30, namely, ‘he existed before [proʹtos] me.’” According to Liddell and Scott’s Greek-English Lexicon, “[proʹtos] is sts. [sometimes] used where we should expect [proʹte·ros (meaning ‘former, earlier’)].” (Revised by H. Jones, Oxford, 1968, p. 1535) At this time, Passover day had come to be generally considered as the first day of the Festival of Unfermented Cakes. So, then, the original Greek, harmonized with Jewish custom, allows for the question to have been asked of Jesus on the day before Passover.

“Preparation.” At John 19:14, the apostle John, in the midst of his description of the final part of Jesus’ trial before Pilate, says: “Now it was preparation of the passover; it was about the sixth hour [of the daytime, between 11:00 a.m. and noon].” This, of course, was after the time of the Passover meal, which had been eaten the night before. Similar expressions are found at verses 31 and 42. Here the Greek word pa·ra·skeu·eʹ is translated “preparation.” This word seems to mark, not the day preceding Nisan 14, but the day preceding the weekly Sabbath, which, in this instance, was “a great one,” namely, not only a Sabbath by virtue of being Nisan 15, the first day of the actual Festival of Unfermented Cakes, but also a weekly Sabbath. This is understandable, since, as already stated, “Passover” was sometimes used to refer to the entire festival.—Joh 19:31; see PREPARATION.

Prophetic Significance. The apostle Paul, in urging Christians to live clean lives, attributes pictorial significance to the Passover. He says: “For, indeed, Christ our passover has been sacrificed.” (1Co 5:7) Here he likens Christ Jesus to the Passover lamb. John the Baptizer pointed to Jesus, saying: “See, the Lamb of God that takes away the sin of the world!” (Joh 1:29) John may have had in mind the Passover lamb, or he could have been thinking of the male sheep that Abraham offered up instead of his own son Isaac or of the male lamb that was offered up upon God’s altar at Jerusalem each morning and evening.—Ge 22:13; Ex 29:38-42.

Certain features of the Passover observance were fulfilled by Jesus. One fulfillment lies in the fact that the blood on the houses in Egypt delivered the firstborn from destruction at the hands of the destroying angel. Paul speaks of anointed Christians as the congregation of the firstborn (Heb 12:23), and of Christ as their deliverer through his blood. (1Th 1:10; Eph 1:7) No bones were to be broken in the Passover lamb. It had been prophesied that none of Jesus’ bones would be broken, and this was fulfilled at his death. (Ps 34:20; Joh 19:36) Thus the Passover kept by the Jews for centuries was one of those things in which the Law provided a shadow of the things to come and pointed to Jesus Christ, “the Lamb of God.”—Heb 10:1; Joh 1:29.