
Thursday, 15 October 2015

Rumors of wars II

Daniel Ch.11 " 1“In the first year of Darius the Mede, I arose to be an encouragement and a protection for him. 2“And now I will tell you the truth. Behold, three more kings are going to arise in Persia. Then a fourth will gain far more riches than all of them; as soon as he becomes strong through his riches, he will arouse the whole empire against the realm of Greece. 3“And a mighty king will arise, and he will rule with great authority and do as he pleases. 4“But as soon as he has arisen, his kingdom will be broken up and parceled out toward the four points of the compass, though not to his own descendants, nor according to his authority which he wielded, for his sovereignty will be uprooted and given to others besides them.
      5“Then the king of the South will grow strong, along with one of his princes who will gain ascendancy over him and obtain dominion; his domain will be a great dominion indeed. 6“After some years they will form an alliance, and the daughter of the king of the South will come to the king of the North to carry out a peaceful arrangement. But she will not retain her position of power, nor will he remain with his power, but she will be given up, along with those who brought her in and the one who sired her as well as he who supported her in those times. 7“But one of the descendants of her line will arise in his place, and he will come against their army and enter the fortress of the king of the North, and he will deal with them and display great strength. 8“Also their gods with their metal images and their precious vessels of silver and gold he will take into captivity to Egypt, and he on his part will refrain from attacking the king of the North for some years. 9“Then the latter will enter the realm of the king of the South, but will return to his own land.

      10“His sons will mobilize and assemble a multitude of great forces; and one of them will keep on coming and overflow and pass through, that he may again wage war up to his very fortress. 11“The king of the South will be enraged and go forth and fight with the king of the North. Then the latter will raise a great multitude, but that multitude will be given into the hand of the former. 12“When the multitude is carried away, his heart will be lifted up, and he will cause tens of thousands to fall; yet he will not prevail. 13“For the king of the North will again raise a greater multitude than the former, and after an interval of some years he will press on with a great army and much equipment.

      14“Now in those times many will rise up against the king of the South; the violent ones among your people will also lift themselves up in order to fulfill the vision, but they will fall down. 15“Then the king of the North will come, cast up a siege ramp and capture a well-fortified city; and the forces of the South will not stand their ground, not even their choicest troops, for there will be no strength to make a stand. 16“But he who comes against him will do as he pleases, and no one will be able to withstand him; he will also stay for a time in the Beautiful Land, with destruction in his hand. 17“He will set his face to come with the power of his whole kingdom, bringing with him a proposal of peace which he will put into effect; he will also give him the daughter of women to ruin it. But she will not take a stand for him or be on his side. 18“Then he will turn his face to the coastlands and capture many. But a commander will put a stop to his scorn against him; moreover, he will repay him for his scorn. 19“So he will turn his face toward the fortresses of his own land, but he will stumble and fall and be found no more.

      20“Then in his place one will arise who will send an oppressor through the Jewel of his kingdom; yet within a few days he will be shattered, though not in anger nor in battle. 21“In his place a despicable person will arise, on whom the honor of kingship has not been conferred, but he will come in a time of tranquility and seize the kingdom by intrigue. 22“The overflowing forces will be flooded away before him and shattered, and also the prince of the covenant. 23“After an alliance is made with him he will practice deception, and he will go up and gain power with a small force of people. 24“In a time of tranquility he will enter the richest parts of the realm, and he will accomplish what his fathers never did, nor his ancestors; he will distribute plunder, booty and possessions among them, and he will devise his schemes against strongholds, but only for a time. 25“He will stir up his strength and courage against the king of the South with a large army; so the king of the South will mobilize an extremely large and mighty army for war; but he will not stand, for schemes will be devised against him. 26“Those who eat his choice food will destroy him, and his army will overflow, but many will fall down slain. 27“As for both kings, their hearts will be intent on evil, and they will speak lies to each other at the same table; but it will not succeed, for the end is still to come at the appointed time. 28“Then he will return to his land with much plunder; but his heart will be set against the holy covenant, and he will take action and then return to his own land.

      29“At the appointed time he will return and come into the South, but this last time it will not turn out the way it did before. 30“For ships of Kittim will come against him; therefore he will be disheartened and will return and become enraged at the holy covenant and take action; so he will come back and show regard for those who forsake the holy covenant. 31“Forces from him will arise, desecrate the sanctuary fortress, and do away with the regular sacrifice. And they will set up the abomination of desolation. 32“By smooth words he will turn to godlessness those who act wickedly toward the covenant, but the people who know their God will display strength and take action. 33“Those who have insight among the people will give understanding to the many; yet they will fall by sword and by flame, by captivity and by plunder for many days. 34“Now when they fall they will be granted a little help, and many will join with them in hypocrisy. 35“Some of those who have insight will fall, in order to refine, purge and make them pure until the end time; because it is still to come at the appointed time.

      36“Then the king will do as he pleases, and he will exalt and magnify himself above every god and will speak monstrous things against the God of gods; and he will prosper until the indignation is finished, for that which is decreed will be done. 37“He will show no regard for the gods of his fathers or for the desire of women, nor will he show regard for any other god; for he will magnify himself above them all. 38“But instead he will honor a god of fortresses, a god whom his fathers did not know; he will honor him with gold, silver, costly stones and treasures. 39“He will take action against the strongest of fortresses with the help of a foreign god; he will give great honor to those who acknowledge him and will cause them to rule over the many, and will parcel out land for a price.

      40“At the end time the king of the South will collide with him, and the king of the North will storm against him with chariots, with horsemen and with many ships; and he will enter countries, overflow them and pass through. 41“He will also enter the Beautiful Land, and many countries will fall; but these will be rescued out of his hand: Edom, Moab and the foremost of the sons of Ammon. 42“Then he will stretch out his hand against other countries, and the land of Egypt will not escape. 43“But he will gain control over the hidden treasures of gold and silver and over all the precious things of Egypt; and Libyans and Ethiopians will follow at his heels. 44“But rumors from the East and from the North will disturb him, and he will go forth with great wrath to destroy and annihilate many. 45“He will pitch the tents of his royal pavilion between the seas and the beautiful Holy Mountain; yet he will come to his end, and no one will help him."

The Divine law and blood XII: The Watchtower Society's commentary.

BLOOD:

A truly marvelous fluid that circulates in the vascular system of humans and most multicelled animals; in Hebrew, dam, and in Greek, haiʹma. Blood supplies nourishment and oxygen to all parts of the body, carries away waste products, and plays a major role in safeguarding the body against infection. The chemical makeup of blood is so exceedingly complex that there is a great deal that is still unknown to scientists.

In the Bible, the soul is said to be in the blood because blood is so intimately involved in the life processes. God’s Word says: “For the soul of the flesh is in the blood, and I myself have put it upon the altar for you to make atonement for your souls, because it is the blood that makes atonement by the soul in it.” (Le 17:11) For like reason, but making the connection even more direct, the Bible says: “The soul of every sort of flesh is its blood.” (Le 17:14) Clearly, God’s Word treats both life and blood as sacred.

Taking Life. With Jehovah is the source of life. (Ps 36:9) Man cannot give back a life that he takes. “All the souls—to me they belong,” says Jehovah. (Eze 18:4) Therefore, to take life is to take Jehovah’s property. Every living thing has a purpose and a place in God’s creation. No man has the right to take life except when God permits and in the way that he instructs.

After the Flood, Noah and his sons, the progenitors of all persons alive today, were commanded to show respect for the life, the blood, of fellowmen. (Ge 9:1, 5, 6) Also, God kindly allowed them to add animal flesh to their diet. However, they had to acknowledge that the life of any animal killed for food belonged to God, doing so by pouring its blood out as water on the ground. This was like giving it back to God, not using it for one’s own purposes.—De 12:15, 16.

Man was entitled to enjoy the life that God granted him, and anyone who deprived him of that life would be answerable to God. This was shown when God said to the murderer Cain: “Your brother’s blood is crying out to me from the ground.” (Ge 4:10) Even a person hating his brother, and so wishing him dead, or slandering him or bearing false witness against him, and so endangering his life, would bring guilt upon himself in connection with the blood of his fellowman.—Le 19:16; De 19:18-21; 1Jo 3:15.

Because of God’s view of the value of life, the blood of a murdered person is said to defile the earth, and such defilement can be cleansed only by shedding the blood of the murderer. On this basis the Bible authorizes capital punishment for murder, through duly constituted authority. (Nu 35:33; Ge 9:5, 6) In ancient Israel no ransom could be taken to deliver the deliberate murderer from the death penalty.—Nu 35:19-21, 31.

Even in cases where the manslayer could not be found on investigation, the city nearest the site where the body was found was counted bloodguilty. To remove the bloodguilt, the responsible city elders had to perform the procedure required by God, had to disclaim any guilt or knowledge of the murder, and had to pray to God for his mercy. (De 21:1-9) If an accidental manslayer was not seriously concerned over the taking of a life and did not follow God’s arrangement for his protection by fleeing to the city of refuge and remaining there, the dead man’s nearest of kin was the avenger authorized and obligated to kill him in order to remove bloodguilt from the land.—Nu 35:26, 27; see AVENGER OF BLOOD.

Proper Use of Blood. There was only one use of blood that God ever approved, namely, for sacrifice. He directed that those under the Mosaic Law offer animal sacrifices to make atonement for sin. (Le 17:10, 11) It was also in harmony with His will that His Son, Jesus Christ, offered up his perfect human life as a sacrifice for sins.—Heb 10:5, 10.

The lifesaving application of Christ’s blood was prefigured in a variety of ways in the Hebrew Scriptures. At the time of the first Passover, in Egypt, the blood on the upper part of the doorway and on the doorposts of the Israelite homes protected the firstborn inside from death at the hand of God’s angel. (Ex 12:7, 22, 23; 1Co 5:7) The Law covenant, which had a typical sin-removing feature, was validated by the blood of animals. (Ex 24:5-8) The numerous blood sacrifices, particularly those offered on the Day of Atonement, were for typical sin atonement, pointing to the real sin removal by the sacrifice of Christ.—Le 16:11, 15-18.

The legal power that blood has in God’s sight as accepted by him for atonement purposes was illustrated by the pouring of blood at the base, or foundation, of the altar and the putting of it on the horns of the altar. The atonement arrangement had its basis, or foundation, in blood, and the power (represented by horns) of the sacrificial arrangement rested in blood.—Le 9:9; Heb 9:22; 1Co 1:18.

Under the Christian arrangement, the sanctity of blood was even more strongly emphasized. No longer was animal blood to be offered, for those animal offerings were only a shadow of the reality, Jesus Christ. (Col 2:17; Heb 10:1-4, 8-10) The high priest in Israel used to take a token portion of the blood into the Most Holy of the earthly sanctuary. (Le 16:14) Jesus Christ as the real High Priest entered into heaven itself, not with his blood, which was poured out on the ground (Joh 19:34), but with the value of his perfect human life as represented by blood. This life right he never forfeited by sin, but he retained it as usable for sin atonement. (Heb 7:26; 8:3; 9:11, 12) For these reasons the blood of Christ cries out for better things than the blood of righteous Abel did. Only the blood of the perfect sacrifice of the Son of God can call for mercy, while the blood of Abel as well as the blood of martyred followers of Christ cries out for vengeance.—Heb 12:24; Re 6:9-11.

To whom does the prohibition on the eating of blood apply?

Noah and his sons were allowed by Jehovah to add animal flesh to their diet after the Flood, but they were strictly commanded not to eat blood. (Ge 9:1, 3, 4) God here set out a regulation that applied, not merely to Noah and his immediate family, but to all mankind from that time on, because all those living since the Flood are descendants of Noah’s family.

Concerning the permanence of this prohibition, Joseph Benson noted: “It ought to be observed, that this prohibition of eating blood, given to Noah and all his posterity, and repeated to the Israelites, in a most solemn manner, under the Mosaic dispensation, has never been revoked, but, on the contrary, has been confirmed under the New Testament, Acts xv.; and thereby made of perpetual obligation.”—Benson’s Notes, 1839, Vol. I, p. 43.

Under the Mosaic Law. In the Law covenant made by Jehovah with the nation of Israel, he incorporated the law given to Noah. He made it clear that “bloodguilt” was attached to anyone who ignored the procedure stipulated by God’s law even in the killing of an animal. (Le 17:3, 4) The blood of an animal to be used for food was to be poured out on the ground and covered with dust. (Le 17:13, 14) Anyone who ate blood of any sort of flesh was to be ‘cut off from among his people.’ Deliberate violation of this law regarding the sacredness of blood meant being “cut off” in death.—Le 17:10; 7:26, 27; Nu 15:30, 31.

Commenting on Leviticus 17:11, 12, M’Clintock and Strong’s Cyclopædia (1882, Vol. I, p. 834) says: “This strict injunction not only applied to the Israelites, but even to the strangers residing among them. The penalty assigned to its transgression was the being ‘cut off from the people,’ by which the punishment of death appears to be intended (comp. Heb. x, 28), although it is difficult to ascertain whether it was inflicted by the sword or by stoning.”

At Deuteronomy 14:21 allowance was made for selling to an alien resident or a foreigner an animal that had died of itself or that had been torn by a beast. Thus a distinction was made between the blood of such animals and that of animals that a person slaughtered for food. (Compare Le 17:14-16.) The Israelites, as well as alien residents who took up true worship and came under the Law covenant, were obligated to live up to the lofty requirements of that Law. People of all nations were bound by the requirement at Genesis 9:3, 4, but those under the Law were held by God to a higher standard in adhering to that requirement than were foreigners and alien residents who had not become worshipers of Jehovah.

Under the Christian arrangement. The governing body of the first-century Christian congregation, under the direction of the holy spirit, ruled on the matter of blood. Their decree states: “For the holy spirit and we ourselves have favored adding no further burden to you, except these necessary things, to keep abstaining from things sacrificed to idols and from blood and from things strangled and from fornication. If you carefully keep yourselves from these things, you will prosper. Good health to you!” (Ac 15:22, 28, 29) The prohibition included flesh with the blood in it (“things strangled”).

This decree rests, ultimately, on God’s command not to eat blood, as given to Noah and his sons and, therefore, to all mankind. In this regard, the following is found in The Chronology of Antient Kingdoms Amended, by Sir Isaac Newton (Dublin, 1728, p. 184): “This law [of abstaining from blood] was ancienter than the days of Moses, being given to Noah and his sons, long before the days of Abraham: and therefore when the Apostles and Elders in the Council at Jerusalem declared that the Gentiles were not obliged to be circumcised and keep the law of Moses, they excepted this law of abstaining from blood, and things strangled, as being an earlier law of God, imposed not on the sons of Abraham only, but on all nations, while they lived together in Shinar under the dominion of Noah: and of the same kind is the law of abstaining from meats offered to Idols or false Gods, and from fornication.”—Italics his.

Observed since apostolic times. The Jerusalem council sent its decision to the Christian congregations to be observed. (Ac 16:4) About seven years after the Jerusalem council issued the decree, Christians continued to comply with the “decision that they should keep themselves from what is sacrificed to idols as well as from blood and what is strangled and from fornication.” (Ac 21:25) And more than a hundred years later, in 177 C.E., in Lyons (now in France), when religious enemies falsely accused Christians of eating children, a woman named Biblis said: “How would such men eat children, when they are not allowed to eat the blood even of irrational animals?”—The Ecclesiastical History, by Eusebius, V, I, 26.

Early Christians abstained from eating any sort of blood. In this regard Tertullian (c. 155–c. 220 C.E.) pointed out in his work Apology (IX, 13, 14): “Let your error blush before the Christians, for we do not include even animals’ blood in our natural diet. We abstain on that account from things strangled or that die of themselves, that we may not in any way be polluted by blood, even if it is buried in the meat. Finally, when you are testing Christians, you offer them sausages full of blood; you are thoroughly well aware, of course, that among them it is forbidden; but you want to make them transgress.” Minucius Felix, a Roman lawyer who lived until about 250 C.E., made the same point, writing: “For us it is not permissible either to see or to hear of human slaughter; we have such a shrinking from human blood that at our meals we avoid the blood of animals used for food.”—Octavius, XXX, 6.


Integrity Involved. From the time that the new covenant was inaugurated over the blood of Jesus Christ, Christians have recognized the life-giving value of this blood through Jehovah’s arrangement and through Jesus as the great High Priest who “entered, no, not with the blood of goats and of young bulls, but with his own blood, once for all time into the holy place and obtained an everlasting deliverance for us.” Through faith in the blood of Christ, Christians have had their consciences cleansed from dead works so that they may render sacred service to the living God. They are concerned about their physical health, but they are primarily and far more seriously concerned with their spiritual health and their standing before the Creator. They want to maintain their integrity to the living God, not denying the sacrifice of Jesus, not counting it as of no value, and not trampling it underfoot. For they are seeking, not the life that is transitory, but everlasting life.—Heb 9:12, 14, 15; 10:28, 29.

Wednesday, 14 October 2015

Why scientists are seeking protection from themselves.

How scientists fool themselves – and how they can stop:
Humans are remarkably good at self-deception. But growing concern about reproducibility is driving many researchers to seek ways to fight their own worst instincts.


Regina Nuzzo:

In 2013, five years after he co-authored a paper showing that Democratic candidates in the United States could get more votes by moving slightly to the right on economic policy [1], Andrew Gelman, a statistician at Columbia University in New York City, was chagrined to learn of an error in the data analysis. In trying to replicate the work, an undergraduate student named Yang Yang Hu had discovered that Gelman had got the sign wrong on one of the variables.

Gelman immediately published a three-sentence correction, declaring that everything in the paper's crucial section should be considered wrong until proved otherwise.

Reflecting today on how it happened, Gelman traces his error back to the natural fallibility of the human brain: “The results seemed perfectly reasonable,” he says. “Lots of times with these kinds of coding errors you get results that are just ridiculous. So you know something's got to be wrong and you go back and search until you find the problem. If nothing seems wrong, it's easier to miss it.”


This is the big problem in science that no one is talking about: even an honest person is a master of self-deception. Our brains evolved long ago on the African savannah, where jumping to plausible conclusions about the location of ripe fruit or the presence of a predator was a matter of survival. But a smart strategy for evading lions does not necessarily translate well to a modern laboratory, where tenure may be riding on the analysis of terabytes of multidimensional data. In today's environment, our talent for jumping to conclusions makes it all too easy to find false patterns in randomness, to ignore alternative explanations for a result or to accept 'reasonable' outcomes without question — that is, to ceaselessly lead ourselves astray without realizing it.

Failure to understand our own biases has helped to create a crisis of confidence about the reproducibility of published results, says statistician John Ioannidis, co-director of the Meta-Research Innovation Center at Stanford University in Palo Alto, California. The issue goes well beyond cases of fraud. Earlier this year, a large project that attempted to replicate 100 psychology studies managed to reproduce only slightly more than one-third [2]. In 2012, researchers at biotechnology firm Amgen in Thousand Oaks, California, reported that they could replicate only 6 out of 53 landmark studies in oncology and haematology [3]. And in 2009, Ioannidis and his colleagues described how they had been able to fully reproduce only 2 out of 18 microarray-based gene-expression studies [4].

Although it is impossible to document how often researchers fool themselves in data analysis, says Ioannidis, findings of irreproducibility beg for an explanation. The study of 100 psychology papers is a case in point: if one assumes that the vast majority of the original researchers were honest and diligent, then a large proportion of the problems can be explained only by unconscious biases. “This is a great time for research on research,” he says. “The massive growth of science allows for a massive number of results, and a massive number of errors and biases to study. So there's good reason to hope we can find better ways to deal with these problems.”



“When crises like this issue of reproducibility come along, it's a good opportunity to advance our scientific tools,” says Robert MacCoun, a social scientist at Stanford. That has happened before, when scientists in the mid-twentieth century realized that experimenters and subjects often unconsciously changed their behaviour to match expectations. From that insight, the double-blind standard was born.

“People forget that when we talk about the scientific method, we don't mean a finished product,” says Saul Perlmutter, an astrophysicist at the University of California, Berkeley. “Science is an ongoing race between our inventing ways to fool ourselves, and our inventing ways to avoid fooling ourselves.” So researchers are trying a variety of creative ways to debias data analysis — strategies that involve collaborating with academic rivals, getting papers accepted before the study has even been started and working with strategically faked data.

The problem
Although the human brain and its cognitive biases have been the same for as long as we have been doing science, some important things have changed, says psychologist Brian Nosek, executive director of the non-profit Center for Open Science in Charlottesville, Virginia, which works to increase the transparency and reproducibility of scientific research. Today's academic environment is more competitive than ever. There is an emphasis on piling up publications with statistically significant results — that is, with data relationships in which a commonly used measure of statistical certainty, the p-value, is 0.05 or less. “As a researcher, I'm not trying to produce misleading results,” says Nosek. “But I do have a stake in the outcome.” And that gives the mind excellent motivation to find what it is primed to find.

Another reason for concern about cognitive bias is the advent of staggeringly large multivariate data sets, often harbouring only a faint signal in a sea of random noise. Statistical methods have barely caught up with such data, and our brain's methods are even worse, says Keith Baggerly, a statistician at the University of Texas MD Anderson Cancer Center in Houston. As he told a conference on challenges in bioinformatics last September in Research Triangle Park, North Carolina, “Our intuition when we start looking at 50, or hundreds of, variables sucks.”

Andrew King, a management specialist at Dartmouth College in Hanover, New Hampshire, says that the widespread use of point-and-click data-analysis software has made it easy for researchers to sift through massive data sets without fully understanding the methods, and to find small p-values that may not actually mean anything. “I believe we are in the steroids era of social science,” he says. “I've been guilty of using some of these performance-enhancing practices myself. My sense is that most researchers have fallen at least once.”
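
King's point about point-and-click sifting is easy to demonstrate. Below is a minimal Python sketch of our own (not from the article): given 100 unrelated noise variables measured on 30 subjects, the strongest pairwise correlation will usually look publishable despite meaning nothing.

```python
# Pure noise: 100 unrelated variables, 30 subjects.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(30, 100))

corr = np.corrcoef(data, rowvar=False)   # all 4,950 pairwise correlations
np.fill_diagonal(corr, 0)                # ignore trivial self-correlations
i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)
print(f"Strongest 'finding': variables {i} and {j}, r = {corr[i, j]:.2f}")
# Typically reports |r| around 0.6 — from data containing no signal at all.
```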

Just as in competitive sport, says Hal Pashler, a psychologist at the University of California, San Diego, this can set up a vicious circle of chasing increasingly better results. When a few studies in behavioural neuroscience started reporting improbably strong correlations of 0.85, Pashler says, researchers who had more moderate (and plausible) results started to worry: “Gee, I just got a 0.4, so maybe I'm not really doing this very well.”


Hypothesis myopia
One trap that awaits during the early stages of research is what might be called hypothesis myopia: investigators fixate on collecting evidence to support just one hypothesis; neglect to look for evidence against it; and fail to consider other explanations. “People tend to ask questions that give 'yes' answers if their favoured hypothesis is true,” says Jonathan Baron, a psychologist at the University of Pennsylvania in Philadelphia.

For example, says Baron, studies have tried to show how disgust influences moral condemnation, “by putting the subject in a messy room, or a room with 'fart spray' in the air”. The participants are then asked to judge how to respond to moral transgressions; if those who have been exposed to clutter or smells favour harsher punishments, researchers declare their 'disgust hypothesis' to be supported [5]. But they have not considered competing explanations, he says, and so they ignore the possibility that participants are lashing out owing to anger at their foul treatment, not simply disgust. By focusing on one hypothesis, researchers might be missing the real story entirely.



Courtrooms face a similar problem. In 1999, a woman in Britain called Sally Clark was found guilty of murdering two of her sons, who had died suddenly as babies. A factor in her conviction was the presentation of statistical evidence that the chances of two children in the same family dying of sudden infant death syndrome (SIDS) were only 1 in 73 million — a figure widely interpreted as fairly damning. Yet considering just one hypothesis leaves out an important part of the story. “The jury needs to weigh up two competing explanations for the babies' deaths: SIDS or murder,” wrote statistician Peter Green on behalf of the Royal Statistical Society in 2002 (see go.nature.com/ochsja). “The fact that two deaths by SIDS is quite unlikely is, taken alone, of little value. Two deaths by murder may well be even more unlikely. What matters is the relative likelihood of the deaths under each explanation, not just how unlikely they are under one explanation.” Mathematician Ray Hill of the University of Salford, UK, later estimated [6] that a double SIDS death would occur in roughly 1 out of 297,000 families, whereas two children would be murdered by a parent in roughly 1 out of 2.7 million families — a likelihood ratio of 9 to 1 against murder. In 2003, Clark's conviction was overturned on the basis of new evidence. The Attorney General for England and Wales went on to release two other women who had been convicted of murdering their children on similar statistical grounds.
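
Hill's point about relative likelihood comes down to a single division. A back-of-envelope check, using the two estimates quoted above:

```python
# Ray Hill's estimates, as quoted above.
p_double_sids = 1 / 297_000       # two SIDS deaths in one family
p_double_murder = 1 / 2_700_000   # two children murdered by a parent

ratio = p_double_sids / p_double_murder
print(f"Double SIDS is about {ratio:.0f} times more likely than double murder")
# -> about 9, matching the 9-to-1 figure — despite the '1 in 73 million' headline.
```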

The Texas sharpshooter
A cognitive trap that awaits during data analysis is illustrated by the fable of the Texas sharpshooter: an inept marksman who fires a random pattern of bullets at the side of a barn, draws a target around the biggest clump of bullet holes, and points proudly at his success.

His bullseye is obviously laughable — but the fallacy is not so obvious to gamblers who believe in a 'hot hand' when they have a streak of wins, or to people who see supernatural significance when a lottery draw comes up as all odd numbers.

Nor is it always obvious to researchers. “You just get some encouragement from the data and then think, well, this is the path to go down,” says Pashler. “You don't realize you had 27 different options and you picked the one that gave you the most agreeable or interesting results, and now you're engaged in something that's not at all an unbiased representation of the data.”



Psychologist Uri Simonsohn at the University of Pennsylvania gives an explicit nod to this naivety in his definition of 'p-hacking': “Exploiting — perhaps unconsciously — researcher degrees of freedom until p < 0.05.” In 2012, a study of more than 2,000 US psychologists [7] suggested how common p-hacking is. Half had selectively reported only studies that 'worked', 58% had peeked at the results and then decided whether to collect more data, 43% had decided to throw out data only after checking its impact on the p-value and 35% had reported unexpected findings as having been predicted from the start, a practice that psychologist Norbert Kerr of Michigan State University in East Lansing has called HARKing, or hypothesizing after results are known. Not only did the researchers admit to these p-hacking practices, but they defended them.
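
The 'peeking' that 58% admitted to can be simulated directly. The sketch below is our illustration, not part of the study: it draws pure-noise data, tests it, adds ten more observations whenever p ≥ 0.05, and stops the moment significance appears. The false-positive rate lands well above the nominal 5%.

```python
# Optional stopping ('peeking') on null data inflates false positives.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, false_positives = 5_000, 0

for _ in range(n_sims):
    x = list(rng.normal(size=20))            # 20 observations of pure noise
    p = stats.ttest_1samp(x, 0).pvalue
    while p >= 0.05 and len(x) < 100:        # not significant? collect 10 more
        x.extend(rng.normal(size=10))
        p = stats.ttest_1samp(x, 0).pvalue
    false_positives += p < 0.05

print(f"False-positive rate: {false_positives / n_sims:.1%}")   # well above 5%
```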

This May, a journalist described how he had teamed up with a German documentary filmmaker and demonstrated that creative p-hacking, carried out over one “beer-fueled” weekend, could be used to 'prove' that eating chocolate leads to weight loss, reduced cholesterol levels and improved well-being (see go.nature.com/blkpke). They gathered 18 different measurements — including weight, blood protein levels and sleep quality — on 15 people, a handful of whom had eaten some extra chocolate for a few weeks. With that many comparisons, the odds were better than 50–50 that at least one of them would look statistically significant just by chance. As it turns out, three of them did — and the team cherry-picked only those to report.
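
The 'better than 50–50' claim is elementary probability, assuming for simplicity that the 18 measures are independent:

```python
# Chance that at least 1 of 18 null tests comes up 'significant' at p < 0.05.
alpha, n_tests = 0.05, 18
p_any = 1 - (1 - alpha) ** n_tests
print(f"{p_any:.0%}")   # about 60% — better than 50-50, exactly as described
```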

Asymmetric attention
The data-checking phase holds another trap: asymmetric attention to detail. Sometimes known as disconfirmation bias, this happens when we give expected results a relatively free pass, but we rigorously check non-intuitive results. “When the data don't seem to match previous estimates, you think, 'Oh, boy! Did I make a mistake?'” MacCoun says. “We don't realize that probably we would have needed corrections in the other situation as well.”

The evidence suggests that scientists are more prone to this than one would think. A 2004 study [8] observed the discussions of researchers from three leading molecular-biology laboratories as they worked through 165 different lab experiments. In 88% of cases in which results did not align with expectations, the scientists blamed the inconsistencies on how the experiments were conducted, rather than on their own theories. Consistent results, by contrast, were given little to no scrutiny.

In 2011, an analysis of over 250 psychology papers found [9] that more than 1 in 10 of the p-values was incorrect — and that when the errors were big enough to change the statistical significance of the result, more than 90% of the mistakes were in favour of the researchers' expectations, making a non-significant finding significant.

Just-so storytelling
As data-analysis results are being compiled and interpreted, researchers often fall prey to just-so storytelling — a fallacy named after the Rudyard Kipling tales that give whimsical explanations for things such as how the leopard got its spots. The problem is that post-hoc stories can be concocted to justify anything and everything — and so end up truly explaining nothing. Baggerly says that he has seen such stories in genetics studies, when an analysis implicates a huge number of genes in a particular trait or outcome. “It's akin to a Rorschach test,” he said at the bioinformatics conference. Researchers will find a story, he says, “whether it's there or not. The problem is that occasionally it ain't real.”



Another temptation is to rationalize why results should have come up a certain way but did not — what might be called JARKing, or justifying after results are known. Matthew Hankins, a statistician at King's College London, has collected more than 500 creative phrases that researchers use to convince readers that their non-significant results are worthy of attention (see go.nature.com/pwctoq). These include “flirting with conventional levels of significance (p > 0.1)”, “on the very fringes of significance (p = 0.099)” and “not absolutely significant but very probably so (p > 0.05)”.

The solution
In every one of these traps, cognitive biases are hitting the accelerator of science: the process of spotting potentially important scientific relationships. Countering those biases comes down to strengthening the 'brake': the ability to slow down, be sceptical of findings and eliminate false positives and dead ends.

One solution that is piquing interest revives an old tradition: explicitly considering competing hypotheses, and if possible working to develop experiments that can distinguish between them. This approach, called strong inference [10], attacks hypothesis myopia head on. Furthermore, when scientists make themselves explicitly list alternative explanations for their observations, they can reduce their tendency to tell just-so stories.

In 2013, researchers reported [11] using strong-inference techniques in a study of what attracts female túngara frogs (Engystomops pustulosus) during mating calls. The existing data could be explained equally well by two competing theories — one in which females have a preset neural template for mating calls, and another in which they flexibly combine auditory cues and visual signals such as the appearance of the males' vocal sacs. So the researchers developed an experiment for which the two theories had opposing predictions. The results showed that females can use multisensory cues to judge attractiveness.

Transparency
Another solution that has been gaining traction is open science. Under this philosophy, researchers share their methods, data, computer code and results in central repositories, such as the Center for Open Science's Open Science Framework, where they can choose to make various parts of the project subject to outside scrutiny. Normally, explains Nosek, “I have enormous flexibility in how I analyse my data and what I choose to report. This creates a conflict of interest. The only way to avoid this is for me to tie my hands in advance. Precommitment to my analysis and reporting plan mitigates the influence of these cognitive biases.”

An even more radical extension of this idea is the introduction of registered reports: publications in which scientists present their research plans for peer review before they even do the experiment. If the plan is approved, the researchers get an 'in-principle' guarantee of publication, no matter how strong or weak the results turn out to be. This should reduce the unconscious temptation to warp the data analysis, says Pashler. At the same time, he adds, it should keep peer reviewers from discounting a study's results or complaining after results are known. “People are evaluating methods without knowing whether they're going to find the results congenial or not,” he says. “It should create a much higher level of honesty among referees.” More than 20 journals are offering or plan to offer some format of registered reports.

Team of rivals
When it comes to replications and controversial topics, a good debiasing approach is to bypass the typical academic back-and-forth and instead invite your academic rivals to work with you. An adversarial collaboration has many advantages over a conventional one, says Daniel Kahneman, a psychologist at Princeton University in New Jersey. “You need to assume you're not going to change anyone's mind completely,” he says. “But you can turn that into an interesting argument and intelligent conversation that people can listen to and evaluate.” With competing hypotheses and theories in play, he says, the rivals will quickly spot flaws such as hypothesis myopia, asymmetric attention or just-so storytelling, and cancel them out with similar slants favouring the other side.

Psychologist Eric-Jan Wagenmakers of the University of Amsterdam has engaged in this sort of proponent–sceptic collaboration, when he teamed up with another group in an attempt [12] to replicate its research suggesting that horizontal eye movements help people to retrieve events from their memory. It is often difficult to get researchers whose original work is under scrutiny to agree to this kind of adversarial collaboration, he says. The invitation is “about as attractive as putting one's head on a guillotine — there is everything to lose and not much to gain”. But the group that he worked with was eager to get to the truth, he says. In the end, the results were not replicated. The sceptics remained sceptical, and the proponents were not convinced by a single failure to replicate. Yet this was no stalemate. “Although our adversarial collaboration has not resolved the debate,” the researchers wrote, “it has generated new testable ideas and has brought the two parties slightly closer.” Wagenmakers suggests several ways in which this type of collaboration could be encouraged, including a prize for best adversarial collaboration, or special sections for such collaborations in top journals.

Blind data analysis
One debiasing procedure has a solid history in physics but is little known in other fields: blind data analysis. The idea is that researchers who do not know how close they are to desired results will be less likely to find what they are unconsciously looking for [13].



One way to do this is to write a program that creates alternative data sets by, for example, adding random noise or a hidden offset, moving participants to different experimental groups or hiding demographic categories. Researchers handle the fake data set as usual — cleaning the data, handling outliers, running analyses — while the computer faithfully applies all of their actions to the real data. They might even write up the results. But at no point do the researchers know whether their results are scientific treasures or detritus. Only at the end do they lift the blind and see their true results — after which, any further fiddling with the analysis would be obvious cheating.
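
As a concrete illustration, here is a minimal sketch of the hidden-offset variant just described; the 'measurements' are synthetic stand-ins of our own invention.

```python
# Blind analysis via a hidden offset known only to the computer.
import numpy as np

rng = np.random.default_rng()                         # deliberately unseeded
true_data = rng.normal(loc=3.1, scale=1.0, size=500)  # stand-in for real measurements

_hidden_offset = rng.normal(scale=5.0)                # never printed, never inspected
blinded = true_data + _hidden_offset                  # analysts only ever see this

# ... clean the data, handle outliers, choose models — all on `blinded` ...
blinded_estimate = float(np.mean(blinded))

def unblind(estimate: float) -> float:
    """Remove the offset. Call exactly once, after the analysis plan is frozen."""
    return estimate - _hidden_offset

print(f"unblinded result: {unblind(blinded_estimate):.2f}")   # ~3.1, the 'truth'
```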

Perlmutter used this method for his team's work on the Supernova Cosmology Project in the mid-2000s. He knew that the potential for the researchers to fool themselves was huge. They were using new techniques to replicate estimates of two crucial quantities in cosmology — the relative abundances of matter and of dark energy — which together reveal whether the Universe will expand forever or eventually collapse into a Big Crunch. So their data were shifted by an amount known only to the computer, leaving them with no idea what their findings implied until everyone agreed on the analyses and the blind could be safely lifted. After the big reveal, not only were the researchers pleased to confirm earlier findings of an expanding Universe [14], Perlmutter says, but they could be more confident in their conclusions. “It's a lot more work in some sense, but I think it leaves you feeling much safer as you do your analysis,” he says. He calls blind data analysis “intellectual hygiene, like washing your hands”.

Data blinding particularly appeals to young researchers, Perlmutter says — not least because of the sense of suspense it gives. He tells the story of a recent graduate student who had spent two years under a data blind as she analysed pairs of supernova explosions. After a long group meeting, Perlmutter says, the student presented all her analyses and said that she was ready to unblind if everyone agreed.

“It was 6 o'clock in the evening and time for dinner,” says Perlmutter. And everyone in the audience said, “If the result comes out wrong, it's going to be a very disappointing evening, and she's going to have to think really hard about what she's going to do with her PhD thesis. Maybe we should wait until morning.”


“And we all looked at each other, and we said, 'Nah! Let's unblind now!' So we unblinded, and the results looked great, and we all cheered and applauded.”

Now butterflies take the witness stand for design.

Design of Life" Evidence Continues Pouring Forth: Butterflies
Evolution News & Views October 13, 2015 10:32 AM


With the introduction of Illustra's Design of Life Collection, fans of the films can now get all three nature documentaries in a discount package to share. The films, of course, could never hope to completely cover the subjects of butterflies, birds, and marine life in a single hour each, but they provide enough evidence to challenge scientific materialism and make a convincing case for design in three separate realms of life.

One hopes the life stories will stimulate viewers to continue learning about the animals featured in each documentary. To assist with that ongoing education, we offer examples of the evidence for intelligent design that continues to pour forth from meadows, skies, and seas.

Butterflies, for one, have been in the news recently, partly for the way they display amazing design. The film Metamorphosis: The Beauty and Design of Butterflies showed electron micrographs of scales on a butterfly wing that look like overlapping roof shingles. But even at that magnified scale, there's more design than meets the eye. German and Australian researchers publishing in the Proceedings of the National Academy of Sciences were astonished by what they saw in the scales of a Green Hairstreak butterfly. This small, bright green flyer, found from England to Siberia, has the ability to build 3-D microstructures with left- and right-handed curls, called gyroids, even though the chitin they are built from has only a single molecular handedness. Could research into the mechanism inspire something beautiful in manufacturing?

Arthropod biophotonic nanostructures provide a plethora of complex geometries. Although the variety of geometric forms observed reflects those found in amphiphilic self-assembly, the biological formation principles are more complex. This paper addresses the chiral single gyroid in the Green Hairstreak butterfly Callophrys rubi, robustly showing that the formation process produces both the left- and right-handed enantiomers but with distinctly different likelihood. An interpretation excludes the molecular chirality of chitin as the determining feature of the enantiomeric type, emphasizing the need to identify other chirality-specific factors within the membrane-based biological formation model. These findings contribute to an understanding of nature's ability to control secondary features of the structure formation, such as enantiomeric type and crystallographic texture, informing bioinspired self-assembly strategies. [Emphasis added.]
What this means is that inside the chrysalis, there's more going on than simple self-assembly of structural units. Something controls the way they are organized as they grow patterns on the wings, perhaps for some "photonic purpose" (e.g., reflecting or intensifying light). "More importantly, they show the level of control that morphogenesis exerts over secondary features of biological nanostructures, such as chirality or crystallographic texture, providing inspiration for biomimetic replication strategies for synthetic self-assembly mechanisms." Design breeds design!

Speaking of inspiration, a while back we described one species, the Morpho butterfly, that has inspired multiple technologies. This brilliant blue flyer is in the news again, this time in Nature Communications. You can tell who the winner is from the title of the paper: "Towards outperforming conventional sensor arrays with fabricated individual photonic vapour sensors inspired by Morpho butterflies." Note the number of times the word design is used in the Abstract -- four mentions in one paragraph.

Here we show individual nanofabricated sensors that not only selectively detect separate vapours in pristine conditions but also quantify these vapours in mixtures, and when blended with a variable moisture background. Our sensor design is inspired by the iridescent nanostructure and gradient surface chemistry of Morpho butterflies and involves physical and chemical design criteria. The physical design involves optical interference and diffraction on the fabricated periodic nanostructures and uses optical loss in the nanostructure to enhance the spectral diversity of reflectance. The chemical design uses spatially controlled nanostructure functionalization. Thus, while quantitation of analytes in the presence of variable backgrounds is challenging for most sensor arrays, we achieve this goal using individual multivariable sensors. These colorimetric sensors can be tuned for numerous vapour sensing scenarios in confined areas or as individual nodes for distributed monitoring.
The butterfly design outperforms conventional sensor arrays. But just because butterflies have wonderful design doesn't mean they don't face trouble. Monarch butterflies are getting hit with a one-two punch. Their habitat continues to shrink in Mexico, while the milkweed they depend on is getting depleted as American farmers spray pesticides indiscriminately outside their crop boundaries. PhysOrg reported recently that deforestation in Mexico's Monarch reserve has more than tripled, "reversing several years of steady improvements." If you want to help save these heroes of Metamorphosis, there are things you can do, such as planting milkweed in your backyard, Dan Ashe says in National Geographic. You can also join the ranks of volunteers who collect data on butterfly sightings to help scientists track their numbers. Science Magazine News talks about the importance of good statistics to understand the seriousness of the Monarchs' plight. The University of Michigan, meanwhile, is studying the effects of carbon dioxide on Monarchs if atmospheric concentrations continue to rise.


The Arctic fritillary, a species from Greenland, is also in trouble. Danish biologists from Aarhus University are concerned that these bright orange butterflies with black bars on their wings are getting smaller as their habitat warms due to climate change. Setting aside the contentious question of anthropogenic warming, pause for a moment and ask yourself, How can a tiny butterfly survive in the Arctic? These little insects with paper-thin wings are not warm-blooded and have no parkas, yet they feel right at home in the cold of northeastern Greenland! That sounds like a pretty good design story right there.

Tuesday, 13 October 2015

Democracy with Chinese characteristics?: Pros and Cons

Ukraine upholds religious liberty.

High Court of Ukraine Upholds Right to Conscientious Objection During Military Mobilization


Ukraine’s high court has affirmed that conscientious objectors have the right to alternative service even in times of civil unrest and war. This decision has broad implications for human rights, both in Ukraine and abroad.
Vitaliy Shalaiko, one of Jehovah’s Witnesses, was accused of evading military service during mobilization because he requested alternative service when summoned for conscription. Both the trial court and the appeal court had acquitted him, but the prosecutor appealed to the High Specialized Court of Ukraine for Civil and Criminal Cases. On June 23, 2015, the high court dismissed the appeal, thereby finalizing the lower courts’ decisions.
The high court affirmed that “the trial court was fully justified in referring to the corresponding provisions of the European Convention on Human Rights and the judgments of the European Court of Human Rights.” The high court also agreed with the trial court that the case of Bayatyan v. Armenia applied. This case was decided by the Grand Chamber of the European Court of Human Rights on July 7, 2011. That landmark judgment held that conscientious objection to military service based on sincerely held religious beliefs falls under the protection of Article 9 of the European Convention on Human Rights. In the case of Vitaliy Shalaiko, Ukraine’s high court made clear that the rights of conscientious objectors are protected even if a country mobilizes for armed conflict and not just when there are routine call-ups for military service. The high court’s decision is final, with no further appeal available.
This final ruling relieves Mr. Shalaiko of much anxiety. He states: “I understand my country’s interest in safeguarding its citizens by military mobilization. While my conscience does not permit me to perform military service, I am nevertheless willing to do my part in performing alternative civilian service. I am grateful that the courts have recognized that my refusal of military service is based on my sincere religious beliefs.”

A Decision That Benefits Many

Thousands of Jehovah’s Witnesses throughout Ukraine have faced the issue of neutrality during mobilization. Those who face criminal charges of evading military service can now rely on the legal precedent established in Vitaliy Shalaiko’s case.
Mr. Shalaiko’s attorney, Mr. Vadim Karpov, noted: “In simple terms, the high court explains that as one of Jehovah’s Witnesses, Mr. Shalaiko could not be prosecuted for refusing military service. Even in a country such as Ukraine, which is divided by war and instability, it is significant that norms of international law on freedom of religion and on freedom of conscience have been applied.”

Ukraine Sets an Example in Respecting Human Rights

The courts of Ukraine have recognized that conscientious objection to military service is a fundamental human right that merits protection even during military mobilization. It is neither a selfish evasion of duty nor a threat to national interests and security. In affirming the rulings of the lower courts, the high court has upheld human rights for all Ukrainians. Ukraine has set an example for countries that punish conscientious objectors who refuse military service for reasons of conscience.

Darwinism fails even with all the time in the world.

Debunking a popular myth: "There's plenty of time for evolution"

At this point, I imagine Matzke will want to cite a 2010 paper in Proceedings of the U.S. National Academy of Sciences (PNAS), titled "There's plenty of time for evolution" by Herbert S. Wilf and Warren J. Ewens, a mathematician and a biologist, respectively, at the University of Pennsylvania. Although it does not refer to them by name, there's little doubt that Wilf and Ewens intended their work to respond to the arguments put forward by intelligent-design proponents, since it declares in its first paragraph:
…One of the main objections that have been raised holds that there has not been enough time for all of the species complexity that we see to have evolved by random mutations. Our purpose here is to analyze this process, and our conclusion is that when one takes account of the role of natural selection in a reasonable way, there has been ample time for the evolution that we observe to have taken place.
Evolutionary biologist Professor Jerry Coyne praised the paper, saying that it provides “one step towards dispelling the idea that Darwinian evolution works too slowly to account for the diversity of life on Earth today.” Famous last words.
A 2012 paper, Time and Information in Evolution, by Winston Ewert, Ann Gauger, William Dembski and Robert Marks II, contains a crushing refutation of Wilf and Ewens’ claim that there’s plenty of time for evolution to occur. The authors of the new paper offer a long list of reasons why Wilf and Ewens’ model of evolution isn’t biologically realistic:
Wilf and Ewens argue in a recent paper that there is plenty of time for evolution to occur. They base this claim on a mathematical model in which beneficial mutations accumulate simultaneously and independently, thus allowing changes that require a large number of mutations to evolve over comparatively short time periods. Because changes evolve independently and in parallel rather than sequentially, their model scales logarithmically rather than exponentially. This approach does not accurately reflect biological evolution, however, for two main reasons. First, within their model are implicit information sources, including the equivalent of a highly informed oracle that prophesies when a mutation is “correct,” thus accelerating the search by the evolutionary process. Natural selection, in contrast, does not have access to information about future benefits of a particular mutation, or where in the global fitness landscape a particular mutation is relative to a particular target. It can only assess mutations based on their current effect on fitness in the local fitness landscape. Thus the presence of this oracle makes their model radically different from a real biological search through fitness space. Wilf and Ewens also make unrealistic biological assumptions that, in effect, simplify the search. They assume no epistasis between beneficial mutations, no linkage between loci, and an unrealistic population size and base mutation rate, thus increasing the pool of beneficial mutations to be searched. They neglect the effects of genetic drift on the probability of fixation and the negative effects of simultaneously accumulating deleterious mutations. Finally, in their model they represent each genetic locus as a single letter. By doing so, they ignore the enormous sequence complexity of actual genetic loci (typically hundreds or thousands of nucleotides long), and vastly oversimplify the search for functional variants. In similar fashion, they assume that each evolutionary “advance” requires a change to just one locus, despite the clear evidence that most biological functions are the product of multiple gene products working together. Ignoring these biological realities infuses considerable active information into their model and eases the model's evolutionary process.
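
The scaling contrast at the heart of this dispute can be seen in a toy model (ours, not either paper's actual mathematics). Suppose each of L required changes arrives with small per-generation probability p, and compare the waiting time when changes can be 'locked in' independently and in parallel, as Wilf and Ewens assume, with the time when each change must wait for the one before it.

```python
# Toy model: parallel vs sequential accumulation of L required changes.
import numpy as np

rng = np.random.default_rng(2)
p, L, trials = 1e-4, 50, 200

def parallel() -> int:
    waits = rng.geometric(p, size=L)   # every locus searches at once
    return int(waits.max())            # done when the slowest locus succeeds

def sequential() -> int:
    waits = rng.geometric(p, size=L)   # each locus waits for the one before it
    return int(waits.sum())

print("parallel  :", np.mean([parallel() for _ in range(trials)]))
print("sequential:", np.mean([sequential() for _ in range(trials)]))
# Parallel grows like (1/p)*ln(L) — roughly 45,000 generations here.
# Sequential grows like L/p — roughly 500,000, and far worse still if a
# single 'advance' needs several specific mutations at the same time.
```
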
After reading this devastating refutation of Wilf and Ewens’ 2010 paper, I think it would be fair to conclude that we don’t currently have an adequate mathematical model explaining how macroevolution can occur at all, let alone one showing that it can take place within the time available. Four billion years might sound like a long time, but if your model requires not billions but quintillions of years to work, then obviously your model of macroevolution isn’t mathematically up to scratch.
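To make the “oracle” objection concrete, here is a minimal Python sketch, written for this post rather than taken from either paper, of the kind of letter-guessing search Wilf and Ewens analyzed. The target word, alphabet and round-counting scheme are toy assumptions of mine; the point is simply that freezing each letter the moment it happens to be “correct” is what makes the search logarithmic rather than exponential in the word length:

```python
import random
import string

ALPHABET = string.ascii_uppercase   # K = 26 letters
TARGET = "EVOLUTION"                # hypothetical target word, L = 9

def oracle_search(target, alphabet, rng):
    """Each round, every still-wrong position is re-randomized; an
    'oracle' freezes a position as soon as it matches the target."""
    unsolved = set(range(len(target)))
    rounds = 0
    while unsolved:
        rounds += 1
        unsolved = {i for i in unsolved
                    if rng.choice(alphabet) != target[i]}
    return rounds

rng = random.Random(0)
trials = [oracle_search(TARGET, ALPHABET, rng) for _ in range(1000)]
print("oracle-assisted search, mean rounds:", sum(trials) / len(trials))
# Scales like K * ln(L): roughly 70 rounds for this toy example.

# A blind search with no oracle must match the whole word at once;
# its expected waiting time is K**L rounds -- hopeless even for L = 9.
print("blind search, expected rounds:", 26 ** len(TARGET))
```

Natural selection has no such oracle: a mutation cannot be frozen in place because it will prove useful later, which is precisely the complaint Ewert and his colleagues are making.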

Debunking another popular myth: “The eye could have evolved in a relatively short period.”

[Image: labelled anatomical diagram of the eye, showing thirty numbered structures, from the cornea, iris and lens at the front to the retina, macula, fovea and optic nerve at the back. Image courtesy of Chabacano and Wikipedia.]
In 1994, Dan-Eric Nilsson and Susanne Pelger of Lund University in Sweden wrote a paper entitled A Pessimistic Estimate of the Time Required for an Eye to Evolve (Proceedings: Biological Sciences, Vol. 256, No. 1345, April 22, 1994, pp. 53-58), in which they estimated the time required for a fully developed lens eye to evolve from a light-sensitive spot to be no more than about 360,000 generations (roughly 360,000 years, on their assumption of one generation per year).
In 2003, the mathematician David Berlinski wrote an incisive critique of this outlandish claim. (See here for Nilsson’s response.) Some of Berlinski’s contentions turned out to rest on a misunderstanding of Nilsson and Pelger’s data, but he scored a significant point when he observed that their paper lacked the mathematical detail one might expect in support of the claim that the eye took only 360,000 years to evolve:
Nilsson and Pelger’s paper contains no computer simulation, and no computer simulation has been forthcoming from them in all the years since its initial publication…
There are two equations in Nilsson and Pelger’s paper, and neither requires a computer for its solution; and there are no others.
Indeed, Nilsson had even admitted as much in correspondence with Berlinski:
You are right that my article with Pelger is not based on computer simulation of eye evolution. I do not know of anyone else who [has] successfully tried to make such a simulation either. But we are currently working on it.
That was in 2001. As far as I am aware, no simulation has since been forthcoming from Nilsson and Pelger, although as we’ll see below, a genetic algorithm developed by an Israeli researcher in 2007 demonstrated that their model was based on wildly optimistic assumptions about evolutionary pathways.
In the meantime, Nilsson and Pelger’s 1994 paper has been gleefully cited by evolutionary biologists as proof that the origin of complex structures is mathematically modelable. Here is how Professor Jerry Coyne describes Nilsson and Pelger’s work in his book, Why Evolution Is True:
We can, starting with a simple precursor, actually model the evolution of the eye and see whether selection can turn that precursor into a more complex eye within a reasonable amount of time. Dan Nilsson and Susanne Pelger of Lund University in Sweden made such a mathematical model, starting with a patch of light-sensitive cells backed by a pigment layer (a retina). They then allowed the tissues around this structure to deform themselves randomly, limiting the amount of change to only 1% of size or thickness at each step. To mimic natural selection, the model accepted only mutations that improved the visual acuity, and rejected those that degraded it.
Within an amazingly short time, the model yielded a complex eye, going through stages similar to the real-animal series described above. The eyes folded inward to form a cup, the cup became capped with a transparent surface, and the interior of the cup gelled to form not only a lens, but a lens with dimensions that produced the best possible image.
Beginning with a flatworm-like eyespot, then, the model produced something like the complex eye of vertebrates, all through a series of tiny adaptive steps – 1,829 of them, to be exact. But Nilsson and Pelger could also calculate how long this process would take. To do this, they made some assumptions about how much genetic variation for eye shape existed in the population that began experiencing selection, and how strongly selection would favor each useful step in eye size. These assumptions were deliberately conservative, assuming that there were reasonable but not large amounts of genetic variation and that natural selection was very weak. Nevertheless, the eye evolved very quickly: the entire process from rudimentary light-patch to camera eye took fewer than 400,000 years.
– Coyne, Jerry A. Why Evolution Is True. 2009. Oxford University Press, p. 155.
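For readers who want to see where Coyne’s figure comes from, the arithmetic behind Nilsson and Pelger’s estimate can be reproduced in a few lines of Python. The parameter values below (heritability, selection intensity and coefficient of variation) are the ones reported in their 1994 paper; this is simply a restatement of their published calculation, not an independent check of its biological assumptions:

```python
import math

# Parameters as reported in Nilsson and Pelger (1994)
steps = 1829          # number of 1% morphological-change steps
step_change = 1.01    # each step alters some dimension by 1%
h2 = 0.5              # heritability
i = 0.01              # intensity of selection
cv = 0.01             # coefficient of variation
per_gen = 1 + h2 * i * cv     # change per generation = 1.00005

total_change = step_change ** steps   # about an 80-million-fold change
generations = math.log(total_change) / math.log(per_gen)
print(f"{total_change:.2e}-fold change in {generations:,.0f} generations")
# -> roughly 364,000 generations, the paper's headline figure
```

Notice that everything in this calculation is gross anatomy: it counts percentage changes in shape and says nothing about the molecular machinery underneath, which is exactly the gap discussed next.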
I’d like to point out here that Coyne’s starry-eyed description of Nilsson and Pelger’s research overlooks a vital point raised by Professor Michael Behe in his article, Molecular Machines: Experimental Support for the Design Inference. Readers will recall that Behe declared:
The relevant steps in biological processes occur ultimately at the molecular level, so a satisfactory explanation of a biological phenomenon such as sight, or digestion, or immunity, must include a molecular explanation. It is no longer sufficient, now that the black box of vision has been opened, for an ‘evolutionary explanation’ of that power to invoke only the anatomical structures of whole eyes, as Darwin did in the 19th century and as most popularizers of evolution continue to do today. Anatomy is, quite simply, irrelevant.
Nilsson and Pelger’s mathematical calculations addressed the evolution of the eye’s anatomy, but they said nothing about the underlying biochemistry. By Behe’s criterion, then, their macroevolutionary model of the evolution of the eye is a failure. Professor James Tour would dismiss it on similar grounds; he would doubtless ask, rhetorically: “Does anyone understand the chemical details behind the macroevolution of the eye?” I hope that Nick Matzke will now concede that this is a reasonable question.
A more skeptical assessment of Nilsson and Pelger’s 1994 paper can be found in an online applied physics thesis by Dov Rhodes, entitled Approximating the Evolution Time of the Eye: A Genetic Algorithms Approach. The thesis makes for fascinating reading; I shall quote a few brief excerpts:
“A paper published in 1994 by the Swedish scientists Nilsson and Pelger [6] gained immediate worldwide fame for describing the evolution process for an eye, and approximating the time required for an eye to evolve from a simple patch that sense[s] electromagnetic radiation. Nilsson and Pelger (NP) outlined an evolutionary path, where by minute improvements on each step a camera-type eye can evolve in approximately 360,000 years, which is extremely fast on an evolutionary time scale…” (p. 1)
“The main problem with the NP model is that although the evolutionary path that it describes might be a legitimate one, it neglects consideration for divergent paths. It is easy to construct a situation in which the best temporary option for the improvement of an eye does not lead towards the development of the globally optimal solution. This idea motivates our alternative approach, the method of genetic algorithms. In this paper we use the genetic algorithm with a simplified (2-dimensional) version of NP’s setup and show the error in their approach. We argue that if their approach is mistaken in the simplified model, it is even farther from reality in the full evolutionary setting.” (p. 2)
“Although the paraboloid landscape guarantees convergence, the GA is still a probabilistic algorithm and thus will not always converge quickly. As in evolution, the most efficient path is not necessarily the one taken. This fact suggests that our already conservative value of lambda = 5.41 would be even larger if compared with a real deterministic algorithm such as the NP (Nilsson-Pelger) model. Even though their computation accounts to some extent for the average probability of evolutionary development over time, it fails to consider the countless different evolutionary paths, and instead chooses just one.
“Rather than 360 thousand generations, a reasonable lower bound should be at least 5*360,000 = 1.8*10^6 generations, and if our previous speculations have merit, an order of magnitude higher would ramp up the estimate to around 18 million generations. Future experiments that would be useful for improving the accuracy of our results might involve varying the mutation parameter, and most importantly letting algorithms run for longer, allowing the lower bound for convergence to be pushed even higher.” (p. 15)
What Rhodes’ thesis demonstrates is that the scenario underlying Nilsson and Pelger’s 1994 estimate of how long the eye took to evolve is more like a case of intelligently guided evolution than Darwinian evolution. As Rhodes puts it: “Even though their computation accounts to some extent for the average probability of evolutionary development over time, it fails to consider the countless different evolutionary paths, and instead chooses just one.”
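To illustrate the difference Rhodes is driving at, here is a toy Python sketch of my own; the step count and mutation probability are arbitrary assumptions, and the landscape is far simpler than the two-dimensional setup in his thesis. A deterministic walk that always takes the single best step (the Nilsson-Pelger style of reckoning) reaches the optimum in a fixed number of generations, while a mutation-and-selection walk over the very same landscape takes a variable, and on average longer, number of generations; the ratio between the two is the kind of multiplicative slowdown that Rhodes’ lambda measures:

```python
import random
import statistics

STEPS = 100  # hypothetical number of small improvements to the optimum

def deterministic_generations():
    """Nilsson-Pelger-style count: one fixed path, one improving
    step taken every generation, so exactly STEPS generations."""
    return STEPS

def stochastic_generations(rng, p_improve=0.5):
    """Toy mutation-selection walk: each generation a random mutation
    arises and is kept only if it improves fitness, so progress
    toward the optimum is intermittent rather than guaranteed."""
    position, generations = 0, 0
    while position < STEPS:
        generations += 1
        if rng.random() < p_improve:   # a beneficial mutation fixes
            position += 1
    return generations

rng = random.Random(1)
runs = [stochastic_generations(rng) for _ in range(1000)]
slowdown = statistics.mean(runs) / deterministic_generations()
print(f"mean slowdown over the deterministic path: {slowdown:.2f}x")
# With p_improve = 0.5 the factor is about 2; on a rugged landscape
# with divergent paths, as Rhodes argues, it would be larger still.
```

Even on a smooth landscape with no divergent paths at all, the stochastic walk is slower than the deterministic one; add the competing pathways Rhodes describes, and the factor only grows.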