
Friday 31 July 2015

It's Design all the way down III

The Puzzle of Perfection, Thirty Years On

The invincible scroll II


On Jesus' resurrection body.

After Jesus’ Resurrection, Was His Body Flesh or Spirit?


The Bible’s answer

The Bible says that Jesus “was put to death in the flesh but made alive [resurrected] in the spirit.”—1 Peter 3:18; Acts 13:34; 1 Corinthians 15:45; 2 Corinthians 5:16.
Jesus’ own words showed that he would not be resurrected with his flesh-and-blood body. He said that he would give his “flesh in behalf of the life of the world,” as a ransom for mankind. (John 6:51; Matthew 20:28) If he had taken back his flesh when he was resurrected, he would have canceled that ransom sacrifice. This could not have happened, though, for the Bible says that he sacrificed his flesh and blood “once for all time.”—Hebrews 9:11, 12.

If Jesus was raised up with a spirit body, how could his disciples see him?

  • Spirit creatures can take on human form. For example, angels who did this in the past even ate and drank with humans. (Genesis 18:1-8; 19:1-3) However, they still were spirit creatures and could leave the physical realm.—Judges 13:15-21.
  • After his resurrection, Jesus also assumed human form temporarily, just as angels had previously done. As a spirit creature, though, he was able to appear and disappear suddenly. (Luke 24:31; John 20:19, 26) The fleshly bodies that he materialized were not identical from one appearance to the next. Thus, even Jesus’ close friends recognized him only by what he said or did.—Luke 24:30, 31, 35; John 20:14-16; 21:6, 7.
  • When Jesus appeared to the apostle Thomas, he took on a body with wound marks. He did this to bolster Thomas’ faith, since Thomas doubted that Jesus had been raised up.—John 20:24-29.

Thursday 30 July 2015

Decanonising Science II

Should we have faith in science? Part II: peer-reviewed science papers
Thursday, February 26, 2015 - 12:51

Kirk Durston



The primary way scientific discoveries and advances are disseminated is through peer-reviewed papers published in scientific journals. The first step is to submit a paper to a journal. Those that survive preliminary filtering by the editor are sent out to be reviewed by qualified scientists in the field. On the basis of the reviewers’ recommendations, the paper is accepted or rejected. Only a fraction of papers submitted for publication make it through this peer-review process and are published.

One would hope that such a process would justify a high level of confidence in scientific publications, but recent findings suggest that our faith in peer-reviewed publications in mainstream journals of science may be on somewhat shaky ground.

The journal Nature, in a paper calling for increased standards in pre-clinical research, revealed that out of 53 papers presenting ‘landmark’ published findings in the field of haematology and oncology, only 6 could be confirmed by subsequent laboratory teams. For the roughly 89% of papers whose results could not be reproduced, it was found that blinded control-group analysis was inadequate, or that data had been selected to support the hypothesis while other data were set aside.

Worse still, some of the papers that could not be experimentally reproduced, launched ‘an entire field, with hundreds of secondary publications that expanded on elements of the original observation, but did not actually seek to confirm or falsify its fundamental basis’.

Hundreds of other peer-reviewed, published science papers based on faulty initial papers!

Nature reported in October 2011 that although the number of submissions had increased by 44% over the past ten years, the number of retractions had increased by roughly 900%.

Austin Hughes, in a paper published in the Proceedings of the National Academy of Sciences that focuses on the origin of adaptive phenotypes, laments, ‘Thousands of papers are published every year claiming evidence of adaptive evolution on the basis of computational analyses alone, with no evidence whatsoever regarding the phenotypic effects of allegedly adaptive mutations.’ He concludes that ‘This vast outpouring of pseudo-Darwinian hype has been genuinely harmful to the credibility of evolutionary biology as a science.’ Jerry Fodor and Massimo Piattelli-Palmarini write in New Scientist,

"Much of the vast neo-Darwinian literature is distressingly uncritical. The possibility that anything is seriously amiss with Darwin’s account of evolution is hardly considered. … The methodological skepticism that characterizes most areas of scientific discourse seems strikingly absent when Darwinism is the topic."

How can we distinguish the good papers from the poor? This can be very difficult without actually attempting to reproduce their findings. Short of that, apply the same critical thinking skills and healthy skepticism to scientific papers that you do to political, historical or religious claims. 21st-century science can often be heavily influenced by poor experimental practices, unproven computational models, political agendas, competition for funding, and scientism (atheism dressed up as science). When going over a paper, ask questions like these: How large was the data set? What sort of statistical analysis was performed? Are there other papers that independently support or disconfirm these findings? What is not being discussed? One thing is for sure: don’t accept something simply because ‘hundreds’ or even ‘thousands’ of papers say so, especially if Darwinian evolution is the topic. Practice critical thinking with a question in the back of your mind: ‘Is this one of those papers that will be retracted?’

Read Part III

Further reading:

http://www.nature.com/nature/journal/v483/n7391/full/483531a.html#t1

http://www.nature.com/news/publishing-the-peer-review-scam-1.16400

http://www.nytimes.com/2012/04/17/science/rise-in-scientific-journal-retractions-prompts-calls-for-reform.html?ref=science

http://dingo.sbs.arizona.edu/~massimo/publications/PDF/JF_MPP_darwinisms...


http://www.nytimes.com/2015/06/16/science/retractions-coming-out-from-under-science-rug.html?

Decanonising Science

Science, Now Under Scrutiny Itself
By BENEDICT CAREY, JUNE 15, 2015

The crimes and misdemeanors of science used to be handled mostly in-house, with a private word at the faculty club, barbed questions at a conference, maybe a quiet dismissal. On the rare occasion when a journal publicly retracted a study, it typically did so in a cryptic footnote. Few were the wiser; many retracted studies have been cited as legitimate evidence by others years after the fact.


Retracted Scientific Studies: A Growing List

“Until recently it was unusual for us to report on studies that were not yet retracted,” said Dr. Ivan Oransky, an editor of the blog Retraction Watch, the first news media outlet to report that the study had been challenged. But new technology and a push for transparency from younger scientists have changed that, he said. “We have more tips than we can handle.”

The case has played out against an increase in retractions that has alarmed many journal editors and authors. Scientists in fields as diverse as neurobiology, anesthesia and economics are debating how to reduce misconduct, without creating a police-state mentality that undermines creativity and collaboration.

“It’s an extraordinary time,” said Brian Nosek, a professor of psychology at the University of Virginia, and a founder of the Center for Open Science, which provides a free service through which labs can share data and protocols. “We are now seeing a number of efforts to push for data repositories to facilitate direct replications of findings.”

But that push is not universally welcomed. Some senior scientists have argued that replication often wastes resources. “Isn’t reproducibility the bedrock of science? Yes, up to a point,” the cancer biologist Mina Bissell wrote in a widely circulated blog post. “But it is sometimes much easier not to replicate than to replicate studies,” especially when the group trying to replicate does not have the specialized knowledge or skill to do so.

The experience of Retraction Watch provides a rough guide to where this debate is going and why. Dr. Oransky, who has a medical degree from New York University, and Adam Marcus, both science journalists, discovered a mutual interest in retractions about five years ago and founded the blog as a side project. They had, and still have, day jobs: Mr. Marcus, 46, is the managing editor of Gastroenterology & Endoscopy News, and Dr. Oransky, 42, is the editorial director of MedPage Today (he will take a position as distinguished writer in residence at N.Y.U. later this year).

In its first year, the blog broke a couple of retraction stories that hit the mainstream news media — including a case involving data faked by an anesthesiologist who later served time for health care fraud. The site now has about 150,000 unique visitors a month, about half from outside the United States.

Dr. Oransky and Mr. Marcus are partisans who editorialize sharply against poor oversight and vague retraction notices. But their focus on evidence over accusations distinguishes them from watchdog forerunners who sometimes came off as ad hominem cranks. Last year, their site won a $400,000 grant from the John D. and Catherine T. MacArthur Foundation, to build out their database, and they plan to work with Dr. Nosek to manage the data side.

Their data already tell a story.

The blog has charted a 20 to 25 percent increase in retractions across some 10,000 medical and science journals in the past five years: 500 to 600 a year today from 400 in 2010. (The number in 2001 was 40, according to previous research.) The primary causes of this surge are far from clear. The number of papers published is higher than ever, and journals have proliferated, Dr. Oransky and other experts said. New tools for detecting misconduct, like plagiarism-sifting software, are widely available, so there’s reason to suspect that the surge is a simple product of better detection and larger volume.



The increasing challenges to the veracity of scientists’ work gained widespread attention recently when a study by Michael LaCour on the effect of political canvassing on opinions of same-sex marriage was questioned and ultimately retracted.
Still, the pressure to publish attention-grabbing findings is stronger than ever, these experts said — and so is the ability to “borrow” and digitally massage data. Retraction Watch’s records suggest that about a third of retractions are because of errors, like tainted samples or mistakes in statistics, and about two-thirds are because of misconduct or suspicions of misconduct.

The most common reason for retraction because of misconduct is image manipulation, usually of figures or diagrams, a form of deliberate data massaging or, in some cases, straight plagiarism. In their dissection of the LaCour-Green paper, the two graduate students — David Broockman, now an assistant professor at Stanford, and Joshua Kalla, at California-Berkeley — found that a central figure in Mr. LaCour’s analysis looked nearly identical to one from another study. This and other concerns led Dr. Green, who had not seen any original data, to request a retraction. (Mr. LaCour has denied borrowing anything.)

Data massaging can take many forms. It can mean simply excluding “outliers” — unusually high or low data points — from an analysis to generate findings that more strongly support the hypothesis. It also includes moving the goal posts: that is, mining the data for results first, and then writing the paper as if the experiment had been an attempt to find just those effects. “You have exploratory findings, and you’re pitching them as ‘I knew this all along,’ as confirmatory,” Dr. Nosek said.
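The one-sided outlier exclusion described above is easy to demonstrate with a toy example (pure-null data, so any "effect" is manufactured, and the sample sizes here are arbitrary): quietly dropping the lowest observations from a zero-mean sample shifts the reported mean upward every time.

```python
import random
import statistics

random.seed(0)  # reproducible toy data

# 100 draws from a true null distribution: mean 0, no real effect.
sample = [random.gauss(0, 1) for _ in range(100)]

# "Data massaging": silently discard the 5 lowest points as "outliers".
trimmed = sorted(sample)[5:]

print(f"honest mean:  {statistics.mean(sample):+.3f}")
print(f"trimmed mean: {statistics.mean(trimmed):+.3f}")  # always higher
```

Because only low values are removed, the trimmed mean exceeds the honest one no matter what the random draw produced, which is exactly why undisclosed exclusions generate findings that "more strongly support the hypothesis."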

The second leading cause is plagiarizing text, followed by republishing — presenting the same results in two or more journals.

The fourth category is faked data. No one knows the rate of fraud with any certainty. In a 2011 survey of more than 2,000 psychologists, about 1 percent admitted to falsifying data. Other studies have estimated a rate of about 2 percent. Yet one offender can do a lot of damage. The Dutch social psychologist Diederik Stapel published dozens of studies in major journals for nearly a decade based on faked data, investigators at the universities where he had worked concluded in 2011. Suspicions were first raised by two of his graduate students.

“If I’m a scientist and I fabricate data and put that online, others are going to assume this is accurate data,” said John Budd, a professor at the University of Missouri and an author of one of the first exhaustive analyses of retractions, in 1999. “There’s no way to know” without inside information.

Here, too, Retraction Watch provides a possible solution. Many of the egregious cases that it posts come from tips. The tipsters are a growing cadre of scientists, specialized journalists and other experts who share the blog’s mission — and are usually not insiders working directly with a suspected offender. One of the blog’s most effective allies has been Dr. Steven Shafer, the current editor of the journal Anesthesia & Analgesia who is now at Stanford, whose aggressiveness in re-examining published papers has led to scores of retractions. The field of anesthesia is a leader in retractions, largely because of Dr. Shafer’s efforts, Mr. Marcus and Dr. Oransky said. (Psychology is another leader, largely because of Dr. Stapel.)

Other cases emerge from issues raised at post-publication sites, where scientists dig into papers, sometimes anonymously. Dr. Broockman, one of the two who challenged the LaCour-Green paper, had first made public some of his suspicions anonymously on a message board called poliscirumors.com. Mr. Marcus said Retraction Watch closely followed a similar site, PubPeer.com. “When it first popped up, a lot of people assumed it would be an ax-grinding place,” he said. “But while some contributors have overstepped, I think it has had a positive impact on the literature.”

What these various tipsters, anonymous post-reviewers and whistle-blowers have in common is a nose for data that looks too good to be true, he said. Sites like Retraction Watch and PubPeer give them a place to discuss their concerns and flag fishy-looking data.

These, along with data repositories like Dr. Nosek’s, may render moot the debate over how to exhaustively replicate findings. That burden is likely to be eased by the community of bad-science bloodhounds who have more and more material to work with when they pick up a foul scent.

“At this point, we see ourselves as part of an ecosystem that is advocating for increased transparency,” Dr. Oransky said. “And that ecosystem is growing.”

Wednesday 29 July 2015

Darwinism vs. the real world IV

Computing the "Best Case" Probability of Proteins from Actual Data, and Falsifying a Prediction of Darwinism

Biological life requires thousands of different protein families, about 70 percent of which are "globular" proteins, with a three-dimensional shape that is unique to each family of proteins. An illustration is shown in the picture at the top of this post. This 3D shape is necessary for a particular biological function and is determined by the sequence of the different amino acids that make up that protein. In other words, it is not biology that determines the shape, but physics. Sequences that produce stable, functional 3D structures are so rare that scientists today do not attempt to find them using random sequence libraries. Instead, they use information they have obtained from reverse-engineering biological proteins to intelligently design artificial proteins.
Indeed, our 21st-century supercomputers are not powerful enough to crunch the variables and locate novel 3D structures. Nonetheless, a foundational prediction of neo-Darwinian theory is that a ploddingly slow evolutionary process consisting of genetic drift, mutations, insertions, and deletions must be able to "find" not just one, but thousands of sequences pre-determined by physics that will have different stable, functional 3D structures. So how does this falsifiable prediction hold up when tested against real data? As ought to be the case in science, I have made my program available so that you can run your own data and verify for yourself the kinds of probabilities these protein families represent.
This program can compute an upper limit for the probability of obtaining a protein family from a wealth of actual data contained in the Pfam database. The first step computes the lower limit for the functional complexity or functional information required to code for a particular protein family, using a method published by Durston et al. This value for I(Ex) can then be plugged into an equation published by Hazen et al. in order to solve the probability M(Ex)/N of "finding" a functional sequence in a single trial.
I downloaded 3,751 aligned sequences for the Ribosomal S7 domain, part of a universal protein essential for all life. Running the data through the program revealed that the lower limit for the amount of functional information required to code for this domain is 332 Fits (Functional Bits). The extreme upper limit for the number of sequences that might be functional for this domain is around 10^92. In a single trial, the probability of obtaining a sequence that would be functional for the Ribosomal S7 domain is 1 chance in 10^100 ... and this is only for a 148-amino-acid structural domain, much smaller than an average protein.
For another example, I downloaded 4,986 aligned sequences for the ABC-3 family of proteins and ran them through the program. The results indicate that the probability of obtaining, in a single trial, a functional ABC-3 sequence is around 1 chance in 10^128. This method ignores pairwise and higher-order relationships within the sequence, which would reduce the number of functional sequences, and hence the probability, by many more orders of magnitude -- so this gives us a best-case estimate.
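Both calculations above rest on Hazen et al.'s relation I(Ex) = -log2(M(Ex)/N), where N is the number of possible sequences and M(Ex) is the number that are functional. Inverting it turns a Fits value into a per-trial probability, which can be checked against the quoted S7 numbers. A minimal sketch (the function name is illustrative, not taken from the actual program):

```python
import math

def fits_to_log10_probability(fits):
    """Invert Hazen's relation I(Ex) = -log2(M(Ex)/N).
    The per-trial probability of hitting a functional sequence is
    M/N = 2**(-I); return its base-10 logarithm to avoid underflow."""
    return -fits * math.log10(2)

# Ribosomal S7 domain: 332 Fits (the lower limit quoted above).
log10_p = fits_to_log10_probability(332)   # ~ -99.9, i.e. 1 in 10^100

# Cross-check the ~10^92 figure: a 148-residue domain has
# N = 20**148 possible sequences, so M = N * 2**(-332).
log10_N = 148 * math.log10(20)             # ~192.6
log10_M = log10_N + log10_p                # ~92.6

print(f"1 chance in 10^{-log10_p:.0f}; ~10^{log10_M:.1f} functional sequences")
```

Working in log10 rather than with the raw numbers is essential here, since 2**(-332) underflows ordinary floating point.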
What are the implications of these results, obtained from actual data, for the fundamental prediction of neo-Darwinian theory mentioned above? If we assume 10^30 life forms with a fast replication rate of 30 minutes and a huge genome with a very high mutation rate over a period of 10 billion years, an extreme upper limit for the total number of mutations in all of life's history would be around 10^43. Unfortunately, a protein domain such as Ribosomal S7 would require a minimum average of 10^100 trials, a factor of about 10^57 more than the entire theoretical history of life could provide -- and this is only for one domain. Forget about "finding" an average-sized protein, not to mention thousands.
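The mutation-budget arithmetic above can be sanity-checked in a few lines. The inputs (10^30 concurrent organisms, 30-minute generations, 10 billion years) are the text's deliberately generous assumptions, not measured values; taking them at face value gives a replication count within an order of magnitude of the quoted 10^43 mutations, and a shortfall of fifty-odd orders of magnitude either way.

```python
import math

# The deliberately generous upper-bound inputs stated above.
organisms = 1e30              # concurrent life forms
years = 1e10                  # 10 billion years
gens_per_year = 365 * 24 * 2  # one replication every 30 minutes

replications = organisms * years * gens_per_year
log10_trials = math.log10(replications)    # ~44

# Trials needed for one Ribosomal-S7-like domain: ~10^100.
shortfall = 100 - log10_trials             # ~56 orders of magnitude
print(f"budget ~10^{log10_trials:.0f}, shortfall ~10^{shortfall:.0f}")
```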
As we all know from probabilities, you can get lucky once, but not thousands of times. This definitively falsifies the fundamental prediction of Darwinian theory that evolutionary processes can "find" functional protein families. A theory that has an essential prediction thoroughly falsified by the data should have no place in science.
Could natural selection come to the rescue? As we know from genetic algorithms, an evolutionary "search" will only work for hill-climbing problems, not for "needle in a haystack" problems. There are small proteins that require such low levels of functional information to perform simple binding tasks that they form a nice hill-climbing problem that can be easily located in a search. This is not the case, however, for the vast majority of protein families. As real data shows, the probability of finding a functional sequence for one average protein family is so low, there is virtually zero chance of obtaining it anywhere in this universe over its entire history -- never mind finding thousands of protein families.
What are the implications for intelligent design science? A testable, falsifiable hypothesis of intelligent design can be stated as follows:
A unique attribute of an intelligent mind is the ability to produce effects requiring a statistically significant level of functional information.
Given the above testable hypothesis, if we observe an effect that requires a statistically significant level of functional information, we can conclude there is an intelligent mind behind the effect. The average protein family requires a statistically significant level of functional, or prescriptive, information. Therefore, the genomes of life have the fingerprints of an intelligent source all over them.

A line in the sand XVIII




Tom Wolfe calls out the Darwinian Gestapo

In The New Yorker, Tom Wolfe Compares Persecution of Intelligent Design Advocates to the "Spanish Inquisition"

Tuesday 28 July 2015

Origin of life science's blind alley

For the Origin of Life, on Earth or Elsewhere, "Ingredients and Conditions" Aren't Enough
David Klinghoffer July 24, 2015 12:45 PM 


You carefully set out the implements and ingredients on the kitchen counter. Two cans of tuna, bag of egg noodles, block of Cheddar cheese, onion, frozen green peas, condensed cream of mushroom soup, can of sliced mushrooms, a cup of potato chips (for the topping).

Lined up at the ready, a mixing bowl, baking pan, and a pot with water for the noodles. Also a can opener, a grater for the cheese, colander for the pasta, cutting board and knife to chop the onion. Set one burner to high, and the oven to 425 degrees F.

Your family is hungry, but everything is in place! The easy-to-follow recipe gives a prep time of 15 minutes, and 20 more to cook. Of course that's approximate.

Now sit back and relax. How long before these items assemble themselves into a tuna casserole? Pour yourself a glass of wine and watch what happens.

Oh, you're concerned that the stuff has no means of coming together physically? Well, as days pass and you continue to stare intently at your unassembled casserole, perhaps that promised Seattle mega-earthquake comes along and jostles things around.

The cheese collides with the grater. A tuna can knocks into the can opener. The water sloshes in its pot and some gets on the unopened bag of pasta. Throw in a few aftershocks for good measure.

Ridiculous? No more so than stories that are a regular feature of science news that expect incomparably greater wonders to follow automatically when the "ingredients" of life, or some of them, appear to be in place -- whether on a distant, Earth-like exoplanet or on the early Earth itself. This week's pairing comes from NASA and Nature.

NASA reports the discovery of a new world, Kepler-452b some 1,400 light years away, that is seemingly Earth-like in key respects, orbiting in the "habitable zone" around a star like our sun. From Science Daily:

"We can think of Kepler-452b as an older, bigger cousin to Earth, providing an opportunity to understand and reflect upon Earth's evolving environment," said Jon Jenkins, Kepler data analysis lead at NASA's Ames Research Center in Moffett Field, California, who led the team that discovered Kepler-452b. "It's awe-inspiring to consider that this planet has spent 6 billion years in the habitable zone of its star; longer than Earth. That's substantial opportunity for life to arise, should all the necessary ingredients and conditions for life exist on this planet." [Emphasis added.]

Meanwhile on Earth, we're told that the origin of complex life from simpler forms must be even more of a snap than previously assumed. Earlier theorizing said it required a generous infusion of oxygen in the early seas. Now that addition must be seen as more modest. Again, from Science Daily:

If oxygen was a driver of the early evolution of animals, only a slight bump in oxygen levels facilitated it, according to a multi-institutional research team that includes a Virginia Tech geoscientist.

The discovery, published in the journal Nature, calls into question the long held theory that a dramatic change in oxygen levels might have been responsible for the appearance of complicated life forms like whales, sharks, and squids evolving from less complicated life forms, such as microorganisms, algae, and sponges.

The researchers discovered oxygen levels rose in the water and atmosphere, but at lower levels than was thought necessary to trigger life changes.

"We suggest that about 635 million to 542 million years ago, Earth passed some low, but critical, threshold in oxygenation for animals," said Benjamin Gill, an assistant professor of geoscience in the College of Science. "That threshold was in the range of a 10 to 40 percent increase, and was the second time in Earth's history that oxygen levels significantly rose."

Do you follow the logic? If oxygen was "a driver of the early evolution of animals," then only a "slight bump" was needed since that's all that was available.

We've said many times before that whether on our planet or any other, "ingredients and conditions" fall wildly short of being enough to explain the development of life from non-life, or complex from simple.

ENV observed recently:

Visualize an exoplanet far away: dynamic, comfortable, yet lifeless. It has water, plate tectonics, volcanoes, an atmosphere and all the ingredients for life -- but no life. What would be the primary factor distinguishing it from Earth? A new paper in PLOS Biology suggests that its chief drawback, all things being equal, would be a lack of complex specified information.

As for the oxygen idea, that's hopeless. It isn't merely oxygen, but information, that's needed. From our post "Cambrian Animals? Just Add Oxygen":

Once again, we see Darwinists dodging the main problem with the Cambrian explosion: the sudden appearance of biological information necessary to build tissues, organs, limbs, eyes, systems, and body plans. This is the focus of most of Part II of Stephen Meyer's book Darwin's Doubt. Mystically, they imagine animals as eager to evolve but, like racehorses at the gates, held back by environmental barriers.

Actually, that tuna casserole stands a better chance than either of these notions -- expecting life based on "ingredients and conditions" -- since at least the recipe is known. Identifying the ingredients and lining them up in a working kitchen is different from knowing how they're supposed to come together. If life has a recipe, we are utterly ignorant of what that might be, otherwise we would have sparked life ourselves in a laboratory by now.

Your casserole is a complex structure, in the sense of being an unlikely assemblage, but it is also specified or functional. (The function is to serve as a tasty and nutritious meal, more so than the unprepared ingredients.) So too with the structures of life, which in addition give evidence of irreducible complexity.


If you're hungry now, do you think it's only a matter of time before the table can be set and the food served? With these science news items, that is the level of absurdity we're talking about.