Search This Blog

Saturday, 1 August 2015

Darwinism vs. the real world V

Cardiovascular Function: What Happens When Real Numbers Are Wrong?


Howard Glicksman July 31, 2015 1:57 PM





Editor's note: Physicians have a special place among the thinkers who have elaborated the argument for intelligent design. Perhaps that's because, more than evolutionary biologists, they are familiar with the challenges of maintaining a functioning complex system, the human body. With that in mind, Evolution News & Views is delighted to present this series, "The Designed Body." Dr. Glicksman practices palliative medicine for a hospice organization.
Due to the laws of nature, the body must have enough energy for its trillions of cells to work properly. The body won't function very well if its cells don't have enough oxygen (O2).
Evolutionary biologists claim that the organs described so far in this series, and the systems that control them, must have come about by chance and the laws of nature alone. But their theory seems to account only for how life looks and not how it actually works to stay alive under the laws of nature. Experience teaches that real numbers have real consequences when it comes to life and death.
Based on what we know about how the body actually works, our earliest ancestors had to be able to provide at least 3,500 mL/min of O2, mainly to their muscles and heart, to be able to run fast enough and fight hard enough to win the battle for survival. That would have required their lungs to have a rapid enough airflow, a large enough volume, and an efficient enough gas exchange to bring in enough O2. It would also have required that they have enough iron to make enough hemoglobin to be able to carry enough O2 in the blood. And finally, their cardiac output (CO) needed to be at least 25 L/min to sustain the kind of activity levels needed to hunt rather than be hunted.
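A side note on how these two headline figures relate (my connection; the article itself just states the numbers): by the standard Fick principle of physiology, oxygen consumption equals cardiac output times the amount of O2 the tissues extract from each litre of blood. With an arteriovenous extraction of roughly 140 mL of O2 per litre, a typical value at maximal exercise, a CO of 25 L/min delivers the required 3,500 mL/min:

```latex
% Fick principle; the 140 mL/L extraction figure is an illustrative assumption
\dot{V}_{\mathrm{O_2}} = \mathrm{CO} \times (C_a\mathrm{O_2} - C_v\mathrm{O_2})
\approx 25\ \tfrac{\mathrm{L}}{\mathrm{min}} \times 140\ \tfrac{\mathrm{mL\,O_2}}{\mathrm{L}}
= 3{,}500\ \tfrac{\mathrm{mL\,O_2}}{\mathrm{min}}
```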
As I've noted in previous articles, lung function, hemoglobin production, and cardiac output are controlled by irreducibly complex systems, each consisting of sensors, integrators, and effectors that must also inherently know what is needed for survival. I call this natural survival capacity, because the systems involved must naturally have the capacity to keep a specific chemical or physical parameter within a certain range to allow for survival. We have looked at what happens to the body when its lung function and hemoglobin production do not measure up to what is needed. Now we will start to look at what happens when cardiac function is not up to snuff.
At rest the average male needs about 250 mL/min of O2 to keep all of his organs working properly, and any increase in activity requires more. Walking slowly requires 500 mL/min of O2; walking quickly, 1,000 mL/min; moderate jogging, 2,000 mL/min; and fast running, 3,500 mL/min of O2. Since we know that the CO has to be at least 25 L/min for maximum activity, we can figure out what the minimum CO levels would have to be for lesser activity levels. We can do this by multiplying 25 L/min by the ratio of the lower and maximum O2 consumption. So to jog at a moderate pace, the minimum CO would have to be 25 x 2,000/3,500 = 14.3 L/min. To walk quickly would take at least a CO of 7.2 L/min, to walk slowly a CO of 3.6 L/min, and to stay at rest would need a CO of 1.8 L/min.
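As a quick check on the arithmetic above, here is a minimal Python sketch (mine, not the author's; it uses only the figures quoted in the paragraph) that scales the 25 L/min maximum by each activity's share of maximal O2 consumption:

```python
# Minimum cardiac output for each activity level, scaled from the
# maximum CO by the ratio of O2 consumption (figures from the article).
MAX_CO = 25.0     # L/min at maximal activity
MAX_VO2 = 3500.0  # mL/min of O2 at maximal activity

activities = {
    "rest": 250,
    "slow walk": 500,
    "quick walk": 1000,
    "moderate jog": 2000,
    "fast run": 3500,
}

for name, vo2 in activities.items():
    min_co = MAX_CO * vo2 / MAX_VO2
    print(f"{name:>12}: minimum CO ~ {min_co:.1f} L/min")
# rest 1.8, slow walk 3.6, quick walk 7.1 (the article rounds this
# up to 7.2), moderate jog 14.3, fast run 25.0
```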
It is important to note here that these are real numbers that reflect real life and the laws of nature. No matter what evolutionary biologists say about how matter must have organized itself into the complex systems we know are needed for life, medical science tells you that if you don't have a CO of 7.2 L/min, you can't walk very quickly, if you don't have a CO of 3.6 L/min, it would be very difficult for you to walk even slowly, and if you don't have a CO of at least 1.8 L/min, you are probably dead. Certain parameters of cardiac function had to be met for our ancestors to survive within the laws of nature, and no expenditure of imaginary effort can deny this fact.
When real numbers lead to chronic debility with respect to the lungs, the problems usually involve ventilation and/or gas exchange. But when dealing with chronic debility and the heart, we usually encounter four different problems. Each condition, individually, is capable of causing significant debility, but in real life they often occur in combination. Just as adding a gas exchange problem to a ventilation problem can more quickly lead to worsening debility from pulmonary dysfunction, so too a combination of more than one of these cardiac conditions can quickly lead to significant weakness and a limited ability to be active and manage independently.
The commonest heart condition in developed countries, and what most people think of when they hear someone has heart trouble, is coronary artery disease. Even though the heart pumps blood throughout the body, it must also supply adequate blood flow to itself so it can do its job. As the blood flows out of the left ventricle through the aortic valve, the coronary arteries branch off just above the valve and run back over the surface of the heart. The heart is the hardest working muscle in the body, and when its blood supply is compromised this can lead to significant debility and even death.
Another common cardiac condition is valvular heart disease. The "V"-shaped one-way valves between the atria and the ventricles, and between the ventricles and their outflow tracts, are structured in a way that allows them, when open, to facilitate the forward movement of blood and, when closed, to prevent blood from going backward. The efficiency of cardiac function depends not only on adequate coronary blood flow, but also on properly working valves. A valve can't be too tight, slowing forward blood flow, or too lax, allowing backward flow.
When the heart cannot meet the metabolic needs of the body it is said to be in heart failure. This third common cardiac condition can involve either the left or the right side of the heart alone, or both at the same time. In addition, left ventricular failure can be systolic, where reduced muscle contractility leads to weaker pumping action, and/or diastolic, where increased muscle stiffness reduces relaxation and the filling of the ventricle with blood. Both coronary artery and valvular heart disease are common causes of heart failure.
The fourth common cardiac condition is cardiac arrhythmia. The sino-atrial node, the natural pacemaker in the right atrium, dominates the other excitable cells in the heart: it controls the heart rate and initiates coordinated atrial contraction, and the conducting system makes sure that coordinated ventricular contraction takes place soon afterward. Any disruption or short-circuiting of this signal formation, impulse conduction, or coordinated muscle contraction can lead to significant debility and even death. All three of the conditions mentioned above, and other disorders, can predispose the heart to cardiac arrhythmias.
When real numbers lead to functional problems of the heart, these are usually the four main conditions that contribute to the situation. In the next few articles we will take a closer look at each of them.

Friday, 31 July 2015

It's Design all the way down III

The Puzzle of Perfection, Thirty Years On

The invincible scroll II


On Jesus's resurrection body.

After Jesus’ Resurrection, Was His Body Flesh or Spirit?


The Bible’s answer

The Bible says that Jesus “was put to death in the flesh but made alive [resurrected] in the spirit.”—1 Peter 3:18; Acts 13:34; 1 Corinthians 15:45; 2 Corinthians 5:16.
Jesus’ own words showed that he would not be resurrected with his flesh-and-blood body. He said that he would give his “flesh in behalf of the life of the world,” as a ransom for mankind. (John 6:51; Matthew 20:28) If he had taken back his flesh when he was resurrected, he would have canceled that ransom sacrifice. This could not have happened, though, for the Bible says that he sacrificed his flesh and blood “once for all time.”—Hebrews 9:11, 12.

If Jesus was raised up with a spirit body, how could his disciples see him?

  • Spirit creatures can take on human form. For example, angels who did this in the past even ate and drank with humans. (Genesis 18:1-8; 19:1-3) However, they still were spirit creatures and could leave the physical realm.—Judges 13:15-21.
  • After his resurrection, Jesus also assumed human form temporarily, just as angels had previously done. As a spirit creature, though, he was able to appear and disappear suddenly. (Luke 24:31; John 20:19, 26) The fleshly bodies that he materialized were not identical from one appearance to the next. Thus, even Jesus’ close friends recognized him only by what he said or did.—Luke 24:30, 31, 35; John 20:14-16; 21:6, 7.
  • When Jesus appeared to the apostle Thomas, he took on a body with wound marks. He did this to bolster Thomas’ faith, since Thomas doubted that Jesus had been raised up.—John 20:24-29.

Thursday, 30 July 2015

Decanonising Science II

Should we have faith in science? Part II: peer-reviewed science papers
Thursday, February 26, 2015 - 12:51

Kirk Durston



The primary way scientific discoveries and advances are disseminated is through peer-reviewed papers published in scientific journals. The first step is to submit a paper to a journal. Those that survive preliminary filtering by the editor are sent out to be reviewed by qualified scientists in the field. On the basis of the reviewers’ recommendations, the paper is accepted or rejected. Only a fraction of papers submitted for publication make it through this peer-review process and are published.

One would hope that such a process would justify a high level of confidence in scientific publications, but recent findings suggest that our faith in peer-reviewed publications in mainstream journals of science may be on somewhat shaky ground.

The journal Nature, in a paper calling for increased standards in pre-clinical research, revealed that out of 53 papers presenting ‘landmark’ published findings in the field of haematology and oncology, only 6 could be confirmed by subsequent laboratory teams. For the 89% of papers that failed to have their results reproduced, it was found that blind control group analyses were inadequate, or that data had been selected to support the hypothesis while other data were set aside.

Worse still, some of the papers that could not be experimentally reproduced launched ‘an entire field, with hundreds of secondary publications that expanded on elements of the original observation, but did not actually seek to confirm or falsify its fundamental basis’.

Hundreds of other peer-reviewed, published science papers based on faulty initial papers!

Nature reported in October 2011 that although the number of submissions had increased by 44% over the past ten years, the number of retractions had increased by roughly 900%.

Austin Hughes, in a paper published in the Proceedings of the National Academy of Sciences focusing on the origin of adaptive phenotypes, laments, ‘Thousands of papers are published every year claiming evidence of adaptive evolution on the basis of computational analyses alone, with no evidence whatsoever regarding the phenotypic effects of allegedly adaptive mutations.’ He concludes that ‘This vast outpouring of pseudo-Darwinian hype has been genuinely harmful to the credibility of evolutionary biology as a science.’ Jerry Fodor and Massimo Piattelli-Palmarini write in New Scientist,

"Much of the vast neo-Darwinian literature is distressingly uncritical. The possibility that anything is seriously amiss with Darwin’s account of evolution is hardly considered. … The methodological skepticism that characterizes most areas of scientific discourse seems strikingly absent when Darwinism is the topic."

How can we distinguish the good papers from the poor? This can be very difficult without actually attempting to reproduce their findings. Short of that, apply the same critical thinking skills and healthy skepticism to scientific papers that you apply to political, historical, or religious claims. 21st-century science can often be heavily influenced by poor experimental practices, unproven computational models, political agendas, competition for funding, and scientism (atheism dressed up as science). When going over a paper, ask questions like: How large was the data set? What sort of statistical analysis was performed? Are there other papers that independently support or disconfirm these findings? What is not being discussed? One thing is for sure: don’t accept something simply because ‘hundreds’ or even ‘thousands’ of papers say so, especially if Darwinian evolution is the topic. Practice critical thinking with this question in the back of your mind: ‘Is this one of those papers that will be retracted?’

Read Part III

Further reading:

http://www.nature.com/nature/journal/v483/n7391/full/483531a.html#t1

http://www.nature.com/news/publishing-the-peer-review-scam-1.16400

http://www.nytimes.com/2012/04/17/science/rise-in-scientific-journal-retractions-prompts-calls-for-reform.html?ref=science

http://dingo.sbs.arizona.edu/~massimo/publications/PDF/JF_MPP_darwinisms...


http://www.nytimes.com/2015/06/16/science/retractions-coming-out-from-under-science-rug.html?

Decanonising Science

Science, Now Under Scrutiny Itself
By BENEDICT CAREY, JUNE 15, 2015

The crimes and misdemeanors of science used to be handled mostly in-house, with a private word at the faculty club, barbed questions at a conference, maybe a quiet dismissal. On the rare occasion when a journal publicly retracted a study, it typically did so in a cryptic footnote. Few were the wiser; many retracted studies have been cited as legitimate evidence by others years after the fact.



“Until recently it was unusual for us to report on studies that were not yet retracted,” said Dr. Ivan Oransky, an editor of the blog Retraction Watch, the first news media outlet to report that the LaCour study (discussed below) had been challenged. But new technology and a push for transparency from younger scientists have changed that, he said. “We have more tips than we can handle.”

The case has played out against an increase in retractions that has alarmed many journal editors and authors. Scientists in fields as diverse as neurobiology, anesthesia and economics are debating how to reduce misconduct, without creating a police-state mentality that undermines creativity and collaboration.

“It’s an extraordinary time,” said Brian Nosek, a professor of psychology at the University of Virginia, and a founder of the Center for Open Science, which provides a free service through which labs can share data and protocols. “We are now seeing a number of efforts to push for data repositories to facilitate direct replications of findings.”

But that push is not universally welcomed. Some senior scientists have argued that replication often wastes resources. “Isn’t reproducibility the bedrock of science? Yes, up to a point,” the cancer biologist Mina Bissell wrote in a widely circulated blog post. “But it is sometimes much easier not to replicate than to replicate studies,” especially when the group trying to replicate does not have the specialized knowledge or skill to do so.

The experience of Retraction Watch provides a rough guide to where this debate is going and why. Dr. Oransky, who has a medical degree from New York University, and Adam Marcus, both science journalists, discovered a mutual interest in retractions about five years ago and founded the blog as a side project. They had, and still have, day jobs: Mr. Marcus, 46, is the managing editor of Gastroenterology & Endoscopy News, and Dr. Oransky, 42, is the editorial director of MedPage Today (he will take a position as distinguished writer in residence at N.Y.U. later this year).

In its first year, the blog broke a couple of retraction stories that hit the mainstream news media — including a case involving data faked by an anesthesiologist who later served time for health care fraud. The site now has about 150,000 unique visitors a month, about half from outside the United States.

Dr. Oransky and Mr. Marcus are partisans who editorialize sharply against poor oversight and vague retraction notices. But their focus on evidence over accusations distinguishes them from watchdog forerunners who sometimes came off as ad hominem cranks. Last year, their site won a $400,000 grant from the John D. and Catherine T. MacArthur Foundation, to build out their database, and they plan to work with Dr. Nosek to manage the data side.

Their data already tell a story.

The blog has charted a 20 to 25 percent increase in retractions across some 10,000 medical and science journals in the past five years: 500 to 600 a year today from 400 in 2010. (The number in 2001 was 40, according to previous research.) The primary causes of this surge are far from clear. The number of papers published is higher than ever, and journals have proliferated, Dr. Oransky and other experts said. New tools for detecting misconduct, like plagiarism-sifting software, are widely available, so there’s reason to suspect that the surge is a simple product of better detection and larger volume.



The increasing challenges to the veracity of scientists’ work gained widespread attention recently when a study by Michael LaCour on the effect of political canvassing on opinions of same-sex marriage was questioned and ultimately retracted.

Still, the pressure to publish attention-grabbing findings is stronger than ever, these experts said — and so is the ability to “borrow” and digitally massage data. Retraction Watch’s records suggest that about a third of retractions are because of errors, like tainted samples or mistakes in statistics, and about two-thirds are because of misconduct or suspicions of misconduct.

The most common reason for retraction because of misconduct is image manipulation, usually of figures or diagrams, a form of deliberate data massaging or, in some cases, straight plagiarism. In their dissection of the LaCour-Green paper, the two graduate students — David Broockman, now an assistant professor at Stanford, and Joshua Kalla, at the University of California, Berkeley — found that a central figure in Mr. LaCour’s analysis looked nearly identical to one from another study. This and other concerns led Dr. Donald Green, the paper’s co-author, who had not seen any original data, to request a retraction. (Mr. LaCour has denied borrowing anything.)

Data massaging can take many forms. It can mean simply excluding “outliers” — unusually high or low data points — from an analysis to generate findings that more strongly support the hypothesis. It also includes moving the goal posts: that is, mining the data for results first, and then writing the paper as if the experiment had been an attempt to find just those effects. “You have exploratory findings, and you’re pitching them as ‘I knew this all along,’ as confirmatory,” Dr. Nosek said.

The second leading cause is plagiarizing text, followed by republishing — presenting the same results in two or more journals.

The fourth category is faked data. No one knows the rate of fraud with any certainty. In a 2011 survey of more than 2,000 psychologists, about 1 percent admitted to falsifying data. Other studies have estimated a rate of about 2 percent. Yet one offender can do a lot of damage. The Dutch social psychologist Diederik Stapel published dozens of studies in major journals for nearly a decade based on faked data, investigators at the universities where he had worked concluded in 2011. Suspicions were first raised by two of his graduate students.

“If I’m a scientist and I fabricate data and put that online, others are going to assume this is accurate data,” said John Budd, a professor at the University of Missouri and an author of one of the first exhaustive analyses of retractions, in 1999. “There’s no way to know” without inside information.

Here, too, Retraction Watch provides a possible solution. Many of the egregious cases that it posts come from tips. The tipsters are a growing cadre of scientists, specialized journalists and other experts who share the blog’s mission — and are usually not insiders working directly with a suspected offender. One of the blog’s most effective allies has been Dr. Steven Shafer, editor of the journal Anesthesia & Analgesia, now at Stanford, whose aggressiveness in re-examining published papers has led to scores of retractions. The field of anesthesia is a leader in retractions, largely because of Dr. Shafer’s efforts, Mr. Marcus and Dr. Oransky said. (Psychology is another leader, largely because of Dr. Stapel.)

Other cases emerge from issues raised at post-publication sites, where scientists dig into papers, sometimes anonymously. Dr. Broockman, one of the two who challenged the LaCour-Green paper, had first made public some of his suspicions anonymously on a message board called poliscirumors.com. Mr. Marcus said Retraction Watch closely followed a similar site, PubPeer.com. “When it first popped up, a lot of people assumed it would be an ax-grinding place,” he said. “But while some contributors have overstepped, I think it has had a positive impact on the literature.”

What these various tipsters, anonymous post-reviewers and whistle-blowers have in common is a nose for data that looks too good to be true, he said. Sites like Retraction Watch and PubPeer give them a place to discuss their concerns and flag fishy-looking data.

These, along with data repositories like Dr. Nosek’s, may render moot the debate over how to exhaustively replicate findings. That burden is likely to be eased by the community of bad-science bloodhounds who have more and more material to work with when they pick up a foul scent.

“At this point, we see ourselves as part of an ecosystem that is advocating for increased transparency,” Dr. Oransky said. “And that ecosystem is growing.”

Wednesday, 29 July 2015

Darwinism vs. the real world IV

Computing the "Best Case" Probability of Proteins from Actual Data, and Falsifying a Prediction of Darwinism

Biological life requires thousands of different protein families, about 70 percent of which are "globular" proteins, with a three-dimensional shape that is unique to each family of proteins. An illustration is shown in the picture at the top of this post. This 3D shape is necessary for a particular biological function and is determined by the sequence of the different amino acids that make up that protein. In other words, it is not biology that determines the shape, but physics. Sequences that produce stable, functional 3D structures are so rare that scientists today do not attempt to find them using random sequence libraries. Instead, they use information they have obtained from reverse-engineering biological proteins to intelligently design artificial proteins.
Indeed, our 21st-century supercomputers are not powerful enough to crunch the variables and locate novel 3D structures. Nonetheless, a foundational prediction of neo-Darwinian theory is that a ploddingly slow evolutionary process consisting of genetic drift, mutations, insertions, and deletions must be able to "find" not just one, but thousands of sequences pre-determined by physics that will have different stable, functional 3D structures. So how does this falsifiable prediction hold up when tested against real data? As ought to be the case in science, I have made my program available so that you can run your own data and verify for yourself the kinds of probabilities these protein families represent.
This program can compute an upper limit for the probability of obtaining a protein family from a wealth of actual data contained in the Pfam database. The first step computes the lower limit for the functional complexity, or functional information, required to code for a particular protein family, using a method published by Durston et al. This value for I(Ex) can then be plugged into an equation published by Hazen et al. in order to solve for the probability M(Ex)/N of "finding" a functional sequence in a single trial.
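In symbols (my transcription of the relation the paragraph describes; Hazen et al. define functional information in terms of the fraction of sequence space that performs the function):

```latex
% M(E_x): number of sequences meeting the functional threshold E_x
% N: total number of possible sequences
I(E_x) = -\log_2 \frac{M(E_x)}{N}
\qquad\Longrightarrow\qquad
\frac{M(E_x)}{N} = 2^{-I(E_x)}
```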
I downloaded 3,751 aligned sequences for the Ribosomal S7 domain, part of a universal protein essential for all life. Running the data through the program revealed that the lower limit for the amount of functional information required to code for this domain is 332 Fits (Functional Bits). The extreme upper limit for the number of sequences that might be functional for this domain is around 10^92. In a single trial, the probability of obtaining a sequence that would be functional for the Ribosomal S7 domain is 1 chance in 10^100 ... and this is only for a 148-amino-acid structural domain, much smaller than an average protein.
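The three figures quoted here are mutually consistent, as a few lines of Python show (my sketch, using only the numbers in the paragraph above):

```python
import math

n_sites = 148  # length of the S7 structural domain, in amino acids
fits = 332     # lower-limit functional information, in Fits

log10_N = n_sites * math.log10(20)   # sequence space: 20^148 ~ 10^192.5
log10_ratio = -fits * math.log10(2)  # Hazen: M/N = 2^-332 ~ 10^-99.9
log10_M = log10_N + log10_ratio      # functional sequences: ~10^92.6

print(f"N   ~ 10^{log10_N:.1f}")     # ~10^192.5
print(f"M/N ~ 10^{log10_ratio:.1f}") # ~1 chance in 10^100
print(f"M   ~ 10^{log10_M:.1f}")     # matches the post's ~10^92
```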
For another example, I downloaded 4,986 aligned sequences for the ABC-3 family of proteins and ran them through the program. The results indicate that the probability of obtaining, in a single trial, a functional ABC-3 sequence is around 1 chance in 10^128. This method ignores pairwise and higher-order relationships within the sequence that would limit the number of functional sequences by many more orders of magnitude, reducing the probability even further -- so this gives us a best-case estimate.
What are the implications of these results, obtained from actual data, for the fundamental prediction of neo-Darwinian theory mentioned above? If we assume 10^30 life forms with a fast replication rate of 30 minutes and a huge genome with a very high mutation rate over a period of 10 billion years, an extreme upper limit for the total number of mutations for all of life's history would be around 10^43. Unfortunately, a protein domain such as Ribosomal S7 would require a minimum average of 10^100 trials, about 10^57 trials more than the entire theoretical history of life could provide -- and this is only for one domain. Forget about "finding" an average sized protein, not to mention thousands.
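The 10^57 shortfall is simply the gap between the two exponents:

```latex
\frac{10^{100}\ \text{trials required for the S7 domain}}
     {10^{43}\ \text{mutations available in all of life's history}}
= 10^{100-43} = 10^{57}
```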
As we all know from probabilities, you can get lucky once, but not thousands of times. This definitively falsifies the fundamental prediction of Darwinian theory that evolutionary processes can "find" functional protein families. A theory that has an essential prediction thoroughly falsified by the data should have no place in science.
Could natural selection come to the rescue? As we know from genetic algorithms, an evolutionary "search" will only work for hill-climbing problems, not for "needle in a haystack" problems. There are small proteins that require such low levels of functional information to perform simple binding tasks that they form a nice hill-climbing problem that can be easily located in a search. This is not the case, however, for the vast majority of protein families. As real data shows, the probability of finding a functional sequence for one average protein family is so low, there is virtually zero chance of obtaining it anywhere in this universe over its entire history -- never mind finding thousands of protein families.
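To make this distinction concrete, here is a toy Python illustration of my own (not Durston's program, and not a model of real proteins): a mutate-and-select search that earns partial credit for partial matches climbs to its target quickly, while the same search on an all-or-nothing landscape degenerates into a blind random walk.

```python
import random

random.seed(1)

ALPHABET = "01"
LENGTH = 20
TARGET = "".join(random.choice(ALPHABET) for _ in range(LENGTH))

def smooth_fitness(s):
    # Hill-climbing landscape: every matching position scores,
    # so partial solutions are visible to selection.
    return sum(a == b for a, b in zip(s, TARGET))

def needle_fitness(s):
    # Needle-in-a-haystack landscape: no credit until the exact
    # target is hit, so selection has nothing to work with.
    return 1 if s == TARGET else 0

def evolve(fitness, generations=5000):
    current = "".join(random.choice(ALPHABET) for _ in range(LENGTH))
    for gen in range(generations):
        i = random.randrange(LENGTH)  # mutate one random position
        mutant = current[:i] + random.choice(ALPHABET) + current[i + 1:]
        if fitness(mutant) >= fitness(current):  # keep neutral/better mutants
            current = mutant
        if current == TARGET:
            return f"found in {gen + 1} generations"
    return "not found"

print("smooth landscape:", evolve(smooth_fitness))  # typically a few hundred generations
print("needle landscape:", evolve(needle_fitness))  # effectively a random walk; rarely found
```

Even on a space of only 2^20 (about a million) strings, the needle search almost always fails while the hill-climb succeeds quickly; real protein sequence spaces are vastly larger.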
What are the implications for intelligent design science? A testable, falsifiable hypothesis of intelligent design can be stated as follows:
A unique attribute of an intelligent mind is the ability to produce effects requiring a statistically significant level of functional information.
Given the above testable hypothesis, if we observe an effect that requires a statistically significant level of functional information, we can conclude there is an intelligent mind behind the effect. The average protein family requires a statistically significant level of functional, or prescriptive, information. Therefore, the genomes of life have the fingerprints of an intelligent source all over them.

A line in the sand XVIII




Tom Wolfe calls out the Darwinian Gestapo

In The New Yorker, Tom Wolfe Compares Persecution of Intelligent Design Advocates to the "Spanish Inquisition"