Friday, 19 December 2014
On Darwinism's attempts to put words in our mouths.
Leading Evolutionary Scientists Admit We Have No Evolutionary Explanation of Human Language
Casey Luskin December 19, 2014 3:06 AM
Denyse O'Leary has written here about the difficulty that evolutionary psychology faces in explaining the origin of language. Indeed, back in May, a group of huge names in evolutionary biology, evolutionary anthropology, and evolutionary psychology published a peer-reviewed paper in the journal Frontiers in Psychology admitting that in fact we have no explanation for the origin of language. The abstract strikingly states:
Understanding the evolution of language requires evidence regarding origins and processes that led to change. In the last 40 years, there has been an explosion of research on this problem as well as a sense that considerable progress has been made. We argue instead that the richness of ideas is accompanied by a poverty of evidence, with essentially no explanation of how and why our linguistic computations and representations evolved. We show that, to date, (1) studies of nonhuman animals provide virtually no relevant parallels to human linguistic communication, and none to the underlying biological capacity; (2) the fossil and archaeological evidence does not inform our understanding of the computations and representations of our earliest ancestors, leaving details of origins and selective pressure unresolved; (3) our understanding of the genetics of language is so impoverished that there is little hope of connecting genes to linguistic processes anytime soon; (4) all modeling attempts have made unfounded assumptions, and have provided no empirical tests, thus leaving any insights into language's origins unverifiable. Based on the current state of evidence, we submit that the most fundamental questions about the origins and evolution of our linguistic capacity remain as mysterious as ever, with considerable uncertainty about the discovery of either relevant or conclusive evidence that can adjudicate among the many open hypotheses.
(Marc Hauser, Charles Yang, Robert Berwick, Ian Tattersall, Michael J. Ryan, Jeffrey Watumull, Noam Chomsky and Richard C. Lewontin, "The mystery of language evolution," Frontiers in Psychology, Vol 5:401 (May 7, 2014) (emphases added).)
It's difficult to imagine much stronger words from a more prestigious collection of experts. But what about all of those news stories about apes who learn how to communicate using sign language? Do they show that apes possess or can learn some primitive precursor to humanlike language? No, they say:
Talking birds and signing apes rank among the most fantastic claims in the literature on language evolution, but examination of the evidence shows fundamental differences between child language acquisition and nonhuman species' use of language and language-like systems. For instance, dogs can respond to a few hundred words, but only after thousands of hours of training; children acquire words rapidly and spontaneously generalize their usage in a wide range of contexts. Similarly, Nim Chimpsky, the chimpanzee that produced the only public corpus of data in all animal language studies, produced signs considerably below the expected degree of combinatorial diversity seen in two-year old children, and with no understanding of syntactic structure or semantic interpretation. Though these studies are of potential interest to understanding the acquisition of specialized, artificial skills -- akin to our learning a computer language -- they do not inform understanding of language evolution.
They conclude: "For now, the evidence from comparative animal behavior provides little insight into how our language phenotype evolved. The gap between us and them is simply too great to provide any understanding of evolutionary precursors or the evolutionary processes (e.g., selection) that led to change over time."
But what about FOXP2 -- a gene that seems to be connected to language, where humans have a couple of unique amino acid differences compared to nonhuman primates? In The Language of God, Francis Collins presented FOXP2 as something of a miracle mutation that could have caused human language to develop (see pp. 139-141). Time Magazine once claimed that our two amino acid differences in this gene could have caused "the emergence of all aspects of human speech, from a baby's first words to a Robin Williams monologue." These authors would disagree, because, as they point out, we aren't even sure exactly how FOXP2 affects language:
FOXP2 is a transcription factor that up- or down-regulates DNA in many different tissue types (brain, lung, gut lining) at different times during development as well as throughout life. This broad functional effect makes evolutionary analysis difficult. In particular, the exact mechanisms by which FOXP2 mutations disrupt speech remain uncertain, variously posited as disruptions in motor articulation/serialization in speech, vocal learning generally, or broader difficulties with procedural serialization. This is critical because FOXP2 mutations may disrupt only the input/output systems of language, sparing the more internal computations of human language syntax or semantics; or it may be that FOXP2 affects general cognitive processing, such as general serial ordering of procedures. Second, it is not clear whether the amino acid changes distinguishing FOXP2 in humans and nonhumans represent adaptations "for" language, since their functional effects remain unclear. One of the two protein-coding changes along the lineage to modern humans is also associated with the order Carnivora. Since FOXP2 also targets the gut lining, this evolutionary step may have had little to do directly with language but instead with digestion modifications driven by forest-to-savannah habit and so dietary change...
According to the paper, claims that mutations in FOXP2 explain the origin of language cannot be substantiated because, as the authors put it, "we lack a connect-the-dots account of any gene to language phenotype."
Of course, the authors propose various avenues of research that they think might someday lead to an understanding of how language arose. But for the present, their analysis is discouraging: "Until such evidence is brought forward, understanding of language evolution will remain one of the great mysteries of our species."
Monday, 15 December 2014
But you already knew that.
Your Computer Doesn't Like You
Michael Egnor December 15, 2014 2:30 AM
Actually, your computer doesn't dislike you, either. Your computer has no opinion about you at all, because it has no opinions whatsoever.
This is news to Stephen Hawking and Elon Musk, who -- as Erik J. Larson has commented here -- recently have warned humanity that computers are on the verge of acquiring minds and could take over the world and end mankind.
Computers, of course, cannot "take over the world and end mankind," because computers have no intelligent agency at all. Intelligence, as denoted in "artificial intelligence," corresponds roughly to what Aristotle meant by intellect and will. Intellect and will are the rational capabilities of human beings -- the ability to reason, to contemplate universals such as good and evil and right and wrong, to love and hate, to judge and intend and carry out decisions arrived at through reason. These are capabilities of human beings, and only of human beings.
Inanimate devices have agency too, but they have unintelligent agency. Computers can store electrons, move electrons about, light up a screen, boot up, crash, freeze, and so on. Computers can of course be a tool by which human beings express their own human intelligent agency. When a person commits bank fraud via a computer, the person, not the computer, goes to jail. Computers have no intelligent agency of their own, and never will, any more than the paperweight on your desk has intelligent agency.
The only way a computer can hurt you, on its own, is if it falls on your foot.
Computers are electromechanical devices that we use as tools. They differ only in complexity from other tools like books, which we use to store and retrieve representations of knowledge. We make tools, and we use tools, and they serve our ends. We put representations of our intentions and knowledge and desires and memories and conceptual insights and errors into computers, and the software that we have written maps our inputs to outputs, and then we analyze and ponder the outputs. Nowhere in this process is there the slightest bit of thinking on the part of the computer. Computers can't think because things like tools -- even tools made in Silicon Valley -- can't think. Computers are devices we use for our own purposes, and like all devices, sometimes the consequences aren't what we expected. Sometimes the book really changes the way we think about things, and sometimes we drop the book on our foot. But the consequences of using tools -- and the consequences can on occasion be transformative for humanity -- are consequences entirely of human purposes and mistakes.
We've been through this before. After the invention of writing in Sumer, parchment didn't acquire a mind and inflict evil on humanity. But writing did change civilization. After the invention of the printing press, books didn't acquire a mind and inflict evil upon humanity. But the printing press did change civilization. Nor will computers in the 21st century acquire a mind and inflict evil on humanity, because computers can't think any more than parchment or books can think.
But the information age will change civilization.
The salient harm that the silly "artificial intelligence" trope will do to humanity, aside from the general stupidity the concept fosters, is that it will distract us from the astonishingly potent transformation of our civilization that we will bring about in the information revolution. The transformation will be much more radical and rapid than the transformation in the 15th century caused by the printing press. Within a century or two after Gutenberg, millions of people had read things they had never read before, and thought of things they had never thought of before, and doubted and believed new things and found new ways to change their lives and their cultures. The Renaissance flowered, the Reformation raged, the Enlightenment (however misnamed) bloomed, and modernity dawned.
By 1648 northern and central Europe was bled white and a third of the population of Germany was dead from famine and war. By 1789 Napoleon was studying his schoolbooks. By 1867 Marx had a publisher for Das Kapital, and by 1925 Hitler published volume one of Mein Kampf.
Parchment and books and computers are the tools -- merely the tools -- by which humanity transforms itself.
The information revolution will leverage human intentions and mistakes in ways we can only begin to imagine. None of the transformation will have anything to do with science fiction stories about malevolent robots. It's the malevolent humans -- and even the well-intentioned humans -- who will fashion our ends.
Artificial intelligence is an oxymoron. Only human beings have intelligence. We use tools to bring about our ends, and the human information revolution made possible by our tools will transform our civilization, for better or worse and probably both. But the only real threat "artificial intelligence" poses is that it disposes us to dread HAL when we should be contemplating the transformation -- a transformation far more fundamental and astonishing than writing or the printing press -- that humanity will bring upon itself via the information revolution.
René Girard has a few thoughts about what we do to ourselves.
Saturday, 6 December 2014
The divine law and blood VIII: Swimming against the flow.
From the Spring 2013 edition of "Stanford Medicine":
AGAINST THE FLOW
WHAT’S BEHIND THE DECLINE IN BLOOD TRANSFUSIONS?
by Sarah C.P. Williams
Illustration by Jonathon Rosen
One day in 2011, an ambulance pulled up to the Stanford emergency room and paramedics unloaded a man in his 30s who had crashed his motorcycle. He was in critical condition: Tests showed dangerously low blood pressure, indicating that around 40 percent of his blood was lost. And an ultrasound revealed that the blood was collecting in his belly, suggesting that one or more of his abdominal organs was the source of the blood loss.
Paul Maggio, MD, a trauma surgeon and co-director of critical care medicine at Stanford Hospital & Clinics, sped the patient into the operating room. But he made sure that the technicians prepping his operating room took the time to set up one key piece of equipment, called an intraoperative cell salvage device, which is now commonly used in trauma cases. As the patient lay on the operating table and Maggio made the first cuts into his abdomen, suction devices slurped up the loose blood, directing it away from the surgery site through tubes. But instead of leading to a container bound for disposal, the tubes led to the salvage device.
The ATM-sized machine spun the blood to separate its components, cleaned it of any debris that had been suctioned up from the abdomen and sent it back out into fresh bags. From there, the blood was shunted right back to the patient’s body, through intravenous tubes poking into his veins. The cell salvage device has been around for decades, but only recently has evidence emerged that autotransfusion — giving patients their own blood instead of blood from donors — leads to better surgery outcomes. As a result, the use of the machines has gone from extremely rare to commonplace. Today, hospitals that have the machines use them in many scheduled abdominal and heart surgeries and routinely in trauma cases involving massive bleeding.
“Autotransfusing this patient spared him from getting more banked donor blood and from all the risks associated with it,” says Maggio of the motorcycle crash victim. He turned out to have an injury to his spleen, which Maggio repaired. In all, around 2 liters of blood were collected from the patient’s abdomen, processed through the salvage device, and transfused back into his body.
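For readers who want the workflow spelled out, here is a minimal sketch of the salvage sequence the article describes, traced as pass-through stages. The article gives no recovery fractions or stage parameters, so this only fixes the order of operations; the function names are illustrative assumptions, not the device's actual interface:

```python
# Narrative sketch of the intraoperative cell salvage steps described above.
# Stage behavior is not specified in the article, so each stage simply
# passes the volume through; only the sequence is taken from the text.

def suction_to_reservoir(shed_ml: float) -> float:
    """Divert shed blood away from disposal and into the salvage device."""
    return shed_ml

def centrifuge_and_wash(reservoir_ml: float) -> float:
    """Spin the blood to separate its components and clean out debris."""
    return reservoir_ml

def reinfuse(clean_ml: float) -> float:
    """Return the patient's own blood through intravenous tubes."""
    return clean_ml

# The motorcycle-crash case: about 2 liters collected, processed, returned.
returned = reinfuse(centrifuge_and_wash(suction_to_reservoir(2000.0)))
print(f"Autotransfused about {returned / 1000:.0f} L of the patient's own blood")
```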
Blood transfusions involve routing a needle into one of a patient’s veins — most often in an arm — and attaching a thin tube to the needle. Blood flows through the tube directly into the patient’s blood vessels. Ten years ago, a patient like Maggio’s would most likely have had a transfusion of blood donated by volunteers at the Stanford Blood Center. But over the past decade, a growing body of research has revealed that in hospitals around the world, donated blood is used more often, and in larger quantities, than is needed to help patients — both in operating rooms and hospital wards.
Some of the research has been conducted by physicians working with patients who refuse donated blood on religious grounds; other findings have come from the front lines of the war in Afghanistan, where blood is hard to transport; and some studies have been inspired simply by the rising cost of blood and a desire to save resources. Some findings are new, and others, like studies by Stanford’s Tim Goodnough, MD, a hematologist and the director of transfusion services, are years old but only recently being noticed. The takeaway message from all is the same: While blood is precious and continues to save lives, its use can be minimized and fine-tuned to optimize patients’ health and reduce costs.
The American Medical Association brought attention to the subject last fall at its national summit on the overuse of five medical treatments. Blood transfusions were on the list (along with heart stents, ear tubes, antibiotics and inducing birth in pregnant women).
“From the clinical standpoint, I’m not really thinking about resources or cost,” says Maggio, who’s also an assistant professor of surgery. “I’m thinking about giving the patient the best care.” Donated blood carries risks, albeit very slight, of infection and setting off an immune reaction. But research is also showing that even when these drastic outcomes are avoided, there’s something else about donated blood — which scientists don’t fully understand — that could slow recovery time or increase complications.
While autotransfusion for trauma patients is growing, and guidelines for blood transfusions are changing in response to this new research, altering the protocols that doctors have been using for so many years is a slow process.
Changing the routine
At Stanford, it took an innovative new program that used alerts on doctors’ computer systems to prompt physicians to order fewer blood transfusions.
But the push paid off: Blood use in the operating rooms, emergency rooms and hospital wards of both Stanford and the Lucile Packard Children’s Hospital has declined by 10 percent in just a few years. At Packard Children’s alone, 460 transfusions and $165,000 were saved in one year, according to a pilot study conducted Feb. 1, 2009, through Jan. 31, 2010.
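To make the alert mechanism concrete, here is a minimal sketch of how such a decision-support rule might work. The 7 g/dL trigger echoes the conventional threshold quoted later in this article; the function name, fields, and logic are illustrative assumptions, not Stanford's actual system:

```python
# Hypothetical sketch of a transfusion-alert rule, loosely modeled on the
# kind of computer-system alert described above. All names and logic are
# illustrative assumptions; the 7 g/dL trigger is the conventional
# restrictive threshold mentioned in the article.
from typing import Optional

def transfusion_alert(hemoglobin_g_dl: float,
                      actively_bleeding: bool,
                      symptomatic: bool) -> Optional[str]:
    """Return a warning if an order for donor blood looks unnecessary."""
    TRANSFUSION_TRIGGER = 7.0  # g/dL

    if actively_bleeding or symptomatic:
        return None  # clinical judgment overrides the numeric rule
    if hemoglobin_g_dl > TRANSFUSION_TRIGGER:
        return (f"Hemoglobin is {hemoglobin_g_dl:.1f} g/dL, above the "
                f"{TRANSFUSION_TRIGGER:.1f} g/dL trigger; confirm the "
                "indication before ordering donor blood.")
    return None

# A stable, asymptomatic patient at 9.2 g/dL would trip the alert:
print(transfusion_alert(9.2, actively_bleeding=False, symptomatic=False))
```

For scale, the pilot figures above work out to roughly $360 saved per avoided transfusion ($165,000 across 460 transfusions).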
“I think we’re probably still giving too much blood in some of these situations,” says Maggio. “But we hope that physicians are becoming better informed about when to give blood.”
People most often need blood transfusions when they’re in one of three situations: They lose blood from a major surgery that’s been scheduled for weeks or months; they lose blood in a way that their body won’t be able to replace, such as a blood cancer that shuts down the body’s ability to make blood cells; or they lose blood during a more sudden trauma — either an external wound or internal bleeding.
“For that first group of patients, scheduled for elective surgery, if you can plan ahead, you should be able to avoid using blood,” says Goodnough, a professor of pathology and of medicine. In those patients, drugs can boost a patient’s own blood production ahead of surgery, blood can be collected from a patient ahead of time to re-infuse later, precautions can be taken to prevent sudden blood loss, or autotransfusion machines like the cell salvage device can be set up. “Where we still need a national blood inventory is for patients who can’t plan ahead,” says Goodnough.
In the cases where physicians continue to give blood when it might not be needed, it’s often because they can’t imagine not doing everything they can to help a patient — and blood has always been viewed as having far more benefits than risks in almost any population of patients. But now, that risk-benefit analysis is changing.
“There’s this idea ingrained in the culture of medicine that people will die if they don’t have a certain level of blood, that blood is the ultimate lifesaver,” says Patricia Ford, MD, founder and director of Pennsylvania Hospital’s Center for Bloodless Medicine and Surgery at Penn Medicine. “And that’s true in some specific situations, but for most patients in most situations it’s just not true.” Ford’s center is one of the oldest and largest in the country that specializes in treating patients without donated blood; dozens of others have been created over the past decades but mostly at a smaller scale.
Going bloodless
Every year, Ford treats or operates on around 700 Jehovah’s Witnesses, whose religion prohibits transfusions of blood that is not one’s own. Since 1996, she has been fine-tuning ways to give these patients the best care as well as ways to apply these techniques to the broader population.
“Many physicians I talked to at the beginning had this misperception that a lot of patients just can’t survive without receiving blood,” says Ford. “I may have even thought that myself to some degree. But what I rapidly learned was you can care for these patients by just applying some easy strategies.”
In fact, a study published in August 2012 by researchers at the Cleveland Clinic concluded that Jehovah’s Witness patients recovered better from heart surgery than patients who received blood transfusions. It’s the longest study conducted on such patients — the researchers followed them for up to 20 years. The Jehovah’s Witness patients had higher five-year survival rates, fewer heart attacks following the surgery and fewer complications including sepsis and renal failure. The better outcomes might not have been due to the absence of transfusions but to differences in care received — the patients were more likely to be treated for low blood levels before surgery by receiving iron supplements and vitamins, and every patient’s surgery included use of an intraoperative cell salvage device. The findings suggest that these methods employed for bloodless surgeries could help patients beyond the Jehovah’s Witness community.
At Pennsylvania Hospital, Ford has discovered that, for scheduled surgeries, one of the best ways to avoid the need for blood transfusions is to test patients’ levels of hemoglobin — the protein in red blood cells that carries oxygen — well before their surgery. If the levels are low, then the patient can take vitamin K and iron supplements, which help the body produce more blood cells and help red blood cells more efficiently carry oxygen throughout the body. The practice of testing for low red blood cell levels, or anemia, is now beginning to spread from specialized clinics like Ford’s to other hospitals around the country.
“Testing for anemia was just not on people’s radar screens, because they knew that they could always give the patient blood,” says Ford. Now, many doctors consider testing a patient’s blood cell levels just as important as testing their heart and lung health before surgery. This shift is supported by studies such as an October 2012 analysis in the Annals of Thoracic Surgery of the outcomes of more than 17,000 heart surgeries, which found an increase in stroke, death during surgery and death after surgery when patients were anemic before surgery.
At Stanford, standard pre-surgery tests include blood counts for patients who are expected to lose large amounts of blood, says Goodnough. If anemia is suggested by the results, clinicians aim to manage the condition before surgery.
At Penn, Ford also emphasizes the conservation of blood during surgery, often by using an intraoperative cell salvage device. Patients can also donate blood in the weeks leading up to a scheduled surgery and their own saved blood — called an autologous donation — can be used for a transfusion if necessary. In the 1980s, Goodnough studied the usefulness of autologous donations in different patient population groups and pushed for its broader usage. It’s now considered a mainstream way of reducing the need for donated blood. “It sounds like a mundane concept now, but it was quite progressive when we first started looking at it,” says Goodnough.
Among the lessons Ford has learned from her Jehovah’s Witness patients, perhaps the most important, she says, is that there’s no magic hemoglobin number that tells doctors when a patient will start exhibiting signs of anemia. Typically, doctors consider hemoglobin above 12 grams per deciliter to be normal, and hemoglobin below 7 or 8 to indicate the need for a blood transfusion. But Ford and a growing number of other doctors think those numbers could be pushed down further, a change that would require new studies before it is widely adopted.
“It’s not unusual for me to see a patient who has a hemoglobin of 5 and they look as healthy as anyone walking down the street,” says Ford. Of course, there also can be patients who become sick with much higher hemoglobin levels, but Ford would like to see more doctors treating blood levels based on symptoms, not a number. Goodnough agrees: “It’s really hard to demonstrate at what level of hemoglobin a transfusion will help a patient,” he says. “And we’re increasingly seeing that for most patients, hemoglobin has to be exceptionally low to have effects.” But it depends more on the patient’s health and risk factors, he says. There’s no one-size-fits-all solution.
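Ford's and Goodnough's point, that symptoms should outweigh any single number, can be expressed as a simple decision sketch. This is a hypothetical illustration of the reasoning, not a clinical algorithm; the 7 g/dL figure comes from the conventional threshold quoted above, and the 5 g/dL backstop is an assumption for illustration:

```python
# Illustrative contrast between a fixed-number rule and the symptom-driven
# approach Ford and Goodnough describe. Not clinical guidance.

def threshold_rule(hgb_g_dl: float) -> bool:
    """Conventional practice: transfuse below a hemoglobin of about 7 g/dL."""
    return hgb_g_dl < 7.0

def symptom_driven_rule(hgb_g_dl: float, symptomatic: bool,
                        high_risk: bool) -> bool:
    """Treat the patient, not the number: symptoms and risk factors decide,
    with a very low floor (assumed here) as a backstop."""
    if symptomatic:
        return True
    if high_risk:
        return hgb_g_dl < 7.0  # risk factors keep the conventional trigger
    return hgb_g_dl < 5.0      # assumed backstop for illustration only

# Ford's anecdote: a patient at hemoglobin 5 who looks perfectly healthy.
print(threshold_rule(5.0))                                           # True
print(symptom_driven_rule(5.0, symptomatic=False, high_risk=False))  # False
```

The contrast captures the shift the article describes: the first rule transfuses on the number alone, while the second withholds donor blood from the asymptomatic, low-risk patient despite the same laboratory value.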