Tuesday, 23 May 2017
Are the fossils on Darwinists' side in the design debate?
"Fossils. Fossils. Fossils." Does Ken Miller Win?
Casey Luskin
Ken Miller was recently quoted in a campus news article saying, "We have the fossils. ... We win." Professor Miller's logical fallacy was pointed out years ago by those who attempted to clarify reasoning in paleontology, systematics, and evolutionary biology, and it led some scientists (like Colin Patterson) to the conclusion that a paleontological pattern may support or falsify an evolutionary hypothesis, but it can never absolutely prove one (i.e., fossils can't make Darwinism positively "win"). As a result, some scientists (e.g., Brower, 2000) proposed a strict separation between paleontology and systematics on the one hand, and evolutionary theory on the other. Unfortunately, this clear-thinking approach has been largely abandoned or ignored by most paleontologists and evolutionary biologists. Those who are ignorant of this fallacy don't realize that pattern observations are independent of process hypotheses. (For instance, just because I know the sun "rises" every day does not mean my pet theory about its origin must be correct.) Rather than following the approach of authorities like Colin Patterson, Professor Miller seems to draw his amusing talking points on evolution from comedian Lewis Black, who, as Jonathan Wells recounts in The Politically Incorrect Guide to Darwinism and Intelligent Design, makes the following authoritarian argument for Darwinism: "I don't have to argue [evolution] any more. Fossils. Fossils. Fossils. I win."
The campus news article stated that Miller "demonstrated the 23 intermediate species that have been discovered as evolutionary stepping stones between land mammals and swimming mammals," and called whales "the poster children for macroevolution." I would have loved to have been there to see Miller "demonstrat[e]" all "23 intermediate species." That sounds a lot more impressive than University of Michigan whale paleontology expert Philip Gingerich's admission that currently the "poster children" merely have "fossils illustrating three or four steps that bridge the precursor of whales to today's mammals." Indeed, Kevin Padian noted that these "poster children" fossils have "distinguishing characteristics, which they would have to lose in order to be considered direct ancestors of other known forms." My suspicion is that Professor Miller didn't delve into too many details, but rather used the fossil name-dropping approach to discussing alleged intermediates between land-mammals and whales. I have described this approach as follows:
[John] Wise and [Pia] Vogel also mention "whale-like tetrapods" and "tetrapod-like whales," name-dropping a long string of fossil names but leaving the reader with little, if any, information about this alleged evolutionary transition. ... So how good an example is this "poster child"?
Philip Gingerich admits that "[w]hales have not been collected on a fine enough time scale to see rapid change. This will be revealed through more fieldwork. So far we have fossils illustrating three or four steps that bridge the precursor of whales to today's mammals." To be fair, there are some fossils in this field with cetacean features, but some of the fossils cited by Wise and Vogel are land mammals that do not explain how whales became aquatic. For example, Wise and Vogel mention Pakicetus, a full-fledged land-mammal [whose only real claim to belonging in the alleged whale series is the fact that it had] ear-bones like a whale. Full-fledged land mammals don't provide much evidence when one is trying to document the evolution of fully-aquatic whales from land-mammals. So Wise and Vogel name-drop Ambulocetus. But this fossil also had strong load-bearing legs with "large hind limbs and enormous feet," a "long, muscular body," and a pelvis "like that of a land mammal" (Gingerich, 2001). These two fossils don't look like a "walking whale" (as they were called in National Geographic). Instead, Wise and Vogel subsequently name-drop Rodhocetus: it probably spent more time in the water than Ambulocetus, and did not swim like a whale, but had large feet and hands. One expert said Rodhocetus probably swam like Ambulocetus: "an otter-like pelvic paddler" or, alternatively, that it had "[t]runk and limb proportions" that "are most similar to those of the living, highly aquatic, foot-powered desmans." Of course, desmans are a type of European mole that do just fine walking on land. Are the whales walking yet?
But let's acknowledge that these fossils do have some skeletal characteristics which appear intermediate between the features of land-mammals and whales. Have Darwinian paleontologists made their case? The aforementioned bird evolution expert, Alan Feduccia, observes that "the evolution of whales (the 'poster child' for macroevolution) from terrestrial ungulates is well documented at < 10 million years." Think about that for a moment. Whales, with all of their complex adaptations for aquatic life, evolved from a "primitive little mammal" (Steven Stanley, The New Evolutionary Timetable, pg. 93) to a full-fledged whale in less than ten million years. Whales have a long generation time, meaning that there were perhaps only a few million generations at best to allow for the change to add up. If they had a generation time as short as 5 years, Haldane's dilemma predicts that only a few thousand mutations could become fixed into an evolving population during that time period. (See Walter ReMine, The Biotic Message.) [In other words, the fossil record permits dramatically insufficient time to convert a land-mammal into a whale.]
Wise and Vogel can name-drop whatever fossils they like, but if the amount of time allowed by the fossil record for this evolutionary transition is too short to accommodate the vast genetic and morphological changes that must have taken place, critical thinkers have good reasons to be skeptical of this evolutionary story. The exceedingly short timescale of the alleged evolution of whales from land mammals is a major problem with this Neo-Darwinian story, but this point is never mentioned by Wise and Vogel as they name-drop their supposed fossil evidence.
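To make the timescale arithmetic concrete, here is a minimal back-of-envelope sketch in Python. It uses only the figures stated above (a roughly ten-million-year window, a five-year generation time, and Haldane's classic estimate of about one substitution fixed per 300 generations); these are the argument's assumed inputs, not measured data.

```python
# Back-of-envelope sketch of the whale-timescale argument above.
# Inputs are the text's stated assumptions, not measured data:
#   - ~10 million years allowed by the fossil record
#   - a 5-year generation time
#   - Haldane's (1957) figure of ~1 substitution fixed per 300 generations

WINDOW_YEARS = 10_000_000        # window for the land-mammal-to-whale transition
GENERATION_YEARS = 5             # assumed generation time
GENERATIONS_PER_FIXATION = 300   # Haldane's approximate substitution cost

generations = WINDOW_YEARS / GENERATION_YEARS
fixed_substitutions = generations / GENERATIONS_PER_FIXATION

print(f"Generations available: {generations:,.0f}")          # 2,000,000
print(f"Substitutions fixable: {fixed_substitutions:,.0f}")  # ~6,667
```

On those assumptions the arithmetic yields roughly 6,700 fixed substitutions, which is where the "few thousand mutations" figure comes from.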
Never mind the science; Darwin defenders continue to struggle with basic English.
Correcting Disinformation on Academic Freedom Legislation
David Klinghoffer | @d_klinghoffer
Our colleague Sarah Chaffee, who is Program Officer for Education and Public Policy for the Center for Science & Culture, has an excellent piece up at CNS News, correcting some of the rampant misrepresentations of the content of academic freedom legislation around the country. She charitably calls them misconceptions.
There are several misconceptions that come up year after year in the media about academic freedom bills. This year, with legislation (bills and resolutions) and science standards reviews in Texas, South Dakota, Oklahoma, Indiana, Louisiana and Alabama, was no exception.
The issue centers on what the legislation protects and what it doesn’t. Academic freedom is not about teaching creationism or intelligent design.
First, creationism. Concerns regarding creationism in legislation are unfounded, as the Supreme Court has said that creationism is a religious doctrine, and therefore can’t be taught in public schools. And obviously if science standards included creationism, they would be considered unconstitutional and immediately brought to court. Academic freedom bills that follow our model legislation don’t include creationism. In fact, they have a provision regarding non-promotion of religion or non-religion in case a law happens to come before a confused judge. As a result, laws in Louisiana (2008) and Tennessee (2012) haven’t been challenged in court in the years they have been in place.
Second, teaching of intelligent design is not a concern. K-12 teachers in public schools only have the ability to teach what is in the curriculum. The Constitution does not grant them academic freedom or free speech rights in the performance of their job, and court decisions are consistent with this. Academic freedom legislation is very, very limited legislation that authorizes teachers to discuss the scientific strengths and weaknesses of scientific topics already in the curriculum without having to fear losing their jobs. Teachers cannot bring in a new theory like intelligent design. It is not in the curriculum anywhere in the United States. The bill does not apply. If teachers in Louisiana and Tennessee had been using the laws as covers to teach intelligent design, we would likely have seen students or families complaining, and that picked up in the media. There have not been such reports. And even if there were a rogue teacher who tried to teach intelligent design, they would find the law did not protect them.
I’ve noticed that misrepresentations of this legislation are often accompanied by citations of the Darwin-lobbying group National Center for Science Education (NCSE). In fact, while I haven’t made a formal study of it, my impression is that that is almost always the case.
Last week, for example, we cited an article in Nature that mischaracterized the resolutions in Alabama and Indiana, saying they “would give educators license to treat evolution and intelligent design as equally valid theories.” Which is absolutely not true.
Sure enough, in the very next paragraph, writer Erin Ross quotes Glenn Branch of the NCSE, brandishing their favorite scare word (“The strategies of creationists have gotten more sophisticated”). In a fairly short article, Ross adduces the authority of the NCSE in no fewer than 7 of 14 paragraphs.
You have to hand it to these people. As champions of disinformation, with science and education reporters all but taking dictation, they are pretty impressive.
Monday, 22 May 2017
Mammals began their ascent prior to dino extinction, say scientists.
Mammals began their takeover long before the death of the dinosaurs
Source:
University of Southampton
New research reports that, contrary to popular belief, mammals began their massive diversification 10 to 20 million years before the extinction of the dinosaurs.
The study, involving Elis Newham from the University of Southampton, questioned the familiar story that dinosaurs dominated their prehistoric environment, while tiny mammals took a backseat, until the dinosaurs (besides birds) went extinct 66 million years ago, allowing mammals to shine.
Elis Newham, PhD student in Engineering and the Environment and co-author of the study, which is published in Proceedings of the Royal Society B, said: "The traditional view is that mammals were suppressed during the 'age of the dinosaurs' and underwent a rapid diversification immediately following the extinction of the dinosaurs. However, our findings were that therian mammals, the ancestors of most modern mammals, were already diversifying considerably before the extinction event and the event also had a considerably negative impact on mammal diversity."
The old hypothesis hinged upon the fact that many of the early mammal fossils that had been found were from small, insect-eating animals -- there didn't seem to be much in the way of diversity. However, over the years, more and more early mammals have been found, including some hoofed animal predecessors the size of dogs. The animals' teeth were varied too.
The researchers analysed the molars of hundreds of early mammal specimens in museum fossil collections. They found that the mammals that lived during the years leading up to the dinosaurs' demise had widely varied tooth shapes, meaning that they had widely varied diets. These different diets proved key to an unexpected finding regarding mammal species going extinct along with the dinosaurs.
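The kind of analysis described can be sketched in miniature. The toy data and shape metric below are entirely hypothetical, invented only to show the shape of a disparity-through-time calculation: bin dated specimens, then track the spread of tooth shapes per bin as a proxy for dietary variety.

```python
# Hypothetical mini-version of a disparity-through-time analysis.
# Real studies use formal shape metrics on museum specimens; the
# numbers below are invented purely to illustrate the idea.
from statistics import variance

# (age in millions of years before present, tooth-shape metric)
specimens = [
    (84, 0.9), (83, 1.0), (82, 1.1),   # older bin: similar teeth
    (70, 0.4), (69, 1.3), (68, 2.1),   # pre-extinction bin: varied teeth
]

def disparity(bin_specimens):
    """Variance of the shape metric: a wider spread suggests more varied diets."""
    return variance([shape for _, shape in bin_specimens])

older = [s for s in specimens if s[0] > 75]
younger = [s for s in specimens if s[0] <= 75]
print(f"disparity >75 Ma:  {disparity(older):.3f}")    # low: uniform insectivores
print(f"disparity <=75 Ma: {disparity(younger):.3f}")  # higher: diversified diets
```

In a study of this kind, rising disparity in the bins leading up to the extinction boundary is what signals pre-extinction diversification.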
Not only did mammals begin diversifying earlier than previously expected, but the mass extinction wasn't the perfect opportunity for mammal evolution that it's traditionally been painted as. Early mammals were hit by a selective extinction at the same time the dinosaurs died out -- generalists that could live off of a wide variety of foods seemed more apt to survive, but many mammals with specialised diets went extinct.
The scientists involved with the study were surprised to see that mammals were initially negatively impacted by the mass extinction event. "I fully expected to see more diverse mammals immediately after the extinction," said lead author David Grossnickle, a Field Museum Fellow and PhD candidate at the University of Chicago. "I wasn't expecting to see any sort of drop. It didn't match the traditional view that after the extinction, mammals hit the ground running. It's part of the reason why I went back to study it further -- it seemed wrong."
The reason behind the mammals' pre-extinction diversification remains a mystery. Grossnickle suggests a possible link between the rise of mammals and the rise of flowering plants, which diversified around the same time. "We can't know for sure, but flowering plants might have offered new seeds and fruits for the mammals. And, if the plants co-evolved with new insects to pollinate them, the insects could have also been a food source for early mammals," he said.
Grossnickle notes that the study is particularly relevant in light of the mass extinction the earth is currently undergoing. He said: "The types of survivors that made it across the mass extinction 66 million years ago, mostly generalists, might be indicative of what will survive in the next hundred years, the next thousand."
Story Source:
The above post is reprinted from materials provided by University of Southampton. Note: Materials may be edited for content and length.
Why multiverse speculation leaves some cold.
A Cold Spot In Space — “Evidence” of a Multiverse?
David Klinghoffer | @d_klinghoffer
Cosmic fine tuning, with physics and chemistry conspiring to permit the existence of creatures such as ourselves, is one of the best-recognized pieces of evidence for intelligent design. To this, the hypothesis of a multiverse is materialism’s only response.
According to this line of reasoning, or imagining, our universe reflects only a lucky roll of the dice. A very, very, very lucky roll, which, however, is just to be expected if reality sports not one but a possibly infinite number of universes. Some universe was bound to get lucky, and it was ours.
It’s the single dreamiest, most unsupported idea in all of science, making Darwinian evolution look like a really solid bet by comparison. What’s wanted is real evidence for the multiverse, any at all, and that seems doomed to go on lacking ad infinitum.
Trumped up evidence is nevertheless a regular feature of popular science journalism. The latest: a headline in The Guardian, “Multiverse: have astronomers found evidence of parallel universes?” Adding the question mark is prudent, since the answer, to be truthful, is No.
Author Stuart Clark got hold of a press release from the Royal Astronomical Society, which he wheels out after an introduction heavy with jokey references to Brexit, Trump, the alt-right, and cat videos.
It sounds bonkers but the latest piece of evidence that could favour a multiverse comes from the UK’s Royal Astronomical Society. They recently published a study on the so-called ‘cold spot’. This is a particularly cool patch of space seen in the radiation produced by the formation of the Universe more than 13 billion years ago.
The cold spot was first glimpsed by NASA’s WMAP satellite in 2004, and then confirmed by ESA’s Planck mission in 2013. It is supremely puzzling. Most astronomers and cosmologists believe that it is highly unlikely to have been produced by the birth of the universe as it is mathematically difficult for the leading theory — which is called inflation — to explain.
This latest study claims to rule out a last-ditch prosaic explanation: that the cold spot is an optical illusion produced by a lack of intervening galaxies.
One of the study’s authors, Professor Tom Shanks of Durham University, told the RAS, “We can’t entirely rule out that the Spot is caused by an unlikely fluctuation explained by the standard [theory of the Big Bang]. But if that isn’t the answer, then there are more exotic explanations. Perhaps the most exciting of these is that the Cold Spot was caused by a collision between our universe and another bubble universe. If further, more detailed, analysis … proves this to be the case then the Cold Spot might be taken as the first evidence for the multiverse.” [Emphasis added.]
Count the instances of speculative language in those last four sentences. “Can’t entirely rule out…If that isn’t the answer…Perhaps…If further, more detailed, analysis…proves…[M]ight be taken as the first evidence…”
It’s “Heady stuff,” Clark exclaims. That’s one way of putting it. The paper in question, though, says just this (“Evidence against a supervoid causing the CMB Cold Spot”):
If not explained by a ΛCDM ISW effect the Cold Spot could have more exotic primordial origins. If it is a non-Gaussian feature, then explanations would then include either the presence in the early universe of topological defects such as textures (Cruz et al. 2007) or inhomogeneous re-heating associated with non-standard inflation (Bueno Sánchez 2014). Another explanation could be that the Cold Spot is the remnant of a collision between our Universe and another ‘bubble’ universe during an early inflationary phase (Chang et al. 2009, Larjo & Levi 2010). It must be borne in mind that even without a supervoid the Cold Spot may still be caused by an unlikely statistical fluctuation in the standard (Gaussian) ΛCDM cosmology.
In this way, based ultimately on a couple of parenthetically referenced papers from 2009 and 2010, a “cold spot” in space answers one of the ultimate questions that have ever puzzled human beings, tipping the scales toward a universe, or multiverse, without design or purpose. As of the present moment, in the quest to explain away ultra-fine tuning, this is the best kind of stuff that materialism has got to offer.
It’s all the most absurd axe-grinding: building your case against a person or idea you don’t like (intelligent design, in this case) by gathering rumors, dreams, and guesses, disregarding common sense and objective evidence, since the conclusion you wish to reach, that you are bound to reach, is already pre-set.
So materialism goes on its merry way, largely unchallenged, with the media as its bullhorn. If scientists advocating the theory of intelligent design ever went before the public with conjectures as weak as this, they would be flayed alive.
New theory or old theory 2.0?
A “Nachos and Ice Cream” Theory of Evolution
David Klinghoffer | @d_klinghoffer
If the old theory of evolution was so great, why do they keep rolling out new ones? You notice, however, that the “new,” “extended,” “fundamentally revised” theories – with the exception of the theory of intelligent design – always turn out to be more or less repackaged versions of the same old, same old. Without recourse to mind, they fail again and again to solve the main problem.
Case in point: Sarah Zhang in The Atlantic heralds, “A Grand New Theory of Life’s Evolution on Earth.” At long last, is this the “theory of the generative” we’ve been waiting for?
No. The “new theory” from Olivia Judson of Imperial College London is a neat way of classifying sweeping time frames, “energetic epochs,” where life had energy sources made freshly available, thus making increasingly complex life possible.
The modern world gives us such ready access to nachos and ice cream that it’s easy to forget: Human bodies require a ridiculous and — for most of Earth’s history — improbable amount of energy to stay alive.
Consider a human dropped into primordial soup 3.8 billion years ago, when life first began. They would have nothing to eat. Earth then had no plants, no animals, no oxygen even. Good luck scrounging up 1600 calories a day drinking pond- or sea water. So how did we get sources of concentrated energy (i.e. food) growing on trees and lumbering through grass? How did we end up with a planet that can support billions of energy-hungry, big-brained, warm-blooded, upright-walking humans?
In “The Energy Expansions of Evolution,” an extraordinary new essay in Nature Ecology and Evolution, Olivia Judson sets out a theory of successive energy revolutions that purports to explain how our planet came to have such a diversity of environments that support such a rich array of life, from the cyanobacteria to daisies to humans.
Judson divides the history of the life on Earth into five energetic epochs, a novel schema that you will not find in geology or biology textbooks. In order, the energetic epochs are: geochemical energy, sunlight, oxygen, flesh, and fire. Each epoch represents the unlocking of a new source of energy, coinciding with new organisms able to exploit that source and alter their planet. The previous sources of energy stay around, so environments and life on Earth become ever more diverse. Judson calls it a “step-wise construction of a life-planet system.” [Emphasis added.]
The key word in that passage may be “coincide.” Energy – delivered in the form of “nachos and ice cream,” or whatever the case might be — is necessary but not sufficient in explaining how complex life arises. Merely “coinciding” with great leaps forward in biological complexity doesn’t cut it. The really grand mystery remains the origin of biological information. See our short video, “The Information Enigma.” Positing “energetic epochs” does nothing to resolve that enigma.
She mentions oxygen. In the context of explaining the Cambrian explosion, a classic fallacy is the “oxygen theory,” holding that new body plans arose thanks to newly available oxygen. As we’ve noted many times before, oxygen has no ability to compose coded information, the software on which life runs.
The point about fire is interesting, and should ring a bell. Zhang summarizes:
Then one particular type of animal — those of the genus Homo — figures out fire. Fire lets us cook, which may have allowed us to get more nutrition out of the same food. It lets us forge labor-saving metal tools. It lets us create fertilizer through the Haber-Bosch process to grow food on industrial scales. It lets us burn fossil fuels for energy.
True enough. But this brief treatment falls well short of Michael Denton’s research and writing on the same subject. Fire does more than harness fuel to provide energy. It reveals how nature has been specially fitted for a creature like man, and vice versa. See, “Fire-Maker: How Humans Were Designed to Harness Fire & Transform Our Planet.”
Sunday, 21 May 2017
Ladybug v. Darwin.
Ladybug, Living Origami, Lends a Hand with Umbrella and Other Designs
David Klinghoffer | @d_klinghoffer
Delicate and delightful, ladybug beetles are the insect everyone loves. Having one unexpectedly land on your hand is a reminder of how gentle and beautiful nature can be.
Their ability to alternate nimbly between walking and flying is also a marvel of design. Japanese scientists have been working on clarifying the secret of how they fold and unfold their wings, an effortless gesture of living origami. They published their findings in PNAS.
From USA Today:
Japanese scientists were curious to learn how ladybugs folded their wings inside their shells, so they surgically removed several ladybugs’ outer shells (technically called elytra) and replaced them with glued-on, artificial clear silicone shells to peer at the wings’ underlying folding mechanism.
Why bother with such seemingly frivolous research? It turns out that how the bugs naturally fold their wings can provide design hints for a wide range of practical uses for humans. This includes satellite antennas, microscopic medical instruments, and even everyday items like umbrellas and fans.
“The ladybugs’ technique for achieving complex folding is quite fascinating and novel, particularly for researchers in the fields of robotics, mechanics, aerospace and mechanical engineering,” said lead author Kazuya Saito of the University of Tokyo. [Emphasis added.]
That is an astonishingly wide array of “design hints” from the humble bug, also known as the ladybird. The Telegraph echoes:
Ladybird wings could help change design of umbrellas for first time in 1,000 years
The New York Times:
Ladybugs Pack Wings and Engineering Secrets in Tidy Origami Packages
[…]
To the naked eye, this elegant transformation is a mystery. But scientists in Japan created a window into the process in a study published Monday in Proceedings of the National Academy of Sciences. Just how the ladybug manages to cram these rigid structures into tiny spaces is a valuable lesson for engineers designing deployable structures like umbrellas and satellites.
A ladybug’s hind wings are sturdy enough to keep it in the air for up to two hours and enable it to reach speeds up to 37 miles an hour and altitudes as high as three vertically stacked Empire State Buildings. Yet they fold away with ease. These seemingly contradictory attributes perplexed Kazuya Saito, an aerospace engineer at the University of Tokyo and the lead author of the study.
Working on creating deployable structures like large sails and solar power systems for spacecrafts, he turned to the ladybug for design inspiration.
Notice how, in discussing them, it’s as if we are forced to use the language of design. Regarding ladybugs and their “engineering secrets,” as the NY Times candidly puts it, molecular biologist Douglas Axe tweets: “Like a DeLorean, only cooler!” In his book Undeniable: How Biology Confirms Our Intuition That Life Is Designed, Dr. Axe uses the illustration of an origami crane. This universal intuition is well grounded: our minds rebel at the idea that any origami creation could arise through a combination of chance and law, without purpose or design. Yet Darwinian theory demands that we believe a real crane arose that way, or a real ladybug.
The fall of Rome: the reboot.
How to Protect Medical Conscience
Wesley J. Smith
Over at First Things, I have a piece up about the ongoing and accelerating campaign — most recently furthered by Ezekiel Emanuel — to drive pro-life and orthodox religious believers out of medicine by forcing their participation or complicity in acts in the medical sphere with which they have strong moral or religious objections.
There are currently some conscience protections in the law, but as the piece notes, they are under assault here and are already collapsing in other countries. From “Pro-Lifers Get Out of Medicine”:
The government of Ontario, Canada is on the verge of requiring doctors either to euthanize or to refer all legally qualified patients. In Victoria, Australia, all physicians must either perform an abortion when asked or find an abortionist for the patient.
One doctor has been disciplined under the law for refusing to refer for a sex-selective abortion. In Washington, a small pharmacy chain owned by a Christian family failed in its attempt to be excused from a regulation requiring all legal prescriptions to be dispensed, with a specific provision precluding conscience exemptions. The chain now faces a requirement to fill prescriptions for the morning-after pill, against the owners’ religious beliefs.
In Vermont, a regulation obligates all doctors to discuss assisted suicide with their terminally ill patients as an end-of-life option, even if they are morally opposed. Litigation to stay this forced speech has, so far, been unavailing.
The ACLU recently commenced a campaign of litigation against Catholic hospitals that adhere to the Church’s moral teaching.
Here, I would like to share some ideas about how to shore up existing protections to best protect medical professionals from being forced into committing what they consider sinful or immoral acts. I suggest that the following general principles apply in crafting such protections:
Conscience protections should be legally binding.
The rights of conscience should apply to medical facilities such as hospitals and nursing homes as well as to individuals.
Except in the very rare and compelling circumstance in which a patient’s life is at stake, no medical professional should be compelled to perform or participate in procedures or treatments that take human life.
The rights of conscience should apply most strongly in elective procedures, that is, medical treatments not required to extend the life of, or prevent serious harm to, the patient.
It should be the procedure that is objectionable, not the patient. In this way, for example, physicians could not refuse to treat a lung-cancer patient because the patient smoked or to maintain the life of a patient in a vegetative state because the physician believed that people with profound impairments do not have a life worth living.
No medical professional should ever be forced to participate in a medical procedure intended primarily to facilitate the patient’s lifestyle preferences or desires (in contrast to maintaining life or treating a disease or injury).
To avoid conflicts and respect patient autonomy, patients should be advised, whenever feasible, in advance of a professional’s or facility’s conscientious objection to performing or participating in legal medical procedures or treatments.
The rights of conscience should be limited to bona fide medical facilities such as hospitals, skilled nursing centers, and hospices and to licensed medical professionals such as physicians, nurses, and pharmacists.
I am interested in other ideas on this subject, which I predict will become a firestorm issue in coming years.
Saturday, 20 May 2017
On our latter-day Frankensteins and the end of science.
Swarm" Science: Why the Myth of Artificial Intelligence Threatens Scientific Discovery
Erik J. Larson
In the last year, two major well-funded efforts have launched in Europe and in the U.S. aimed at understanding the human brain using powerful and novel computational methods: advanced supercomputing platforms, analyzing peta- and even exabyte datasets, using machine learning methods like convolutional neural networks (CNNs), or "Deep Learning."
At the Swiss Federal Institute of Technology in Lausanne (EPFL), for instance, the Human Brain Project is now underway, a ten-year effort funded by the European Commission to construct a complete computer simulation of the human brain. In the U.S., the Obama Administration has provided an initial $100 million in funding for the newly launched Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, with funding projected to reach $3 billion in the next ten years. Both projects are billed as major leaps forward in our quest to gain a deeper understanding of the brain -- one of the last frontiers of scientific discovery.
Predictably, given today's intellectual climate, both projects are premised on major confusions and fictions about the role of science and the powers of technology.
The myth of evolving Artificial Intelligence, for one, lies at the center of these confusions. While the U.S. BRAIN Initiative is committed more to the development of measurement technologies aimed at mapping the so-called human connectome -- the wiring diagram of the brain viewed as an intricate network of neurons and neuron circuits -- the Human Brain Project more explicitly seeks to engineer an actual, working simulation of a human brain.
The AI myth drives the HBP vision explicitly, then, even as ideas about Artificial Intelligence and the powers of data-driven methods (aka "Big Data") undergird both projects. The issues raised today in neuroscience are large, significant, and profoundly troubling for science. In what follows, I'll discuss Artificial Intelligence and its role in science today, focusing on how it plays out so unfortunately in neuroscience, and in particular in the high-visibility Human Brain Project in Switzerland.
AI and Science
AI is the idea that computers are becoming intelligent in the same sense as humans, and eventually to an even greater degree. The idea is typically cast by AI enthusiasts and technologists as forward-thinking and visionary, but in fact it has profoundly negative effects on certain very central and important features of our culture and intellectual climate. Its eventual effect is to distract us from using our own minds.
The connection here is obvious, once you see it. If we believe that the burden of human thinking (and here I mean, particularly, explaining the world around us) will be lessened because machines are rapidly gaining intelligence, and if that belief is fictitious, the consequence can only be to diminish science and ultimately to imperil it.
At the very least, we should expect scientific discovery not to accelerate, but to remain in a confused and stagnant state with this set of ideas. These ideas dominate today.
Look at the history of science. Scientists have grand visions and believe they can explain the world by contact of the rational mind with nature. One thinks of Einstein, but many others as well: Copernicus, Galileo, Newton, Maxwell, Hamilton, Heisenberg, even Watson and Crick.
Copernicus, for instance, became convinced that the entire Ptolemaic model of the solar system stemmed from false theory. His heliocentric model is a case study in the triumph of the human mind not to analyze data but effectively to ignore it -- seeking a more fundamental explanation of observation in a rational vision that is not data-driven but prior to and more fundamental than what we collect and view (the "data"). Were computers around back then, one feels that Copernicus would have ignored their results too, so long as they were directed at analyzing geocentric models. Scientific insight here is key, yesterday and today.
Yet the current worldview is committed, incessantly and obsessively, to reducing scientific insight to "swarms" of scientists working on problems, each making small contributions to a framework that is already in place. The Human Brain Project here is paradigmatic: the "swarm" language comes directly from a key HBP contributor, Sean Hill (in the recent compilation The Future of the Brain, edited by Gary Marcus, whom I like).
The swarm metaphor evokes thoughts of insects buzzing around, fulfilling pre-ordained roles. So if we're convinced that in a Human-Technology System the "technology" is actually becoming humanly intelligent (the AI myth), the set of social and cultural beliefs begin to change to accommodate a technology-centered worldview. This, however, provides very little impetus for discovery.
To the extent that individual minds aren't central to the technology-driven model of science, then "progress" based on "swarm science" further reinforces the belief that computers are increasingly responsible for advances. It's a self-fulfilling vision; the only problem is that fundamental insights, not being the focus anyway, are also the casualties of this view. If we're living in a geocentric universe with respect to, say, neuroscience still, the model of "swarm" science and data-driven analysis from AI algorithms isn't going to correct us. That's up to us: in the history of science, today, and in our future.
An example. Neuroscientists are collecting massive datasets from neural imaging technologies (not itself a bad thing), believing that machine-learning algorithms will find interesting patterns in the data. When the problem is well defined, this makes sense.
But reading the literature, it's clear that the more starry-eyed among the neuroscientists (like Human Brain Project director Henry Markram) also think that such an approach will obviate the need for individual theory in favor of a model where explanation "emerges" from a deluge of data.
This is not a good idea. For one thing, science doesn't work that way. The "swarm-and-emerge" model of science would seem ridiculous were it not for the belief that such massive quantities of data run on such powerful computing resources ("massive" and "powerful" is part of the emotional component of this worldview) could somehow replace traditional means of discovery, where scientists propose hypotheses and design specific experiments to generate particular datasets to test those hypotheses.
Now, computation is supposed to replace all that human-centered scientific exploration -- Markram himself has said publicly that the thousands of individual experiments are too much for humans to understand. It may be true that the volume of research is daunting, but the antidote can hardly be to force thousands of labs to input data into a set of APIs that presuppose a certain, particular theory of the brain! (This is essentially what the Human Brain Project does.) We don't have the necessary insights, yet, in the first place.
Even more pernicious, the belief that technology is "evolving" and getting closer and closer to human intelligence gives people less and less impetus to fight for traditional scientific practice, centered on discovery. If human thought is not the focus anymore, why empower all those individual thinkers? Let them "swarm," instead, around a problem that has been pre-defined.
This too is an example of how the AI myth also encourages a kind of non-egalitarian view of things, where a few people are actually telling everyone else what to do, even as the model is supposed to be communitarian in spirit. This gets us a little too far off topic presently, but is a fascinating case study in how false narratives are self-serving in subtle ways.
Back to science. In fact the single best worldview for scientific discovery is simple: human minds explain data with theory. Only after we have this belief in place can and should we add: and our technology can help us. Computation is a tool -- a very powerful one -- but since it isn't becoming intelligent in the sense of providing theory for us, we can't jettison our model of science and begin downplaying or disregarding the theoretical insights that scientists (with minds) provide.
This is a terrible idea, just terrible. It's no wonder that the major scientific successes of the last decade have been largely engineering-based, like the Human Genome Project. No one has the patience, or even the faith, to fund smaller-scale, more discovery-based efforts.
The idea, once again, is that computational resources will somehow replace traditional scientific practice, or "revolutionize" it -- but as I've been at pains to argue, computation isn't "smart" in the way people are, and so the entire AI Myth is not positive, or even neutral, but positively threatening to real progress.
The End of Theory? Maybe So
Hence when Chris Anderson wrote in 2008 that Big Data and supercomputing (and machine learning, i.e., induction) meant the "End of Theory," he echoed the popular Silicon Valley worldview that machines are evolving a human -- and eventually a superhuman -- intelligence, and he simultaneously imperiled scientific discovery. Why? Because (a) machines aren't gaining abductive inference powers, and so aren't getting smart in the manner relevant to underwrite "end of theory" arguments, and (b) ignoring the need for scientists to use their minds to understand and explain "data" essentially guts the central driving force of scientific change.
To put this on more practical footing yet again: over five hundred neuroscientists petitioned the EU last year because a huge portion of funding for fundamental neuroscience research (over a billion euros) went to the Human Brain Project, an engineering effort that presupposes that fundamental pieces of theory about the brain are already in place. The only way a reasonable person could believe that is if he were convinced that the Big Data/AI model would somehow yield those theoretical fruits along the way. When pressed, however, the silence as to how exactly that will happen is deafening.
The answer Markram and others want to provide -- if only sci-fi arguments worked on EU officials or practicing neuroscientists -- is that the computers will keep getting "smarter." And so that myth is really at the root of a lot of current confusion. Make no mistake, the dream of AI is one thing, but the belief that AI is around the corner and inevitable is just a fiction, and potentially a harmful one at that.
To chance and necessity be the glory?
Moths Defy the Possible
Evolution News & Views
How do you make choices in a data-poor environment? Imagine being in a dark room in total silence. Every few seconds, a tiny flash of light appears. You might keep your eyes open as long as possible to avoid missing any of the flashes. You might watch them over time to see if there's a pattern. If you see one, you might use it to anticipate when the next flash will come.
The ability to navigate this way in a dim world is called a summation strategy. "This slowing visual response is consistent with temporal summation, a visual strategy whereby the visual integration time (or 'shutter time') is lengthened to increase visual reliability in dim light," Eric Warrant explains in Science. He's discussing how hawkmoths perform "Visual tracking in the dead of night," and he's clearly impressed by how amazingly well insects "defy the possible" as they move through the world:
Nocturnal insects live in a dim world. They have brains smaller than a grain of rice, and eyes that are even smaller. Yet, they have remarkable visual abilities, many of which seem to defy what is physically possible. On page 1245 of this issue, Sponberg et al. reveal how one species, the hawkmoth Manduca sexta, is able to accurately track wind-tossed flowers in near darkness and remain stationary while hovering and feeding. [Emphasis added.]
The hawkmoth has some peers on the Olympic award platform:
Examples of remarkable visual abilities include the nocturnal Central American sweat bee Megalopta genalis, which can use learned visual landmarks to navigate from its nest -- an inconspicuous hollowed-out twig hanging in the tangled undergrowth -- through a dark and complex rainforest to a distant source of nocturnal flowers, and then return. The nocturnal Australian bull ant Myrmecia pyriformis manages similar navigational feats on foot. Nocturnal South African dung beetles can use the dim celestial pattern of polarized light around the moon or the bright band of light in the Milky Way as a visual compass to trace out a beeline when rolling dung balls. Some nocturnal insects, like the elephant hawkmoth Deilephila elpenor, even have trichromatic color vision.
These insects pack a lot of computing power in brains the size of a grain of rice. How do they do it? Part of the answer lies in the fine-tuning between object and sensor:
It turns out that even though the hawkmoths must compromise tracking accuracy to meet the demands of visual motion detection in dim light, the tracking error remains small exactly over the range of frequencies with which wind-tossed flowers move in the wild. The results reveal a remarkable match between the sensorimotor performance of an animal and the dynamics of the sensory stimulus that it most needs to detect.
A tiny brain imposes real-world constraints on processing speed. The hawkmoth, so equipped, faces limits on sensorimotor performance: how sensitive its eyes are in dim light, how quickly it can perceive motion in the flower, and how fast it can move its muscles to stay in sync. The moth inserts its proboscis into the flower, and if a breeze moves the flower about, the moth has to be able to keep up with it to get its food. To meet the challenge, its brain software includes the "remarkable" ability to perform data summation and path integration fast enough to move with the flower while it feeds.
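To make the summation idea concrete, here is a toy sketch (my own illustration, not the paper's model; the photon rate, frame rate, and window lengths are invented for the example) of how lengthening the "shutter" by averaging more successive samples beats down photon noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy photoreceptor: a dim, slowly varying brightness signal buried in photon
# (shot) noise. All numbers are illustrative assumptions, not measured values.
fs = 200.0                                             # assumed sample rate (Hz)
t = np.arange(0.0, 5.0, 1.0 / fs)
brightness = 1.0 + 0.5 * np.sin(2 * np.pi * 1.0 * t)   # 1 Hz "flower" signal
counts = rng.poisson(lam=2.0 * brightness)             # few photons per sample

def summate(x, window):
    """Temporal summation: average `window` successive samples (a longer shutter)."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

for window in (1, 5, 25):
    estimate = summate(counts, window)
    # Compare against the equally smoothed true mean, so the residual is pure noise.
    residual = estimate - summate(2.0 * brightness, window)
    print(f"window = {window:2d} samples ({1000 * window / fs:4.0f} ms shutter): "
          f"noise sd = {residual.std():.3f}")
```

The residual noise falls roughly as the square root of the window length, which is the "visual reliability" gain Warrant describes; the price is that the same averaging smears out any motion much faster than the window.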
In their experiments, Sponberg et al. observed hawkmoths in a specially designed chamber. They were able to control light levels and move artificial flowers containing a sugar solution at different speeds. "During experiments, this flower was attached to a motorized arm that moved the flower from side to side in a complex trajectory," Warrant says.
The component movement frequencies of this trajectory varied over two orders of magnitude and encompassed the narrower range of frequencies typical of wind-tossed flowers. A hovering moth fed from the flower by extending its proboscis into the reservoir, rapidly flying from side to side to maintain feeding by stabilizing the moving flower in the center of its visual field.
The experiment allowed the researchers to cross the line from possible to impossible, showing at what point the moth could not keep up. Dimmer light requires longer integration time, while faster motion requires quicker muscle response. Still, these little flyers "tracked the flower remarkably well" by using the temporal summation strategy.
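A back-of-the-envelope model (again my own, not Sponberg et al.'s analysis) suggests why the tradeoff lands where it does. If the summation window acts like a simple moving-average filter on the flower's position, the fraction of the motion the moth fails to track grows steeply with frequency; the 100 ms window below is an assumed value for illustration:

```python
import numpy as np

WINDOW_S = 0.10   # assumed summation window, in seconds (illustrative only)

def tracking_error(f_hz, window_s=WINDOW_S):
    """Error of a zero-phase moving average following sin(2*pi*f*t), as a
    fraction of the flower's movement amplitude. The filter's gain at
    frequency f is sin(pi*f*T)/(pi*f*T), i.e., numpy's sinc(f*T)."""
    gain = np.sinc(f_hz * window_s)
    return abs(1.0 - gain)

for f in (0.5, 1.0, 2.0, 5.0, 10.0):
    print(f"{f:4.1f} Hz flower motion: error ~ {100 * tracking_error(f):5.1f}% of amplitude")
```

On this toy model the error stays within a few percent below ~2 Hz and approaches 100% by 10 Hz, the same shape as the performance cliff the experiment revealed.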
Hummingbirds feed on moving flowers, too, but usually in broad daylight. To find this ability to track a moving food source in a tinier creature possessing a much smaller brain is truly amazing -- especially considering that it has less light to see by.
This strategy has recently been demonstrated in bumblebees flying in dim light and has been predicted for nocturnal hawkmoths. Although temporal summation sacrifices the perception of faster objects, it strengthens the perception of slower ones, like the slower movement frequencies (below ~2 Hz) of the robotic flower.
... and that just happens to be the maximum speed of the natural flowers in the moth's environment. How did this perfect match arise? Why, natural selection, of course. Here comes the narrative gloss:
By carefully analyzing the movements of several species of flowers tossed by natural winds -- including those favored by hawkmoths -- Sponberg et al. discovered that their movements were confined to frequencies below ~2 Hz. Thus, despite visual limitations in dim light, the flight dynamics and visual summation strategies of hovering hawkmoths have evolved to perfectly match the movement characteristics of flowers, their only food source. The implications of the study go far beyond this particular species. It shows how in small animals like hawkmoths, with limited nervous system capacities and stretched energy budgets, the forces of natural selection have matched sensory and motor processing to the most pressing ecological tasks that animals must perform in order to survive. This is done not by maximizing performance in every possible aspect of behavior, but by stripping away everything but the absolutely necessary and honing what remains to perform tasks as accurately and efficiently as possible.
Do the experimenters agree with this narrative? They actually have little to say about evolution. Near the end of the paper, they speculate a little:
The frequencies with which a moth can maneuver could provide a selective pressure on the biomechanics of flowers to avoid producing floral movements faster than those that the moth can track in low light. The converse interaction -- flower motions selecting on the moth -- could also be important, suggesting a coevolutionary relationship between pollinator and plant that extends beyond color, odor, and spatial features to include motion dynamics.
The evolutionary narrative, though, is unsatisfying. It is only conceptual, not empirical (nobody saw the flower and moth co-evolve). Additionally, flowers could just as well thrive with other pollinators that operate during the daytime. Or the moth could simply adjust its biological clock and feed in better light. A theory of the form "It is, therefore it evolved" could explain anything.
Warrant also mischaracterizes natural selection as a force. Natural selection is more like a bumper than a force; it's the hub in the pinball game, not the flipper that an intelligent agent uses. It's far easier for a moth to drop the ball in the hole (i.e., go extinct) than to decide what capacities it must stretch to match the hubs in its game. The hub, certainly, cares nothing about whether the player wins or not. It's not going to tell the moth, "Pssst ... strip away everything that's not absolutely necessary, and hone what remains, and you might win!" Personifying natural selection in this way does not foster scientific understanding.
Worst of all, the evolutionary explanation presupposes the existence of highly complex traits that are available to be stripped or honed: flight, a brain, muscles -- the works. You can't hone what isn't there.
What we observe is a tightly adapted relationship between flower and moth that reaches to the limits of the possible. Any stripping or honing comes not from the environment, but from internal information encoded in the organism. Intelligent causes know how to code for robustness, so that a program can work in a variety of circumstances. Seeing this kind of software packed into a computer the size of a grain of rice makes the design inference even more compelling.
Illustra Media's documentary Metamorphosis showed in vibrant color the remarkable continent-spanning migration of the Monarch butterfly. Their new documentary, Living Waters (coming out this summer), shows dramatic examples of long-distance migration and targeting in the oceans and rivers of the earth, where the lack of visual cues makes finding the target even more demanding. The film makes powerful arguments against the abilities of natural selection, and for the explanatory fruitfulness of intelligent design.