Sunday, 6 December 2015
On tool making among subhuman primates.
Bonobos use tools on a “pre-agricultural” level?
December 5, 2015 Posted by News under Animal minds, Intelligent Design, News
From ScienceDaily:
Among other findings, a bonobo was observed for the first time making and using spears in a social setting for the purpose of attack and defense. “I believe that the current study will break down our cultural hang-up as humans concerning the inherent capabilities and potential of bonobos and chimpanzees,” says Itai Roffman of the Institute of Evolution at the University of Haifa, who undertook the study …
Interestingly, the bonobos are considered less sophisticated than their chimpanzee siblings. Chimpanzees have been observed in nature using branches to dig for tubers in the ground and to break into termite nests and beehives. As part of their cultural diversity, they have also been documented breaking nuts with hammer and anvil, and even manipulating branches into spears for use in hunting small prosimians that hide in tree hollows. By contrast, bonobos were known as a social species that engages in extensive sexual behavior and have not been observed in nature using tools. Roffman’s doctorate thesis (under the supervision of Professors Eviatar Nevo and Avraham Ronen of the University of Haifa) examines diverse pre-human/Homo characteristics among chimpanzees and bonobos. Three years ago, Roffman already managed to show that two bonobos were capable of preparing and using a range of early Homo type stone tools in order to reach inaccessible food in natural contexts. These two bonobos — famous siblings Kanzi and Pan-banisha — grew up in a human environment and have even learned to communicate using computerized English Lexigram symbols, allowing them to competently engage in rational discourse with humans.
More.
Okay. As soon as someone starts talking about “our cultural hang-up as humans concerning the inherent capabilities and potential of bonobos and chimpanzees,” we had better check the wind sock.
Chimpanzees use tools; so do ravens and a variety of other species. So, while bonobo tool use is an interesting find, it is not a major or unexpected discovery.
It’s not clear what “pre-agricultural” means, but if we could come back fifteen thousand years from now, bonobos would probably still be doing it the same way.
The bonobos are not engaging in “rational discourse” with humans. They are using a symbol system (lexigrams) taught to them by humans; Alex the parrot likewise learned human-devised labels. The trouble is, none of these species invent, develop, or pass on such languages, probably because they do not need them except to communicate with humans.
It’s quite possible that friendly contact with humans plays a role in how easily bonobos adapt to tools. They have hands, after all, so it isn’t difficult for them to see, in principle, what could be done. A variety of animals can be taught to manipulate objects; they tend to reach a plateau that meets their needs and then stop learning. But it is fun while it lasts.
ID's opponents keep ignoring the scoreboard.
What is Wrong with Sober's Attack on ID? (Part III): Ignoring the Widely Discussed Positive Predictions of Intelligent Design
Casey Luskin March 30, 2007 2:30 AM
Philosopher Elliott Sober recently published an article entitled, "What is Wrong With Intelligent Design?" which claimed that intelligent design is not testable. In Part I, I rebutted Sober's early history of intelligent design. Part II explained how Sober made the curious charge that auxiliary predictions weaken the testability of a scientific theory, something which Darwinists are famous for doing. This third installment will assess Sober's characterization of ID and explain how Sober ignores positive predictions of intelligent design. Sober misses two key points about intelligent design, leading him to false conclusions:
(1) It's simple: intelligent design detects the past action of intelligence, nothing more, and nothing less
Sober states: "We have no independent evidence concerning which auxiliary propositions about the putative designer's goals and abilities are true." That's not correct. While the "goals" of the designer may be beyond the reach of scientific inquiry, ID does make claims about the "abilities" of the designer. Sober then provides quotes from design proponents, and he fails to recognize that they always refer to detecting intelligence! We understand the abilities of an intelligent agent and we understand what intelligence produces (discussed below). Sober doubly misrepresents ID: he wrongly expects ID to identify the "goals" of the designer, but then fails to recognize that ID identifies the "abilities" of the designer.
(2) Studies of intelligence show that a unique hallmark of intelligence is its ability to produce high levels of complex and specified information.
Intelligence is a feature we understand and comprehend from our studies of human intelligence in the natural world. From these studies, William Dembski explains that "the primary, empirically verifiable thing that intelligences do is generate specified complexity." (Dembski, The Design Revolution, pg. 194). But does the generation of specified complexity make ID testable in a "comparative" sense (see Part II) with respect to neo-Darwinism? Yes, it does.
Dembski explains that natural processes like the neo-Darwinian mechanism do not generate high levels of specified complexity:
[Intelligent design is] a fully scientific claim and follows directly from the complexity-specification criterion. In particular this is not an argument from ignorance. Just as physicists reject perpetual motion machines because of what they know about the inherent constraints on energy and matter, so too design theorists reject any naturalistic reduction of specified complexity because of what they know about the inherent constraints on natural causes. Natural causes are too stupid to keep pace with intelligent causes. Intelligent design theory provides a rigorous scientific demonstration of this long-standing intuition. Let me stress, the complexity-specification criterion is not a principle that comes to us demanding our unexamined acceptance--it is not an article of faith. Rather it is the outcome of a careful and sustained argument about the precise interrelationships between necessity, chance and design.
(William Dembski, Intelligent Design: The Bridge Between Science and Theology, pg. 223 (InterVarsity Press, 1999).)
Thus, according to Dembski, intelligence produces high levels of specified complexity, but neo-Darwinian processes do not. Sober never once mentions specified complexity in his article, which is strange, since it is such a central component of intelligent design today.
Sober Botches Irreducible Complexity
Similarly, Sober also ignores that irreducible complexity is a unique indicator of intelligent design, but he states that irreducible complexity "does nothing to test ID. For ID to be testable, it must make predictions." Claiming that irreducible complexity is nothing more than a critique of evolution, Sober writes "The fact that a different theory makes a prediction says nothing about whether ID is testable. Behe has merely changed the subject." Here, Sober is repeating the Darwinist plaintiffs' arguments in the Kitzmiller case. But Sober misrepresents ID and ignores the fact that ID theorists have argued extensively that irreducible complexity is not just a negative argument against evolution, but also a positive indicator of design. Behe writes:
[I]rreducibly complex systems such as mousetraps and flagella serve both as negative arguments against gradualistic explanations like Darwin's and as positive arguments for design. The negative argument is that such interactive systems resist explanation by the tiny steps that a Darwinian path would be expected to take. The positive argument is that their parts appear arranged to serve a purpose, which is exactly how we detect design.
(Michael Behe, Darwin's Black Box, Afterword, pgs. 263-264 (Free Press, Reprint, 2006), emphasis added.)
Similarly, Scott Minnich and Steve Meyer see that irreducible complexity is a unique, positive argument for intelligent design:
Molecular machines display a key signature or hallmark of design, namely, irreducible complexity. In all irreducibly complex systems in which the cause of the system is known by experience or observation, intelligent design or engineering played a role in the origin of the system. Given that neither standard neo-Darwinism, nor co-option has adequately accounted for the origin of these machines, or the appearance of design that they manifest, one might now consider the design hypothesis as the best explanation for the origin of irreducibly complex systems in living organisms. ... Although some may argue this is merely an argument from ignorance, we regard it as an inference to the best explanation, given what we know about the powers of intelligent as opposed to strictly natural or material causes.
(Scott A. Minnich & Stephen C. Meyer, Genetic analysis of coordinate flagellar and type III regulatory circuits in pathogenic bacteria, in Proceedings of the Second International Conference on Design & Nature.)
Incredibly, Sober makes no mention of the fact that design proponents have formulated irreducible complexity or specified complexity as positive indicators and predictions of design. He completely ignores these in order to make his central point that ID makes no positive predictions.
Machine code Vs. Darwinism.
Time Machine: An Early Argument for Intelligent Design
Granville Sewell December 1, 2015 5:45 PM
As I begin my 12th year of work on TWODEPEP (now PDE/PROTRAN), I am intrigued by the analogy between the 11-year evolution of this computer code and the multi-billion year history of the genetic code of life, which contains a blueprint for a species encoded into billions of bits of information. Like the code of life, TWODEPEP began with primitive features, being capable of solving only a single linear elliptic equation in polygonal regions, with simple boundary conditions. It passed through many useful stages as it adapted to non-linear and time-dependent problems, systems of PDEs, eigenvalue problems, and as it evolved cubic and quartic elements and isoparametric elements for curved boundaries. It grew a preprocessor and a graphical output package, and out-of-core frontal and conjugate gradient methods were added to solve the linear systems.
Each of these changes represented major evolutionary steps -- new orders, classes or phyla, if you will. The conjugate gradient method, in turn, also passed through several less major variations as the basic method was modified to precondition the matrix, to handle nonsymmetric systems, and as stopping criteria were altered, etc. Some of these variations might be considered new families, some new genera, and some only special changes.
I see one flaw in the analogy, however. While I am told that the DNA code was designed by a natural process capable of recognizing improvements but incapable of planning beyond the next random mutation, I find it difficult to believe that TWODEPEP could have been designed by a programmer incapable of thinking ahead more than a few characters at a time.
But perhaps, it might be suggested, a programmer capable of making only random changes, but quite skilled at recognizing improvements, could, given 4.5 billion years to work on it, evolve such a program. A few simple calculations show that this programmer would have to rely on very tiny improvements. For example, if he could produce a billion random "mutations" per second (or, for a better analogy, suppose a billion programmers could produce one "mutation" per second each), he could not, statistically, hope to produce any predetermined 20-character improvement during this time period. Could such a programmer, with no programming or mathematical skills other than the ability to recognize and select out very small improvements through testing, design a sophisticated finite element program?
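A back-of-the-envelope sketch makes that arithmetic concrete. The figures below are illustrative assumptions, not Sewell's: a 64-symbol source-code alphabet, a billion random edits per second, and 4.5 billion years of trials.

```python
# Rough check of the "predetermined 20-character improvement" claim.
# Assumptions (illustrative only): a 64-symbol source-code alphabet,
# 1e9 random edits per second, and 4.5 billion years of trials.

ALPHABET_SIZE = 64          # assumed characters available per position
TARGET_LENGTH = 20          # length of the predetermined improvement
EDITS_PER_SECOND = 1e9      # assumed rate of random "mutations"
YEARS = 4.5e9               # assumed time available
SECONDS_PER_YEAR = 3.156e7

target_space = ALPHABET_SIZE ** TARGET_LENGTH          # ~1.3e36 possible strings
trials = EDITS_PER_SECOND * YEARS * SECONDS_PER_YEAR   # ~1.4e26 trials

# With probability 1/target_space per trial, the chance of at least one
# exact hit over n trials is approximately n / target_space when small.
chance_of_hit = trials / target_space

print(f"possible 20-character strings: {target_space:.2e}")
print(f"random trials available:       {trials:.2e}")
print(f"approximate chance of a hit:   {chance_of_hit:.2e}")  # ~1e-10
```

Even on these generous assumptions, the expected number of hits works out to roughly one in ten billion, which is the point Sewell is driving at.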
The Darwinist would presumably say, yes, but to anyone who has had minimal programming experience such an idea is preposterous. The major changes to TWODEPEP, such as the addition of a new linear equation solver or new element, required the addition or modification of hundreds of lines of code before the new feature was functional. None of the changes made during this period were of any use whatever until all were in place.
Even the smallest modifications to that new feature, once it was functional, required adding several lines, no one of which made any sense, or provided any "selective advantage," when added by itself.
Consider, by way of analogy, the airtight trap of the carnivorous bladderwort plant, which has a double sealed, valve-like door which is opened when a trigger hair is activated, causing the victim to be sucked into the vacuum of the trap (described by R.F. Daubenmire in Plants and Environment, John Wiley and Sons, N.Y. 1947). It is difficult to see what selective advantage this trap provided until it was almost perfect.
This, then, is the fallacy of Darwin's explanation for the causes of evolution -- the idea that major (complex) improvements can be broken down into many minor improvements. French biologist Jean Rostand, in A Biologist's View (William Heinemann Ltd., London, 1956) recognized this:
It does not seem strictly impossible that mutations should have introduced into the animal kingdom the differences which exist between one species and the next...hence it is very tempting to lay also at their door the differences between classes, families and orders, and, in short, the whole of evolution. But it is obvious that such an extrapolation involves the gratuitous attribution to the mutations of the past of a magnitude and power of innovation much greater than is shown by those of today.
The famous "problem of novelties" is another formulation of the objection raised here. How can natural selection cause new organs to arise and guide their development through the initial stages during which they present no selective advantage, the argument goes. The Darwinist is forced to argue that there are no useless stages. He believes that new organs and new systems of organs arose gradually, through many small improvements. But this is like saying that TWODEPEP could have made the transition from a single PDE to systems of PDEs through many five or six character improvements, each of which made it work slightly better on systems.
It is interesting to note that this belief is not supported even by the fossil evidence. Harvard paleontologist George Gaylord Simpson, for example, in The History of Life, Volume II of Evolution after Darwin, (University of Chicago Press, 1960) points out:
It is a feature of the known fossil record that most taxa appear abruptly. They are not, as a rule, led up to by a sequence of almost imperceptibly changing forerunners such as Darwin believed should be usual in evolution...This phenomenon becomes more universal and more intense as the hierarchy of categories is ascended. Gaps among known species are sporadic and often small. Gaps among known orders, classes and phyla are systematic and almost always large. These peculiarities of the record pose one of the most important theoretical problems in the whole history of life: Is the sudden appearance of higher categories a phenomenon of evolution or of the record only, due to sampling bias and other inadequacies?
Another way of describing this same structure is expressed in a recent Life magazine article (Francis Hitching, "Was Darwin Wrong on Evolution?", April 1982, which concludes that "natural selection has been tested and found wanting") which focuses on the "curious consistency" of the fossil gaps:
These are not negligible gaps. They are periods, in all the major evolutionary transitions, when immense physiological changes had to take place.
Unless we are willing to believe that useless, "developing" organs (and insect traps which could almost catch insects) abounded in the past, we should have expected the fossil structure outlined above, with large gaps between the higher categories, where new organs and new systems of organs appeared.
Nevertheless, despite the fact that the structure of the fossil record is the only argument against Darwin which has received much attention lately, this is not the real issue. The "problem of novelties" correctly states the real argument, but too weakly.
Consider, for example, the human eye, with an aperture whose size varies automatically according to the light intensity, controlled by reflex signals from the brain; with a lens whose curvature varies automatically according to the distance to the object in view; and with a retina which receives the picture on color sensitive cells and transmits it, complete with coded intensity and frequency information, through the optic nerve to the brain. The brain superimposes the pictures from the two eyes and stores this 3D picture somehow in memory, and it will be able to search for and recall this image later and use it to recognize an older but familiar face in a different picture. Like TWODEPEP, the eye has passed through various useful stages in its development, but it contains a large number of features which could not reach usefulness in a single random mutation and which provided no selective advantage until useful (e.g. the nerves and arteries which service it), and many groups of features which are useless individually.
The Darwinist may bridge the gaps between taxa with a long chain of tiny improvements in his imagination, but the analogy with software puts his ideas into perspective. The idea that all the magnificent species in the living world, or the human brain with its human consciousness, could have arisen from simple organic molecules guided by a natural process unable to plan beyond the next tiny mutation, is entirely comparable to the idea that a programmer incapable of thinking ahead more than a few characters at a time could, given a lot of time, design any sophisticated computer program.
I suggest that, with Jean Rostand, "we must have the courage to recognize that we know nothing of the mechanism" of evolution.
Rubik's Cube Vs. Materialism.
Rubik's Cube Is a Hand-Sized Illustration of Intelligent Design
Evolution News & Views December 2, 2015 4:15 AM
For those who have not made it a favorite pastime, solving a Rubik's Cube just adds unneeded stress to life. It's frustrating to twist and turn those colors, getting some to match but finding out your last move un-matched colors you had previously matched. Then to find some kid on TV doing it in seconds is enough to send you outside screaming. The world record is now 4.904 seconds by Lucas Etter, a teenager in Maryland, who set the record on November 24.
The cube has over 43 quintillion possible color combinations, mathematicians Tomas Rokicki and Morley Davidson tell us, but only one solution. For those who have screamed enough at these dastardly devices, mathematician Geoff Smith has posted the secret at The Conversation: "How to solve a Rubik's cube in 5 seconds." (It's not really fair to divulge this. We're supposed to be smart enough to figure it out on our own. But we've had enough. Help us! What is it?)
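As an aside before following Smith's answer, the 43 quintillion figure itself is easy to verify. A minimal Python sketch of the standard counting argument for the 3x3x3 cube, offered here as a reader's check rather than anything from Smith's article:

```python
from math import factorial

# Standard count of reachable configurations of a 3x3x3 Rubik's Cube:
# arrange and orient the 8 corner pieces and 12 edge pieces, then divide
# by 12 because permutation parity, total corner twist, and total edge
# flip each constrain which assemblies are actually reachable.

corner_arrangements = factorial(8)   # 8 corner pieces
corner_orientations = 3 ** 8         # each corner can be twisted 3 ways
edge_arrangements   = factorial(12)  # 12 edge pieces
edge_orientations   = 2 ** 12        # each edge can be flipped 2 ways

raw = corner_arrangements * corner_orientations * edge_arrangements * edge_orientations
reachable = raw // 12                # only 1 in 12 assemblies is reachable by legal turns

print(f"{reachable:,}")     # 43,252,003,274,489,856,000
print(f"{reachable:.2e}")   # roughly 4.3 x 10^19
```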
So how do the likes of Lucas Etter work out how to solve Rubik's cube so quickly? They could read instructions, but that rather spoils the fun. If you want to work out how to do it yourself, you need to develop cube-solving tools. [Emphasis added.]
Now isn't that helpful. How to open a can? Develop a can-opening tool. Gee, thanks.
In this sense, a tool is a short sequence of turns which results in only a few of the individual squares on the cube's faces changing position. When you have discovered and memorised enough tools, you can execute them one after the other in order as required to return the cube to its pristine, solved condition.
If you think the secret is going to be easy, keep reading. After defining mathematical groups and commutators, Smith takes us into the labyrinth without a string. We expect the Minotaur to arrive any moment.
Think of the overall structure of the different configurations of a Rubik's cube as a labyrinth, which has that many chambers, each of which contains a Rubik's cube in the state which corresponds to that chamber. From each chamber there are 12 doors leading to other chambers, each door corresponding to a quarter turn of one of the six faces of a cube.
"You are in a maze of twisty passages, all alike." We gave that game up in 1992.
The type of turn needed to pass through each door is written above it, so you know which door is which. Your job is to navigate your way from a particular chamber to the one where the cube on the table is in perfect condition.
Aaarggh! We knew that! Our job is to solve the cube! We plead for mercy.
The mathematical result in Rokicki and Davidson's paper shows that, no matter where you are in the labyrinth, it's possible to reach the winning chamber by passing through at most 26 doors -- although the route you find using your tools is not likely to be that efficient.
Now we begin to see a glimpse of light out of the labyrinth. Twenty-six doors? Tough, but accessible. In fact the mathematicians have pinned down "God's number," as they call it, at 20 face turns (the 26 figure counts quarter turns only, matching the labyrinth's 12 doors). We could do that. Not blindfolded, though, like some winners Smith talks about. But there's no way around memorizing a lot of moves.
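The labyrinth picture is more than a metaphor; it is essentially how such results are computed. A toy illustration (a made-up three-tile puzzle, not the cube itself): a breadth-first search from the solved state visits every "chamber" and reports the farthest one, which is that puzzle's "God's number."

```python
from collections import deque

# Toy version of the labyrinth: each state of a tiny puzzle is a chamber,
# each legal move is a door. The puzzle here is just reordering three
# tiles by swapping adjacent pairs; it stands in for the cube, whose graph
# has about 4.3e19 chambers and needs far more sophisticated methods.

SOLVED = (0, 1, 2)

def neighbors(state):
    """The two 'doors' out of a chamber: swap tiles 0-1 or tiles 1-2."""
    a, b, c = state
    return [(b, a, c), (a, c, b)]

distance = {SOLVED: 0}
queue = deque([SOLVED])
while queue:
    current = queue.popleft()
    for nxt in neighbors(current):
        if nxt not in distance:
            distance[nxt] = distance[current] + 1
            queue.append(nxt)

print(len(distance))           # 6 chambers: all orderings of three tiles
print(max(distance.values()))  # this toy puzzle's "God's number": 3
```

Rokicki and Davidson's result for the real cube rests on the same basic idea, carried out with heavy group-theoretic shortcuts and a great deal of computing time.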
A Useful Instructional Aid
For those interested in explaining ID to people without a lot of memory work, the Rubik's Cube can be a useful instructional aid. You don't have to master the art of solving it. Save your sanity; just buy two cubes, and don't touch the solved one. Lock it into a plastic case if you have to, so that you won't have to try all 43 quintillion combinations in front of your audience. Or, rent a kid who can fix it in a few seconds.
Explain that the cube is a search problem. Take the scrambled one, and show how you want to get from that one to the solved one. You need a search algorithm. Which approach is more likely to find the solution -- intelligent causes or unguided causes? The answer is obvious, but go ahead; rub it in. A robot randomly moving the colors around could conceivably hit on the solution by chance in short order with sheer dumb luck (1 chance in 43 x 10^18), but even if it did, it would most likely keep rotating the colors right back out of order again, not caring a dime. It would take an intelligent agent to recognize the solution and stop the robot when it gets the solution by chance.
More likely, it would take a long, long time. Trying all 43 x 10^18 combinations at 1 per second would take 1.3 trillion years. The robot would have a 50-50 chance of getting the solution in half that time, but even that would vastly exceed the time available (roughly fifty times the age of the universe). If a secular materialist counters that there could be trillions of robots with trillions of cubes working simultaneously throughout the cosmos, ask what the chance is of getting any two winners on the same planet at the same place and time. The one concession blocks the other. And what in the materialist's unguided universe is going to stop any robot when it succeeds? The vast majority will never succeed during the age of the universe.
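Those timescales are easy to check. A minimal sketch, assuming one trial per second and a universe age of about 13.8 billion years:

```python
# Rough check of the exhaustive-search timescale quoted above.
# Assumptions: one configuration tried per second, universe age ~13.8 billion years.

CUBE_STATES = 43_252_003_274_489_856_000   # reachable configurations, ~4.3e19
SECONDS_PER_YEAR = 3.156e7
UNIVERSE_AGE_YEARS = 13.8e9

years_to_try_all = CUBE_STATES / SECONDS_PER_YEAR   # ~1.37e12, i.e. about 1.3 trillion
years_for_even_odds = years_to_try_all / 2          # the 50-50 point

print(f"years to try every state: {years_to_try_all:.2e}")
print(f"years for a 50-50 chance: {years_for_even_odds:.2e}")
print(f"about {years_for_even_odds / UNIVERSE_AGE_YEARS:.0f} times the universe's age")
```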
Now rub it in. The time needed for a robot to solve the cube by sheer dumb luck would vastly exceed the age of the known universe. How fast can an intelligent cause solve it? 4.904 seconds. That's the power of intelligent causes over unguided causes.
Now really, really rub it in. The Rubik's cube is simple compared to a protein. Imagine solving a cube with 20 colors and 100 sides. Then imagine solving hundreds of different such cubes, each with its own solution, simultaneously in the same place at the same time. If the audience doesn't run outside screaming, you didn't speak slowly enough.
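To put rough numbers on that comparison, read the imagined "cube with 20 colors and 100 sides" as 20 independent possibilities at each of 100 positions, loosely like choosing among 20 amino acids at each site of a 100-residue chain. That reading is an illustrative assumption, not a claim about protein chemistry:

```python
# Illustrative comparison only: the imagined "20 colors, 100 sides" cube is
# read as 20 independent choices at each of 100 positions, loosely like the
# 20 amino acids at each site of a 100-residue protein chain.

cube_states    = 43_252_003_274_489_856_000   # standard 3x3x3 cube, ~4.3e19
protein_states = 20 ** 100                    # ~1.3e130 possible sequences

print(f"standard cube search space:   {cube_states:.2e}")
print(f"20^100 sequence search space: {float(protein_states):.2e}")
print(f"ratio is roughly 10^{len(str(protein_states // cube_states)) - 1}")
```

Even on this crude reading, the imagined puzzle dwarfs the cube's search space by more than a hundred orders of magnitude.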
See? You didn't even need to solve it yourself to make a powerful, visual statement.
Evolution News & Views December 2, 2015 4:15 AM
For those who have not made it a favorite pastime, solving a Rubik's Cube just adds unneeded stress to life. It's frustrating to twist and turn those colors, getting some to match but finding out your last move un-matched colors you had previously matched. Then to find some kid on TV doing it in seconds is enough to send you outside screaming. The world record is now 4.904 seconds by Lucas Etter, a teenager in Maryland, who set the record on November 24.
The cube has over 43 quintillion possible color combinations, mathematicians Tomas Rokiki and Morley Davidson tell us, but only one solution. For those who have screamed enough at these dastardly devices, mathematician Geoff Smith has posted the secret at The Conversation: "How to solve a Rubik's cube in 5 seconds." (It's not really fair to divulge this. We're supposed to be smart enough to figure it out on our own. But we've had enough. Help us! What is it?)
So how do the likes of Lucas Etter work out how to solve Rubik's cube so quickly? They could read instructions, but that rather spoils the fun. If you want to work out how to do it yourself, you need to develop cube-solving tools. [Emphasis added.]
Now isn't that helpful. How to open a can? Develop a can-opening tool. Gee, thanks.
In this sense, a tool is a short sequence of turns which results in only a few of the individual squares on the cube's faces changing position. When you have discovered and memorised enough tools, you can execute them one after the other in order as required to return the cube to its pristine, solved condition.
If you think the secret is going to be easy, keep reading. After defining mathematical groups and commutators, Smith takes us into the labyrinth without a string. We expect the Minotaur to arrive any moment.
Think of the overall structure of the different configurations of a Rubik's cube as a labyrinth, which has that many chambers, each of which contains a Rubik's cube in the state which corresponds to that chamber. From each chamber there are 12 doors leading to other chambers, each door corresponding to a quarter turn of one of the six faces of a cube.
"You are in a maze of twisty passages, all alike." We gave that game up in 1992.
The type of turn needed to pass through each door is written above it, so you know which door is which. Your job is to navigate your way from a particular chamber to the one where the cube on the table is in perfect condition.
Aaarggh! We knew that! Our job is to solve the cube! We plead for mercy.
The mathematical result in Rokicki and Davidson's paper shows that, no matter where you are in the labyrinth, it's possible to reach the winning chamber by passing through at most 26 doors -- although the route you find using your tools is not likely to be that efficient.
Now we begin to see a glimpse of the light out of the labyrinth. 26 doors? Tough, but accessible. Actually, the mathematicians have updated "God's number" as they call it to 20. We could do that. Not blindfolded, though, like some winners Smith talks about. But there's no way around memorizing a lot of moves.
A Useful Instructional Aid
For those interested in explaining ID to people without a lot of memory work, the Rubik's Cube can be a useful instructional aid. You don't have to master the art of solving it. Save your sanity; just buy two cubes, and don't touch the solved one. Lock it into a plastic case if you have to, so that you won't have to try all 43 quintillion combinations in front of your audience. Or, rent a kid who can fix it in a few seconds.
Explain that the cube is a search problem. Take the scrambled one, and show how you want to get from that one to the solved one. You need a search algorithm. Which approach is more likely to find the solution -- intelligent causes or unguided causes? The answer is obvious, but go ahead; rub it in. A robot randomly moving the colors around could conceivably hit on the solution by chance in short order with sheer dumb luck (1 chance in 43 x 1018), but even if it did, it would most likely keep rotating the colors right back out of order again, not caring a dime. It would take an intelligent agent to recognize the solution and stop the robot when it gets the solution by chance.
More likely, it would take a long, long time. Trying all 43 x 1018 combinations at 1 per second would take 1.3 trillion years. The robot would have a 50-50 chance of getting the solution in half that time, but it would already vastly exceed the time available (about forty times the age of the universe). If a secular materialist counters that there could be trillions of robots with trillions of cubes working simultaneously throughout the cosmos, ask what the chance is of getting any two winners on the same planet at the same place and time. The one concession blocks the other. And what in the materialist's unguided universe is going to stop any robot when it succeeds? The vast majority will never succeed during the age of the universe.
Now rub it in. It would vastly exceed the age of the known universe for a robot to solve the cube by sheer dumb luck. How fast can an intelligent cause solve it? 4.904 seconds. That's the power of intelligent causes over unguided causes.
Now really, really rub it in. The Rubik's cube is simple compared to a protein. Imagine solving a cube with 20 colors and 100 sides. Then imagine solving hundreds of different such cubes, each with its own solution, simultaneously in the same place at the same time. If the audience doesn't run outside screaming, you didn't speak slowly enough.
See? You didn't even need to solve it yourself to make a powerful, visual statement.
Darwinism Vs. Animal consciousness.
What Can We Hope to Learn About Animal Minds?
Denyse O'Leary December 4, 2015 3:36 AM
Human consciousness is difficult to define and "arguably the central issue in current theorizing about the mind," even though we experience it all our waking hours. If we can't even define our own consciousness, can we say whether a different type of life form has consciousness or a mind?
Some current philosophers have reasoned away the problem by positing that rocks have minds too. Their approach is summarized by New York Times writer Jim Holt as follows:
We are biological beings. We exist because of self-replicating chemicals. We detect and act on information from our environment so that the self-replication will continue. As a byproduct, we have developed brains that, we fondly believe, are the most intricate things in the universe. We look down our noses at brute matter.
Rocks, we are told, are full of chemical information, and in philosopher David Chalmers's slogan, "Experience is information from the inside; physics is information from the outside."
On that view, it is simply impossible to demarcate anything between rocks and humans as a threshold of consciousness. That approach, if it lacks other merit, reveals the difficulty that consciousness creates for naturalism, the idea that nature is all that exists.
Consciousness (a mind) perceives and acts on information. But there are at least two more basic, and probably unconscious, qualities that distinguish life from non-life and seem to act by processing information: self-preservation and adaptability.
Life forms constantly try to preserve themselves in a living state -- that is, they try to survive. They adapt their methods as needed, whenever possible. A rock falls from a high cliff and breaks; a cat has somehow learned to relax, turn in mid-air, and land on his feet. Or consider Slijper's goat and Faith the dog, both of whom, born without forelegs, adapted to a lifestyle that is quite unnatural for their species.
But why do life forms struggle so hard to remain alive when the option of simply dying -- ceasing to be a life form at all, and rejoining the chemical seas -- is readily available, and eventually inevitable?
Naturalist explanations don't turn out to be much help with any of this. Polymath Christoph Adami, interviewed in Quanta Magazine, sees life itself as "self-perpetuating information strings," and defines information as "the ability to make predictions with a likelihood better than chance."
As it happens, he also thinks that "the first piece of information has to have arisen by chance":
On the one hand, the problem is easy; on the other, it's difficult. We don't know what that symbolic language was at the origins of life. It could have been RNA or any other set of molecules. But it has to have been an alphabet. The easy part is asking simply what the likelihood of life is, given absolutely no knowledge of the distribution of the letters of the alphabet. In other words, each letter of the alphabet is at your disposal with equal frequency.
So an alphabet arose by chance? Adami places some confidence in the idea that upheavals around volcanic vents began that alphabet. One can't help but wonder why volcanoes work so differently now.
Taking a slightly different tack, theoretical biologist Kalevi Kull, author of Towards a Semiotic Biology: Life Is the Action of Signs, asks whether life is a form of signaling. Perhaps so, but signaling places us in the world of purpose, not random events.
Communication begins far below the level of the whole life form. One can hardly talk about the genome now, it seems, without an understanding of its complex grammar, "more complex than that of even the most intricately constructed spoken languages in the world" according to Karolinska researchers:
Their analysis reveals that the grammar of the genetic code is much more complex than that of even the most complex human languages. Instead of simply joining two words together by deleting a space, the individual words that are joined together in compound DNA words are altered, leading to a large number of completely new words.
On Adami's view, all this purpose, adaptation, information, signaling, and language originates in the random creation of an alphabet in the ferment around a volcano. Such a position is forced by the claims of naturalism, but is in no way compelled by evidence.
And it all happens whether there is consciousness or not. We experience consciousness, so we assume that other humans do. When we say, informally, that an animal is conscious, we mean that its behavior suggests that it is aware of its own needs, sensations, and environment, of self vs. not-self, of relationships with "not-selves," and such.
So can we determine accurately whether life forms are conscious? Brainless jellyfish, for example, are now thought to act with purpose when fishing.
Are the jellyfish conscious of that purpose? That would amount to having a mind without a brain. But the actual relationship between mind and brain is not -- as we shall see -- as straightforward as was once supposed. For one thing, there does not seem to be a "tree of intelligence," in the sense of a completely consistent correlation between size/type of brain and observed intelligence. We might do best to stick with observation for now and defer classification till later.
Life forms communicate with each other to a degree that often surprises researchers. Prey animals, for example, warn predators of the danger of eating them or advise other prey that a hiding place is taken. But evidence suggests that plants can communicate too. The Scientist tells us:
Researchers are unearthing evidence that, far from being unresponsive and uncommunicative organisms, plants engage in regular conversation. In addition to warning neighbors of herbivore attacks, they alert each other to threatening pathogens and impending droughts, and even recognize kin, continually adapting to the information they receive from plants growing around them. Moreover, plants can "talk" in several different ways: via airborne chemicals, soluble compounds exchanged by roots and networks of threadlike fungi, and perhaps even ultrasonic sounds. Plants, it seems, have a social life that scientists are just beginning to understand.
So while communication, like purpose, is everywhere, the degree to which a life form is conscious of itself in communication with another life form is still elusive.
Naturalists who do not define the problem out of existence by insisting that even rocks have minds have generally adopted another approach. They try to find animal behaviors that are such close equivalents to human behavior that all such behaviors can be lumped together as, in Francis Crick's phrase, "nothing but a pack of neurons." Nothing remains but to provide a naturalist account, like Adami's, as to how they originally came to be a pack.
What the naturalists are doing is called anthropomorphism -- ascribing human qualities to life forms that may experience life very differently. It was once the province of folk tales. Not today. This year, we were told in the science press that bacteria have morals:
Far from being selfish organisms whose sole purpose is to maximize their own reproduction, bacteria in large communities work for the greater good by resolving a social conflict among individuals to enhance the survival of their entire community.
This finding supports group selection rather than the selfish gene. But it raises questions: Do bacteria have a mind in the absence of a brain? Or is group selection a purposeful force that can operate (as if by magic) in the absence of a mind or brain? Or is the concept of "morals" in fact specific to the human mind, in which case bacteria are not best described as acting that way even by analogy, lest the description become misleading?
In a similar vein, philosopher Stephen Cave argues that animals have free will, proposing to measure it by an FQ, a freedom quotient analogous to IQ:
Experimenters measure this ability by testing how long an animal can resist a small treat in return for a larger reward after a delay. Chickens, for example, can do this for six seconds. They can choose whether to wait for the juicier titbit or not -- but only if that titbit comes very soon. A chimpanzee, on the other hand, can wait for a cool two minutes -- or even up to eight minutes in some experiments. I am guessing that you could manage a lot longer.
Without knowing who the "you" is, I'm not betting anything. Not so sure I'd bet on an unknown chimp either, given the spread cited above.
The research sounds fascinating, but Cave then tells us that
... all around us, every day, we see a very natural kind of freedom -- one that is completely compatible with determinism. It is the kind that living things need to pursue their goals in a world that continually presents them with multiple possibilities.
Obviously, if "free will" is completely compatible with determinism (and Darwinism, he tells us, accounts for that) then free will as we traditionally understand it doesn't exist and can't be measured in any life form. So why claim it does, and can?
Cave's thesis about free will differs significantly from the observation above that jellyfish pursue fish with intent: The intentional behavior of jellyfish is observed; we simply don't know if they know their own intentions. Cave, by contrast, wants to account for human as well as animal intentions as entirely determined while appearing free -- in order to support a fully naturalist perspective.
These two approaches to gathering information are likely to come into increasing conflict. Is the purpose of gathering the information to find out what is going on or to provide support for naturalism? A naturalist will probably answer "Both!" A materialist assumes that any finding can be interpreted from his perspective, however bad the fit, and will keep tinkering until he find a somewhat better fit. The rest of us are prepared to look around and see there may be a better explanation.
Similarly, consider the recent research on why hive workers sometimes kill their queens:
"Workers are assessing the situation in their colony and deciding to revolt against the queen only when the genetic makeup of the colony makes it favorable to do so," Loope said. "The main advantage is to allow your sister workers to lay male eggs, rather than the queen, who typically stops worker reproduction by egg eating, attacking reproducing workers, and by laying many of her own eggs. By eliminating the queen, a matricidal worker allows other workers and herself to lay male eggs."
...
"Hence the matricide," Loope said. "Workers are not mindless automatons working for the queen no matter what. They only altruistically give up reproduction when the context is right, but revolt when it benefits them to do so."
These short quotations from a longer account convey only in part the detailed reasoning hypothesized for the insects. That raises an obvious question: Who or what exactly is doing the reasoning? Some would say natural selection. But natural selection -- the fact that some life forms survive and pass on their traits, while others don't -- must be one of two things. Either it is a mind suited to detailed calculations of self-interest. Or it has somehow produced in the insects' minds capable of such a feat without, so far as we can see, having the brains to match.
Darwin believed that natural selection was the acting agent:
... natural selection is daily and hourly scrutinizing, throughout the world, every variation, even the slightest; rejecting that which is bad, preserving and adding up all that is good; silently and insensibly working, wherever and whenever opportunity offers, at the improvement of each organic being in relation to its organic and inorganic conditions of life.
But Darwin wrote in an age when the behavior of life forms was not known to be this complex. He most likely had much simpler scenarios in view, squabbles over a kill perhaps.
Fortunately, we may not need to make sense of the current state of naturalism to gain at least some insights into animal mind. We humans have a sense of "self" that goes well beyond a drive to continue to exist. But to what extent do other life forms have this sense? Recent decades of research on apes and monkeys can give us some sense of the territory we are entering.
Next time, we'll discuss some of the hope, hype, and hard data about ape and monkey minds.
Denyse O'Leary December 4, 2015 3:36 AM
Human consciousness is difficult to define and "arguably the central issue in current theorizing about the mind," even though we experience it all our waking hours. If we can't even define our own consciousness, can we say whether a different type of life form has consciousness or a mind?
Some current philosophers have reasoned away the problem by positing that rocks have minds too. Their approach is summarized by New York Times writer Jim Holt as follows:
We are biological beings. We exist because of self-replicating chemicals. We detect and act on information from our environment so that the self-replication will continue. As a byproduct, we have developed brains that, we fondly believe, are the most intricate things in the universe. We look down our noses at brute matter.
Rocks, we are told, are full of chemical information, and in philosopher David Chalmers's slogan, "Experience is information from the inside; physics is information from the outside."
On that view, there is no point between rocks and humans at which a threshold of consciousness could be drawn. That approach, whatever else may be said for it, at least reveals the difficulty that consciousness creates for naturalism, the idea that nature is all that exists.
Consciousness (a mind) perceives and acts on information. But at least two more basic, and probably unconscious, qualities also distinguish life from non-life and appear to act by processing information: self-preservation and adaptability.
Life forms constantly try to preserve themselves in a living state -- that is, they try to survive. They adapt their methods as needed, whenever possible. A rock falls from a high cliff and breaks; a cat has somehow learned to relax, turn in mid-air, and land on its feet. Or consider Slijper's goat and Faith the dog, both of whom, born without forelegs, adapted to a lifestyle quite unnatural for their species.
But why do life forms struggle so hard to remain alive when the option of simply dying -- ceasing to be a life form at all, and rejoining the chemical seas -- is readily available, and eventually inevitable?
Naturalist explanations don't turn out to be much help with any of this. Polymath Christoph Adami, interviewed in Quanta Magazine, sees life itself as "self-perpetuating information strings," and defines information as "the ability to make predictions with a likelihood better than chance."
As it happens, he also thinks that "the first piece of information has to have arisen by chance":
On the one hand, the problem is easy; on the other, it's difficult. We don't know what that symbolic language was at the origins of life. It could have been RNA or any other set of molecules. But it has to have been an alphabet. The easy part is asking simply what the likelihood of life is, given absolutely no knowledge of the distribution of the letters of the alphabet. In other words, each letter of the alphabet is at your disposal with equal frequency.
So an alphabet arose by chance? Adami places some confidence in the idea that upheavals around volcanic vents began that alphabet. One can't help but wonder why volcanoes work so differently now.
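To see what "given absolutely no knowledge of the distribution of the letters" implies, here is a minimal back-of-envelope sketch in Python (my illustration, not a calculation from Adami's work); the alphabet size and word length are arbitrary assumptions, chosen only to show how quickly the odds shrink:

from math import log10

alphabet_size = 4    # e.g., four RNA bases; an assumption for illustration
word_length = 100    # length of one hypothetical minimal replicator; also an assumption

# Probability of drawing one specific word when every letter is equally likely.
p = (1 / alphabet_size) ** word_length
print(f"P(one specific {word_length}-letter word) = 10^{log10(p):.0f}")   # 10^-60

Relaxing "one specific word" to a whole family of acceptable words improves the odds, but only by the (unknown) size of that family, which is exactly the point in dispute.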
Taking a slightly different tack, theoretical biologist Kalevi Kull, author of Towards a Semiotic Biology: Life Is the Action of Signs, asks whether life is a form of signaling. Perhaps so, but signaling places us in the world of purpose, not random events.
Communication begins far below the level of the whole life form. One can hardly talk about the genome now, it seems, without an understanding of its complex grammar, "more complex than that of even the most intricately constructed spoken languages in the world" according to Karolinska researchers:
Their analysis reveals that the grammar of the genetic code is much more complex than that of even the most complex human languages. Instead of simply joining two words together by deleting a space, the individual words that are joined together in compound DNA words are altered, leading to a large number of completely new words.
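The "compound word" idea can be pictured with a toy example (mine, not the Karolinska team's analysis or data); the motifs and the reconciliation rule below are invented purely for illustration:

def compound_word(a: str, b: str, overlap: int) -> str:
    # The compound keeps a's leading bases plus all of b; a's last `overlap`
    # bases are displaced at the junction, so the result is a new "word"
    # rather than the two originals glued end to end unchanged.
    return a[:len(a) - overlap] + b

motif_a = "GGGATTTCC"     # hypothetical DNA "word" A, illustration only
motif_b = "ATTCCCGGAA"    # hypothetical DNA "word" B, illustration only

print(motif_a + motif_b)                             # naive join: GGGATTTCCATTCCCGGAA
print(compound_word(motif_a, motif_b, overlap=4))    # compound:   GGGATATTCCCGGAA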
On Adami's view, all this purpose, adaptation, information, signaling, and language originates in the random creation of an alphabet in the ferment around a volcano. Such a position is forced by the claims of naturalism, but is in no way compelled by evidence.
And it all happens whether there is consciousness or not. We experience consciousness, so we assume that other humans do. When we say, informally, that an animal is conscious, we mean that its behavior suggests that it is aware of its own needs, sensations, and environment, of self vs. not-self, of relationships with "not-selves," and such.
So can we determine accurately whether life forms are conscious? Brainless jellyfish, for example, are now thought to act with purpose when fishing.
Are the jellyfish conscious of that purpose? That would amount to having a mind without a brain. But the actual relationship between mind and brain is not -- as we shall see -- as straightforward as was once supposed. For one thing, there does not seem to be a "tree of intelligence," in the sense of a completely consistent correlation between the size or type of brain and observed intelligence. We might do best to stick with observation for now and defer classification till later.
Life forms communicate with each other to a degree that often surprises researchers. Prey animals, for example, warn predators of the danger of eating them or advise other prey that a hiding place is taken. But evidence suggests that plants can communicate too. The Scientist tells us:
Researchers are unearthing evidence that, far from being unresponsive and uncommunicative organisms, plants engage in regular conversation. In addition to warning neighbors of herbivore attacks, they alert each other to threatening pathogens and impending droughts, and even recognize kin, continually adapting to the information they receive from plants growing around them. Moreover, plants can "talk" in several different ways: via airborne chemicals, soluble compounds exchanged by roots and networks of threadlike fungi, and perhaps even ultrasonic sounds. Plants, it seems, have a social life that scientists are just beginning to understand.
So while communication, like purpose, is everywhere, the degree to which a life form is conscious of itself in communication with another life form is still elusive.
Naturalists who do not define the problem out of existence by insisting that even rocks have minds have generally adopted another approach. They try to find animal behaviors that are such close equivalents to human behavior that all such behaviors can be lumped together as, in Francis Crick's phrase, "nothing but a pack of neurons." Nothing remains but to provide a naturalist account, like Adami's, as to how they originally came to be a pack.
What the naturalists are doing is called anthropomorphism -- ascribing human qualities to life forms that may experience life very differently. It was once the province of folk tales. Not today. This year, we were told in the science press that bacteria have morals:
Far from being selfish organisms whose sole purpose is to maximize their own reproduction, bacteria in large communities work for the greater good by resolving a social conflict among individuals to enhance the survival of their entire community.
This finding supports group selection rather than the selfish gene. But it raises questions: Do bacteria have a mind in the absence of a brain? Or is group selection a purposeful force that can operate (as if by magic) in the absence of a mind or brain? Or is the concept of "morals" in fact specific to the human mind, in which case bacteria are not best described as acting that way even by analogy, lest the description become misleading?
In a similar vein, philosopher Stephen Cave argues that animals have free will, proposing to measure it by an FQ, a freedom quotient analogous to IQ:
Experimenters measure this ability by testing how long an animal can resist a small treat in return for a larger reward after a delay. Chickens, for example, can do this for six seconds. They can choose whether to wait for the juicier titbit or not -- but only if that titbit comes very soon. A chimpanzee, on the other hand, can wait for a cool two minutes -- or even up to eight minutes in some experiments. I am guessing that you could manage a lot longer.
Without knowing who the "you" is, I'm not betting anything. Not so sure I'd bet on an unknown chimp either, given the spread cited above.
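The figures Cave cites are the sort of thing a simple discounting model reproduces. Here is a minimal sketch (my own, not Cave's proposed FQ; the reward sizes and the two k values are picked only to mimic the quoted six-second and two-minute marks) of hyperbolic discounting, a standard way such wait-or-grab data are modeled:

def discounted_value(reward: float, delay_s: float, k: float) -> float:
    # Hyperbolic discounting: subjective value falls as 1 / (1 + k * delay).
    return reward / (1.0 + k * delay_s)

small_now, large_later = 1.0, 3.0    # arbitrary reward sizes, assumed for illustration
for label, k in [("steep discounter", 0.4), ("shallow discounter", 0.01)]:
    # The animal keeps waiting while the later reward, discounted for the
    # delay, is still worth more than grabbing the small treat now.
    delay = 0.0
    while discounted_value(large_later, delay, k) > small_now:
        delay += 1.0
    print(f"{label}: gives up after about {delay:.0f} seconds")   # ~5 s and ~200 s

What the experiment then measures, on this model, is how steeply a given animal discounts the future.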
The research sounds fascinating, but Cave then tells us that
... all around us, every day, we see a very natural kind of freedom -- one that is completely compatible with determinism. It is the kind that living things need to pursue their goals in a world that continually presents them with multiple possibilities.
Obviously, if "free will" is completely compatible with determinism (and Darwinism, he tells us, accounts for that), then free will as we traditionally understand it doesn't exist and can't be measured in any life form. So why claim that it does, and that it can?
Cave's thesis about free will differs significantly from the observation above that jellyfish pursue fish with intent: The intentional behavior of jellyfish is observed; we simply don't know if they know their own intentions. Cave, by contrast, wants to account for human as well as animal intentions as entirely determined while appearing free -- in order to support a fully naturalist perspective.
These two approaches to gathering information are likely to come into increasing conflict. Is the purpose of gathering the information to find out what is going on, or to provide support for naturalism? A naturalist will probably answer "Both!" A materialist assumes that any finding can be interpreted from his perspective, however bad the fit, and will keep tinkering until he finds a somewhat better fit. The rest of us are prepared to look around and see whether there may be a better explanation.
Similarly, consider the recent research on why hive workers sometimes kill their queens:
"Workers are assessing the situation in their colony and deciding to revolt against the queen only when the genetic makeup of the colony makes it favorable to do so," Loope said. "The main advantage is to allow your sister workers to lay male eggs, rather than the queen, who typically stops worker reproduction by egg eating, attacking reproducing workers, and by laying many of her own eggs. By eliminating the queen, a matricidal worker allows other workers and herself to lay male eggs."
...
"Hence the matricide," Loope said. "Workers are not mindless automatons working for the queen no matter what. They only altruistically give up reproduction when the context is right, but revolt when it benefits them to do so."
These short quotations from a longer account convey only in part the detailed reasoning hypothesized for the insects. That raises an obvious question: Who or what exactly is doing the reasoning? Some would say natural selection. But natural selection -- the fact that some life forms survive and pass on their traits, while others don't -- must be one of two things. Either it is itself a mind suited to detailed calculations of self-interest, or it has somehow produced, in the insects, minds capable of such a feat without, so far as we can see, the brains to match.
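For readers who want the arithmetic usually meant by "the genetic makeup of the colony," here is a rough sketch of the standard haplodiploid relatedness values (my gloss, not code or figures from Loope's study):

def prefers_worker_laid_males(queen_singly_mated: bool) -> bool:
    # From a worker's point of view, under haplodiploidy:
    r_to_queens_sons = 0.25                     # her brothers
    r_to_other_workers_sons = 0.375 if queen_singly_mated else 0.125
    # Replacing the queen's sons with worker-laid sons "pays" genetically
    # only when the worker-laid males are the closer kin.
    return r_to_other_workers_sons > r_to_queens_sons

print(prefers_worker_laid_males(queen_singly_mated=True))    # True: matricide can pay
print(prefers_worker_laid_males(queen_singly_mated=False))   # False: tolerate the queen

The sketch only shows what the calculation is; the question of who or what is doing it remains.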
Darwin believed that natural selection was the acting agent:
... natural selection is daily and hourly scrutinizing, throughout the world, every variation, even the slightest; rejecting that which is bad, preserving and adding up all that is good; silently and insensibly working, wherever and whenever opportunity offers, at the improvement of each organic being in relation to its organic and inorganic conditions of life.
But Darwin wrote in an age when the behavior of life forms was not known to be this complex. He most likely had much simpler scenarios in view, squabbles over a kill perhaps.
Fortunately, we may not need to make sense of the current state of naturalism to gain at least some insights into animal mind. We humans have a sense of "self" that goes well beyond a drive to continue to exist. But to what extent do other life forms have this sense? Recent decades of research on apes and monkeys can give us some sense of the territory we are entering.
Next time, we'll discuss some of the hope, hype, and hard data about ape and monkey minds.