Breaking Sticks
Winston Ewert December 5, 2015 5:30 AM
This is the fourth and final post in a series in which I have examined criticisms from Joe Felsenstein (University of Washington geneticist) and Tom English (Oklahoma City computer scientist) in response to two arguments for intelligent design: specified complexity and conservation of information. Look here for Parts 1, 2, and 3.
A large portion of Tom English's post, "The Law of Conservation of Information is defunct," is devoted to an example involving broken sticks. He argues that we can obtain active information without any bias in the underlying process. Since active information is a measure of the bias in a system, this would mean, if correct, that our use of active information is fundamentally broken.
English's process involves sticks that are broken into six pieces at random. The sticks are six meters long (English does not give units, so I have picked meters), so the average length of a piece is one meter. However, the process is such that some pieces end up much longer than others.
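For readers who want to see the kind of process involved, here is a minimal sketch of one standard broken-stick construction: five break points chosen uniformly at random along the stick. English may have generated his segments differently; this is only an illustration of the sort of process being discussed.

import random

def broken_stick(length=6.0, pieces=6):
    # Choose pieces-1 break points uniformly along the stick and return
    # the lengths of the resulting segments.
    cuts = sorted(random.uniform(0, length) for _ in range(pieces - 1))
    points = [0.0] + cuts + [length]
    return [b - a for a, b in zip(points, points[1:])]

segments = broken_stick()
print(segments)       # six lengths summing to 6 meters; they average 1 meter,
                      # but in any one stick some pieces are much longer than others
print(sum(segments))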
English claims to compute the active information of these outcomes:
Suppose that the length of the 3rd segment of the broken stick is 2. Then the evolutionary process is biased in favor of outcome 3 by a factor of 2. ... These biases are what Dembski et al. refer to as active information.
This calculation is absolutely not active information. Active information is defined as the logarithm of a ratio of probabilities. English can be forgiven for skipping the logarithm, but using the observed length of a stick in place of a probability is nonsensical. If English wishes to criticize active information, he should actually follow the definition of active information.
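To make the definition concrete, in rough terms: if p is the probability that a blind, baseline search finds the target and q is the probability that the search under consideration finds it, then the active information is

I+ = log2(q / p)

Both p and q are probabilities of hitting the target. Neither is a segment length, which is why substituting a stick length for one of them does not yield active information.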
Let's consider another scenario, which may be what English is getting at. I have two coins. One is weighted so that 99 percent of the time it lands on heads. The other is weighted so that 99 percent of the time it lands on tails. If I pick a coin at random and flip it once, the result is actually fair despite the use of biased coins. Both heads and tails are equally likely. However, if I flip the coin 100 times, the sequence will be either mostly heads or mostly tails.
Suppose I observe the process of flipping the chosen coin 100 times, and see that 99 of the flips are heads. Then, following something like English's logic, I could argue that the probability of heads is 99 percent, and thus calculate positive active information. But this would be incorrect. Estimating probabilities from observed frequencies in this way is only valid for independent events. Here the flips are all dependent, because they share whichever biased coin was chosen.
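A minimal simulation makes the dependence visible. This is only a sketch of the scenario described above; the coin biases and run length are as I stated them, and the code itself is purely illustrative.

import random

def run(flips=100):
    # Pick one of the two oppositely biased coins at random, then flip it repeatedly.
    p_heads = random.choice([0.99, 0.01])
    heads = sum(random.random() < p_heads for _ in range(flips))
    return heads / flips

# A single flip is fair on average: this estimate comes out close to 0.5.
print(sum(run(flips=1) for _ in range(10000)) / 10000)

# But within any one run of 100 flips, the observed frequency sits near
# 0.99 or 0.01, because the flips all depend on which coin was chosen.
print([round(run(), 2) for _ in range(10)])

Reading the within-run frequency off as "the" probability of heads misrepresents a process whose single-flip probability is exactly one half.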
Let's compare the confused attempt to establish these probabilities with the logic I argued for in the case of birds:
Clearly, some configurations of matter are birds. However, almost all configurations of matter are not birds. If one were to pick randomly from all possible configurations of matter, the probability of obtaining a bird would be infinitesimally small. It is almost impossible to obtain a bird by random sampling uniformly from all configurations of matter.
Note that I am not saying that the frequency of birds observed in biology somehow serves as an estimate of the probability of birds. The argument is in fact a reductio ad absurdum that implicitly invokes specified complexity. If birds were no more probable than any other configuration of matter, the total probability of birds would be minuscule. In the terminology of specified complexity, they would be highly complex. This in itself would not be a problem, but since birds constitute a specification, their presence gives us good reason to reject the idea that birds are no more likely than any other configuration of matter. For birds to have arisen in the universe, they must be probable enough that they no longer constitute a large amount of specified complexity. That is why I can argue that birds must be more probable than a uniform random sampling of configurations would make them.
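To put the reductio in schematic form, using one formulation of specified complexity (algorithmic specified complexity; the notation here is a sketch, not a calculation):

SC(x) = -log2 P(x) - K(x | context)

where P(x) is the probability of the outcome under the hypothesis being tested and K(x | context) is the length of its shortest description. Under the hypothesis that birds are no more probable than any other configuration of matter, -log2 P(bird) is astronomically large while birds remain concisely describable, so SC(bird) would be enormous. Observing birds therefore tells us to reject that hypothesis rather than to accept that much specified complexity.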
English's attempt to dismantle conservation of information fails. It is based on a confused interpretation of how to compute active information. He does not compute a probability, nor does he provide any justification for treating the length of a stick as one. Even in a related scenario that does involve probabilities, anything like his technique is invalid for estimating them. The method we actually use to argue that the probability is higher is entirely different from the caricature English presents.
Conclusion
English and Felsenstein have been engaged in knocking down straw men. Felsenstein attacks a version of specified complexity that Dembski never articulated. He misrepresents the actual idea promoted by Dembski as being pointlessly circular. Both critics misrepresent conservation of information as a simplistic argument that only intelligence can produce active information. They misrepresent us as claiming that Darwinian evolution is only as good as a random guess, despite the explicit published demonstration that repeated queries are a source of active information. English misrepresents our reasons for thinking that birds are more probable than a random configuration of matter. Their arguments are valid objections to these straw men, but our actual arguments lie elsewhere.
What, then, would be necessary to demonstrate that we are wrong? As I've argued, conservation of information shows that evolution requires a source of active information. We have not proven that such a source must be teleological. Nevertheless, we've argued that the sources present in available models of evolution are indeed teleological. Our argument would be refuted by the demonstration of a model with a source that is both non-teleological and provides sufficient active information to account for biological complexity.
Felsenstein hints at trying to do this when he talks about the weakness of long-range physics interactions. He thinks that by invoking these interactions he can obtain "quite possibly all" of the necessary active information to account for complexity in biology.
Some, quite possibly all, of Dembski and Marks's "active information" is present as soon as we have genotypes that have different fitnesses, and genotypes whose phenotypes are determined using the ordinary laws of physics.
However, when discussing the amount of active information that he can obtain from his assumption, he can only go as far as:
[T]he ordinary laws of physics, with their weakness of long-range interactions, lead to fitness surfaces much smoother than white-noise fitness surfaces.
Arguing that the weakness of long-range interactions produces sufficient active information to explain complexity in biology because it outperforms random search is like arguing that I can outrun a jet because I move much faster than a snail. However, if Felsenstein could demonstrate that the weakness of long-range physics interactions, or something equivalently non-teleological, could account for the active information in biology, it would dismantle the argument we have made. Furthermore, it would be a massive contribution to the fields of computational intelligence and evolutionary biology.
I cannot prove that Felsenstein cannot do this. I can point to past attempts, which all incorporated teleological decisions. They all show the effect of having been designed. My prediction is that you cannot build a working model of Darwinian evolution without invoking teleology. Felsenstein, English, or anyone else is invited to attempt to falsify my prediction.