
Saturday, 9 July 2022

I.D. and false positives.

 Applying the Design Filter to Hexagons

David Coppedge

Hexagons (at least macroscopic ones) are relatively rare in nature. The most common place we see them is in beehives. It could be argued that if bees are intelligently designed, for which there is ample independent evidence, then the structures they create are also intelligently designed. We might argue that hexagonal cells enclose the most space with the least amount of material. We might point out that the design also provides more robust protection against stress than square-shaped cells. We can see that the structural design performs a function.
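
Here is a minimal sketch in Python of that efficiency claim, assuming we restrict attention to the regular polygons that tile the plane (triangle, square, hexagon). It compares the perimeter needed to enclose one unit of area; the hexagon needs the least, which is the content of the honeycomb conjecture proved by Thomas Hales in 1999.

```python
import math

def perimeter_for_unit_area(n):
    """Perimeter of a regular n-gon whose area is 1.

    A regular n-gon with side s has area n * s**2 / (4 * tan(pi/n));
    solve for s at area 1 and return the perimeter n * s.
    """
    s = math.sqrt(4 * math.tan(math.pi / n) / n)
    return n * s

for name, n in [("triangle", 3), ("square", 4), ("hexagon", 6)]:
    print(f"{name:8s}: {perimeter_for_unit_area(n):.3f} units of wall per unit of area")

# Output: triangle ~4.559, square 4.000, hexagon ~3.722 -- the hexagon
# encloses the same area with roughly 7% less material than the square.
```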


Our propensity to infer design, though, has to face up to other examples of hexagons in the non-living world. Some have been difficult to explain by natural law.


Columnar Basalt

When lava cools, it often forms polygonal-shaped columns, and hexagons are the most common shape. Many physicists have tried to understand how this occurs. There have been partial solutions, but nothing fully satisfying. A paper in Physical Review Letters reproduces the hexagonal columns with a mathematical model. The basic idea is summarized in a news release at APS Physics, along with a stunning photo of a pyramid of hexagonal basalt columns at the Giant’s Causeway in Ireland. It sure looks designed. How do we make a proper inference?


The surface of cooling lava contracts more quickly than the still-warm liquid underneath, creating a stress that is relieved by the formation of cracks. Martin Hofmann from the Dresden University of Technology, Germany, and colleagues considered a uniform lava layer and calculated the energy released from different crack patterns. They found that, in the initial stages of cooling, when the cracks start to appear at random places on the surface, the energy release is greatest if the cracks intersect at 90-degree angles. But as the lava continues to cool and shrink, and the cracks collectively start to penetrate into the bulk, more energy is released per crack if they intersect at 120-degree angles. This transition from individual to collective growth of the cracks drives the pattern from rectangular to hexagonal. The hexagonal pattern is then maintained as the lava cools further, eventually leading to an array of hexagonal columns, similar to those seen in nature. [Emphasis added.]


One can find columnar basalt in many locations: in the Grand Canyon, in Yellowstone Canyon, in Utah’s Zion National Park, in the Rocky Mountains, at Devil’s Postpile in the Sierra Nevada, and of course at the Giant’s Causeway, along with other places around the world. The uniformity of the columns can be impressive, but they are rarely perfect. Many times other polygons are mixed in with the hexagons. 


Saturn’s North Pole Hexagon

A giant hexagon made up of clouds has persisted for decades on Saturn’s north pole. This formation has baffled scientists since it was first discovered by the Voyager spacecraft in 1981. It appears to be unique in the Solar System, and it’s huge: 20,000 miles across and 60 miles deep. Saturn’s south pole also has a giant vortex, but not this polygonal shape. Space.com describes attempts to explain the feature:


Scientists have bandied about a number of explanations for the hexagon’s origin. For instance, water swirling inside a bucket can generate whirlpools possessing holes with geometric shapes. However, there is of course no giant bucket on Saturn holding this gargantuan hexagon.


Voyager and Cassini did identify many features of this strange hexagon that could help explain how it formed. For example, the points of the hexagon rotate around its center at almost exactly the same rate Saturn rotates on its axis. Moreover, a jet stream air current, much like the ones seen on Earth, flows eastward at up to about 220 mph (360 km/h) on Saturn, on a path that appears to follow the hexagon’s outline.


We know that standing waves can maintain nodes that are stationary with respect to their reference frame. Something like that appears to be at work in Saturn’s polar winds. The article says that the “bizarre giant hexagon on Saturn may finally be explained.” A model by a planetary scientist from New Mexico reproduces many of the observed properties of the hexagon.


The scientists ran computer simulations of an eastward jet flowing in a curving path near Saturn’s north pole. Small perturbations in the jet — the kind one might expect from jostling with other air currents — made it meander into a hexagonal shape. Moreover, this simulated hexagon spun around its center at speeds close to that of the real one.


The scenario that best fits Saturn’s hexagon involves shallow jets at the cloud level, study team members said. Winds below the cloud level apparently help keep the shape of the hexagon sharp and control the rate at which the hexagon drifts.


This hexagon may not be permanent, since it is subject to perturbations by processes that have no particular reason to maintain it. A simpler case is Jupiter’s Great Red Spot, which appears to be shrinking some three hundred years after it was first observed.


Tiny Non-Living Hexagons

Snowflakes are classic examples of orderly structures with a hexagonal shape. Other non-living hexagons include the ring structures of many organic molecules (at least the way they are diagrammed by chemists). Some minerals also display hexagonal packing. Most of us have seen soap bubbles form hexagonal interfaces when they are packed together. An occasional hexagon can be found in mud cracks on a dry creek bed.


Life-Produced Hexagons

Bees are not the only hexagon-makers in the living world. We find hexagons on tortoise shells and in the ommatidia of insects’ compound eyes. Some diatom species form free-standing hexagons in addition to the more common circles, triangles, squares, and pentagons. We humans, of course, are great hexagon-makers. Understanding their ideal packing geometry, we make them in telescope mirrors, geodesic domes, and soccer ball covers. Sometimes we create them just for their artistic value.


Proper Inferences

If humans create hexagons by intelligent design, is that true for other living things that make them? And how should we distinguish the design inference in life from the natural hexagons on Saturn or in columnar basalt? These questions provide an opportunity to understand William Dembski’s Design Filter.


It’s not enough that something be orderly. Casey Luskin has discussed columnar basalt, answering ID critics’ accusations that the Design Filter would generate a false positive. We’ve also explained why snowflakes do not pass the design filter, despite their elegance and beauty. It’s not enough, further, that something be rare or unique, like the Saturn hexagon. The Design Filter prefers a natural-law explanation if one can be found, or if the probability of the phenomenon’s occurrence by chance is sufficiently high.


But do we wait forever for a natural explanation? Planetary scientists struggled for 35 years to explain Saturn’s hexagon. Shouldn’t we likewise wait for a natural explanation of beehives and compound eyes before invoking intelligent design? Isn’t natural selection a natural law? (Actually, it’s more like magic than a law of nature, but we’ll entertain the question for the sake of argument.)


Intelligent design is not a gaps argument. It’s a positive argument based on uniform experience. We have experience watching cooling lava and drying mud form geometric patterns. We have no comparable experience of hexagons forming on gas giants like Saturn, though. What do we do?


The Information Enigma

The short answer involves information. The hexagon on Saturn performs no function. Columnar basalt doesn’t say anything. Snowflakes don’t carry a message. They are mere emergent phenomena that are not that improbable, given laws of nature with which we are familiar. The Design Filter works properly by rejecting a design inference for these on the basis of probability and natural law. 


All the living examples of hexagons, by contrast, are produced by codes. Beeswax will not form into hexagon cells on its own, nor will silica arrange itself into the geometric shells of diatoms. A digital code made of DNA dictates the placement of ommatidia in the insect eye and patterns in the turtle shell. Each of these structures performs a function and is the outcome of processes directed by a code. 


The coded information makes use of natural laws, to be sure, but it arranges the parts into hexagons for a functional purpose. In our uniform experience, we know of one cause that can generate codes or instructions that lead to functional geometries — intelligence.


There is one sense, though, in which we could make a design inference for the nonliving hexagons like snowflakes, basalt columns, and planetary atmospheres. Certain features of the universe are so finely tuned that without them, water, atoms, stars, and planets would not exist. It takes a higher-order design to have a universe at all. 


You might even say that the elegant mathematics that allows us to describe hexagons is conceptual, not material, as are the aesthetic values that allow us to appreciate them. So even if the Design Filter rejects a design inference for some of the hexagons at one level, the mere existence of atoms, natural laws, and beauty warrants a design inference in a broader context for all of them. Without minds, we wouldn’t even be debating these questions.


This article was originally published in 2015.


Lamarck: Darwinism can't live with him, can't live without him.

 Darwin and the Ghost of Lamarck

Neil Thomas

Despite his amply documented religious ambivalences, there are clear signs that Charles Darwin was never finally able to “close his account” with God. As late as 1870 he wrote to Joseph Hooker that he felt his theology was “in a muddle.” He found it difficult to conceive of the universe as having arisen by blind chance yet could perceive no evidence of consistently beneficent design. Three years later, in a letter to a Dutch correspondent, he wrote of the design/God issue as being “beyond the scope of man’s intellect” and just four years before his death he proclaimed the problem “insoluble.”1


Religion and Biology in Conflict

Darwin’s spiritual life and biological work were so interdependent that — as he saw matters — if his theory of natural selection were once proved incontrovertible, this would entirely rule out the theory of any tutelary deity having overseen the development of life on earth.2 All would be the sole result of chance mutations and natural selection. He steadfastly refused the harmonizing, bridge-building entreaties offered him by Charles Lyell, Charles Kingsley, and others of his circle to the effect that natural selection could simply be understood as the operational modality which God had chosen to create His creatures.


His work and beliefs being so indissolubly linked, it is inevitable that the residue of religious faith that Darwin retained from his early years caused him serious pause when he began to assess the epistemological status of his biological work. For if life had really arisen by providential guidance, what price his strictly secular version of life’s evolution?  This was a circle it was clearly not easy to square and his fretting over the uncompromising binary could sometimes make him appear as a latter-day avatar of the ancient Greek philosopher Pyrrho (who doubted whether mankind had adequate grounds for claiming any knowledge with absolute certainty). Darwin certainly evidenced an ultra-pyrrhonist streak when he questioned whether his own reasoning, which in his opinion had descended from lowly and unreliable baboon ancestry, could be a dependable guide to truth at all. 


Given his inability to resolve these fundamental conflicts, it was perhaps inevitable that Darwin in his later decades even began to harbor doubts about the efficacy of his pièce de résistance, natural selection, with its (claimed) capacity to create the whole spectrum of the world’s life forms autonomously. Could so positive and creative a development, he asked himself, really have been set in train by so negative a phenomenon as natural selection, an entity which Darwin, at the behest of many well-intentioned friends, eventually consented to reconceptualize in the more modest and realistic terms of “natural preservation”? The trouble with the latter term, of course, was that notions of naturalistic evolution now seemed less logically defensible, since mere preservation, by definition, cannot at the same time be creative.


Ascending Mount Improbable

Hence, following first publication of the Origin in November 1859, Darwin began casting around for supplementary theories to that of natural selection, even reverting to once firmly rejected evolutionary ideas.3 For now he was even prepared to reconsider the Lamarckian/Erasmian idea of the relative use/disuse of organs as a co-determinant of biological development. This is exemplified when in his Descent of Man (1871) he found himself caught up in the challenging position of trying to explain how an ape might have “transitioned” into a human being. For the intuitively obvious morphological link between ape and (wo)man becomes on closer inspection considerably less straightforward than it might superficially appear — something shown up very clearly in the different language competences of apes and humans.


How was Darwin’s particular “Mount Improbable” to be ascended and the decidedly “uphill” transition from ape to human explained? To establish a convincing evolution of ape to human it would first be necessary to establish that simians could over time have increased their communicative vocabularies so as to transform relatively inarticulate emotional cries into specific vocal symbols. But this in turn brings up the related problem of how to explain the rapid mental processing on which articulate speech depends. Without the simultaneous co-adaptation of the simian brain how could the facility of speech, which depends on the interdependent agency of the brain in tandem with the specialized organs of vocal articulation, have developed by the largely unguided processes of natural selection?  In other words, how could chance allied to natural selection have acquired the uncanny capacity to synchronize operations? There are clear signs that Darwin at length found this problem to be so intractable that he was forced back on what he had once denounced as the Lamarckian heresy in order to put together a tolerably coherent explanation.


In order to definitively prove the ape/(wo)man connection it would be necessary to point simultaneously to a precise morphological and neurological pathway of development. By contrast, the explanation Darwin advanced in The Descent of Man was, it must be noted, excessively speculative and ill-focused. It is particularly telling that he felt it necessary to appeal here to Lamarckian ideas in order to put together his rather flimsy conjecture. He writes in Descent,


The mental powers of some earlier progenitor of man must have been more highly developed than in any existing ape, before even the most imperfect form of speech could have come into use; but we may confidently believe that the continued use and advancement of this power would have reacted on the mind itself, by enabling and encouraging it to carry on long trains of thought.4 [Emphasis added.]


Passing quickly over the suspicious overuse of the conditional tense (compare the number of conditional “must haves” and “could haves” in the cited words) and that rather nervous, whistling-in-the-dark phrase “we may confidently believe,” it is his dependence on the supposedly discredited Lamarckian idea of the use/disuse of organs which is most conspicuous here since such a conception is not consistent with his original theory of natural selection. It appears that Darwin at this point was coming perilously close to that “apostasy from his own theory” for which Wallace was arraigned in the mid 1860s. At the very least, we sense that Darwin’s trumpet was giving forth a less certain sound in 1871 when we contrast it with the more confident but less thought-through development of his ideas in the period 1838-1859. His loss of confidence in his prior convictions may in turn have contributed to the fact that some later scientists too found it difficult to lay the ghost of Lamarck to rest, and this despite the fact that 20th-century advances in knowledge of Mendelian genetics appeared to rule out a Lamarckian evolutionary pathway.5


Re-enter Lamarck

The lure of Lamarck was exemplified most strikingly in the case of early 20th-century Viennese biologist Paul Kammerer and the unhappy affair of the “midwife toad.” The highly regarded Kammerer made the following astounding claim, which I cite here in the words of his modern biographer:


Kammerer took a type of toad that is one of the few amphibian species that mates on land and forced them to breed in water. As a result the males developed nuptial pads [= adhesive calluses] which are regularly found on other male toad species. These nuptial pads help the male toad grasp the slippery female when they copulate in water. Kammerer asserted that not only was he successful in inducing the development of nuptial pads but also that they were passed on to the next generation.6


Unfortunately, it was later revealed that the mating pads had been faked by the use of dark ink stains and a short time after this discovery Kammerer was moved to take his own life.


There is no doubting the attractiveness of the Lamarckian theory of acquired characteristics and their supposed heritability. It promotes the comforting notion that parents can pass down not only their wealth and property to their progeny but also the benign results of their own physical efforts at self-improvement.7 It is without doubt a more inspiring philosophy than is Darwinism; but such sentimental considerations were an extraneous issue to Kammerer who simply found himself unable to accept the postulation that “natural selection” possessed the efficacy claimed for it by its originator. Hence, exactly like the later Darwin himself, Kammerer was driven — apparently at whatever cost — to seek a Lamarckian supplement to prop up what he deemed the inadequate Darwinian theory. 


A similar scenario arose in the case of the late polymath Arthur Koestler, who was also moved to flirt with Lamarckian ideas out of dissatisfaction with Darwinism as a defensible evolutionary pathway.8 Koestler felt that Darwinian mechanisms could be at best only part of the picture, claiming that “there must be other principles and forces at work on the vast canvas of evolutionary phenomena.”9 He cited the veteran Ludwig von Bertalanffy on this point, Bertalanffy having been one of the distinguished contributors to Beyond Reductionism, the interdisciplinary conference of internationally renowned scientists and scholars organized by Koestler in Alpbach in 1969:10


If differential reproduction and selective advantage are the only directive factors of evolution, it is hard to see why evolution has ever progressed beyond the rabbit, the herring or even the bacterium which are unsurpassed in their reproductive capacities.11


In light of this much-reported deficiency in the explanatory power of Darwinian theory, Lamarckism was the idea to which Koestler was drawn as at least one possible stratagem for plugging the Darwinian gap.


Theology and Biology

The Darwin who had once described his theological position as muddled became no less bewildered towards the end of his life, a fact illustrated by two passages from his later years. In a famous letter to his botanist friend Joseph Hooker of February 1, 1871, he wrote,


It is often said that all the conditions for the first production of a living organism are now present, which could ever [= always] have been present — But if (and Oh! What a big if?) we could conceive in some warm little pond with all sorts of ammonia and phosphoric salts, — light, heat, electricity etc., present, that a protein compound was chemically formed, ready to undergo still more complex changes, at the present day such matter would be instantly devoured, or absorbed, which would not have been the case before living creatures were formed.12


Yet the same Darwin who to all appearances set so much store by his theory of naturalistic evolution — enthusing unguardedly about an origin of life from spontaneous generation — was nevertheless capable of writing in his 1876 Autobiography of 


the extreme difficulty or rather impossibility of conceiving this immense and wonderful universe, including man with his capacity of looking far backwards and far into futurity, as the result of blind chance or necessity. When thus reflecting, I feel compelled to look to a First Cause having an intelligent mind in some degree analogous to that of man; and I deserve to be called a Theist. 


It is clearly not possible to postulate at one and the same time that sentient beings are the result of divine creation and of a random chemical reaction, and this contradiction underscores Darwin’s abiding ambivalence on the subject of creation and evolution. These theological ambivalences also had a correlative in his work as a naturalist where his position remained essentially interrogative as he shifted in his mind between one theoretical possibility and another.


“To Be or Not to Be…”

It should be stressed that Darwin’s hesitancies and diffidences as a biologist were real and not the product of false modesty, as is evidenced by the no fewer than five amended editions of the Origin which appeared in quick succession in the decade after the first edition of 1859. In the later, revised editions, he did his honest best to integrate criticisms made by others (to which he always remained acutely sensitive). There was in him little of the blinkered zealot we tend to associate with some modern proponents of his theory. Darwin had always conceded that he was advancing his present theory until such time as a better one might present itself. It seems that what at first blush might be mistaken for mere gentlemanly humility was in fact meant in earnest.


It is therefore likely that Darwin would have positively welcomed many modern findings as a means of complementing and enriching his own work, and the last few decades have in fact provided an intriguing addendum to the whole Darwin/Lamarck saga. The idea of heritability “beyond genes” is now regularly studied under the umbrella rubric of epigenetics;13 and although some results of this recent research have proved resistant to definitive interpretation, modern scientific advances have at the very least amply confirmed the worries of Darwin and the suspicions of Kammerer and Koestler that Darwinian explanations could not possibly represent the whole story. (See, for example, the conspectus of diverging modern views covered at length recently by Stephen Buranyi in the UK newspaper The Guardian, asking “Do We Need a New Theory of Evolution?”)


This is surely a factor with an important bearing on the interpretation of the Origin. Many of Darwin’s peers did not conceive of his work as a univocal tract but as a more nuanced discussion document, to which some, such as Kingsley, Lyell, and even Huxley (who was never able to assent to the proposition of natural selection), had no hesitation in appending their own “minority reports.” In my view it is in such a “dialogic” way that the Origin might most appropriately be read in our own day too — as a commendable effort to penetrate impenetrable mysteries, but one whose author might best be listened to, in Coleridge’s phrase, “with no presumption of inerrancy.”


Notes

1. See Nick Spencer, Darwin and God (London: SPCK), pp. 96-99.

2. Darwin wrote defiantly to Charles Lyell on this subject, “I would give absolutely nothing for theory of nat. selection, if it require miraculous additions at any one stage of descent.” (Charles Darwin to Charles Lyell, October 11, 1859, Darwin Correspondence Project, Letter no. 2503, University of Cambridge, https://www.darwinproject.ac.uk/letter/DCP-LETT-2503.xml.)

3. In later life he appeared touchingly open to incorporating responses to a variety of criticisms leveled at him by other scientists, with the result that, over a decade, the Origin went into no fewer than five revised editions, its sixth, heavily emended version being markedly different in many respects from the 1859 original.

4. The Descent of Man and Selection in Relation to Sex, edited by James Moore and Adrian Desmond (London: Penguin, 2004), p. 110.

5. A giraffe, for instance, cannot elongate its neck (and hence the necks of its progeny) by repeatedly craning towards the higher branches of trees.

6. Klaus Taschwer, The Case of Paul Kammerer: The Most Controversial Biologist of His Time (Montreal: Bunim and Bannigan, 2019), p. 9.

7. See on this point Arthur Koestler, The Case of the Midwife Toad (London: Hutchinson, 1971), pp. 27-30.

8. Koestler cited with approval the view of the mid-20th-century scientist C. H. Waddington that chance mutation was like throwing bricks together in heaps in the hope that they would arrange themselves into an inhabitable house.

9. Koestler, Midwife Toad, p. 129.

10. Beyond Reductionism: The Alpbach Symposium, edited by Koestler and J. R. Smythies (London: Hutchinson, 1969). Cf. in that volume Ludwig von Bertalanffy’s “Chance or Law,” pp. 56-84, and, for a balanced assessment of Koestler’s intellectual achievements and weaknesses, Michael Scammell, Koestler: The Indispensable Intellectual (London: Faber and Faber, 2011).

11. Koestler, Midwife Toad, p. 129.

12. Darwin Correspondence Project, Letter no. 7471, https://www.darwinproject.ac.uk/letter/DCP-LETT-7471.xml.

13. See John and Mary Gribbin, On the Origin of Evolution: Tracing Darwin’s Dangerous Idea from Aristotle to DNA (London: Collins, 2020), pp. 230-252 (chapter titled “The New Lamarckism”); Paul Davies’s recent chapter entitled “Darwinism 2.0” in his The Demon in the Machine: How Hidden Webs of Information Are Solving the Mystery of Life (London: Penguin, 2020), pp. 109-143; and Nessa Carey’s The Epigenetics Revolution: How Modern Biology is Rewriting our Understanding of Genetics, Disease and Inheritance (London: Icon, 2011).

Yet more on why biology resembles technology.

 Irreducible Complexity in Ant Behavior Triggers a Recognition of Intelligent Design

David Klinghoffer

Here’s a very interesting episode of ID the Future, with host Eric Anderson and Animal Algorithms author Eric Cassell. The topic of conversation is the algorithm-driven foraging behavior of harvester ants. The design argument here is not from the algorithm alone but from the irreducibly complex combination of that with physical signs (cuticular hydrocarbons) exchanged by the ants and the sensors they use to interpret the signs. These must all work in concert, or the ants are out of luck. How did an unguided evolutionary process give each capacity to these social animals, each highly complex in itself but of no use unless present together? The evident foresight that guided the devising of this behavior is what triggers the recognition of intelligent design.


Cassell discusses recent research published in the Journal of the Royal Society Interface, “A feedback control principle common to several biological and engineered systems,” which draws an unapologetic parallel between human engineering and biological systems. As the authors write, “We hypothesize that theoretical frameworks from distributed computing may offer new ways to analyse adaptation behaviour of biology systems, and in return, biological strategies may inspire new algorithms for discrete-event feedback control in engineering.” In other words, biology and engineering can be mutually informative. Cassell has written about this research, too, at Evolution News, and about similarly remarkable behaviors in Animal Algorithms. Download the podcast or listen to it here.
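
For readers wondering what “discrete-event feedback control” looks like in this setting, here is a toy sketch in Python. It is not the model from the Royal Society Interface paper; it merely illustrates the widely reported observation that harvester ant colonies adjust how many foragers leave the nest according to how many return with food, so that foraging effort tracks food availability with no central controller. The base rate and gain parameters are invented for the illustration.

```python
import random

def simulate(food_availability, steps=200, gain=0.9, base_rate=2.0):
    """Settle the outgoing-forager rate for a given level of food availability."""
    outgoing_rate = base_rate
    for _ in range(steps):
        sent = int(outgoing_rate)
        # Each forager that goes out returns with food with probability
        # equal to the current food availability.
        returned_with_food = sum(random.random() < food_availability for _ in range(sent))
        # Feedback: recent successful returns raise the departure rate;
        # scarcity lets it decay back toward the low base rate.
        outgoing_rate = base_rate + gain * returned_with_food
    return outgoing_rate

for availability in (0.1, 0.5, 0.9):
    print(f"food availability {availability:.1f} -> outgoing rate settles near {simulate(availability):.1f}")
```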

Tuesday, 5 July 2022

No free lunch re: information.

 Conservation of Information — The Theorems

William A. Dembski


I am reviewing Jason Rosenhouse’s new book, The Failures of Mathematical Anti-Evolutionism (Cambridge University Press), serially. For the full series so far, go here.


Until about 2007, conservation of information functioned more like a forensic tool for discovering and analyzing surreptitious insertions of information: So and so says they got information for nothing. Let’s see what they actually did. Oh yeah, here’s where they snuck in the information. Around 2007, however, a fundamental shift occurred in my work on conservation of information. Bob Marks and I began to collaborate in earnest, and then two very bright students of his also came on board. Initially we were analyzing some of the artificial life simulations that Jason Rosenhouse mentions in his book, as well as some other simulations (such as Thomas Schneider’s ev). As noted, we found that the information emerging from these systems was always more than adequately accounted for in terms of the information initially inputted. 


Yet around 2007, we started proving theorems that precisely tracked the information in these systems, laying out their information costs, in exact quantitative terms, and showing that the information problem always became quantitatively no better, and often worse, the further one backtracked causally to explain it. Conservation of information therefore doesn’t so much say that information is conserved as that at best it could be conserved and that the amount of information to be accounted for, when causally backtracked, may actually increase. This is in stark contrast to Darwinism, which attempts to explain complexity from simplicity rather than from equal or greater complexity. Essentially, then, conservation of information theorems argue for an information regress. This regress could then be interpreted in one of two ways: (1) the information was always there, front-loaded from the beginning; or (2) the information was put in, exogenously, by an intelligence. 


An Article of Faith

Rosenhouse feels the force of the first option. True, he dismisses conservation of information theorems as in the end “merely asking why the universe is as it is.” (p. 217) But when discussing artificial life, he admits, in line with the conservation of information theorems, that crucial information is not just in the algorithm but also in the environment. (p. 214) Yet if the crucial information for biological evolution (as opposed to artificial life evolution) is built into the environment, where exactly is it and how exactly is it structured? It does no good to say, as Rosenhouse does, that “natural selection serves as a conduit for transmitting environmental information into the genomes of organisms.” (p. 215) That’s simply an article of faith. Templeton Prize winner Holmes Rolston, who is not an ID guy, rejects this view outright. Writing on the genesis of information in his book Genes, Genesis, and God (pp. 352–353), he responded to the view that the information was always there:


The information (in DNA) is interlocked with an information producer-processor (the organism) that can transcribe, incarnate, metabolize, and reproduce it. All such information once upon a time did not exist but came into place; this is the locus of creativity. Nevertheless, on Earth, there is this result during evolutionary history. The result involves significant achievements in cybernetic creativity, essentially incremental gains in information that have been conserved and elaborated over evolutionary history. The know-how, so to speak, to make salt is already in the sodium and chlorine, but the know-how to make hemoglobin molecules and lemurs is not secretly coded in the carbon, hydrogen, and nitrogen…. 


So no, the information was not always there. And no, Darwinian evolution cannot, according to the conservation of information theorems, create information from scratch. The way out of this predicament for Darwinists (and I’ve seen this move repeatedly from them) is to say that conservation of information may characterize computer simulations of evolution, but that real-life evolution has some features not captured by the simulations. But if so, how can real-life evolution be subject to scientific theory if it resists all attempts to model it as a search? Conservation of information theorems are perfectly general, covering all search. 


Push Comes to Shove

Yet ironically, Rosenhouse is in no position to take this way out because, as noted in my last post in this series, he sees these computer programs as “not so much simulations of evolution [but as] instances of it.” (p. 209) Nonetheless, when push comes to shove, Rosenhouse has no choice, even at the cost of inconsistency, but to double down on natural selection as the key to creating biological information. The conservation of information theorems, however, show that natural selection, if it’s going to have any scientific basis, merely siphons from existing sources of information, and thus cannot ultimately explain it. 


As with specified complexity, in proving conservation of information theorems, we have taken a largely pre-theoretic notion and turned it into a full-fledged theoretic notion. In the idiom of Rosenhouse, we have moved the concept from track 1 to track 2. A reasonably extensive technical literature on conservation of information theorems now exists. Here are three seminal peer-reviewed articles addressing these theorems on which I’ve collaborated (for more, go here):


William A. Dembski and Robert J. Marks II, “Conservation of Information in Search: Measuring the Cost of Success,” IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 39, no. 5, September 2009, pp. 1051-1061.

William A. Dembski and Robert J. Marks II, “The Search for a Search: Measuring the Information Cost of Higher Level Search,” Journal of Advanced Computational Intelligence and Intelligent Informatics, vol. 14, no. 5, 2010, pp. 475-486.

William A. Dembski, Winston Ewert, and Robert J. Marks II, “A General Theory of Information Cost Incurred by Successful Search,” Biological Information (Singapore: World Scientific, 2013), pp. 26-63.

A Conspiracy of Silence

Rosenhouse cites none of this literature. In this regard, he follows Wikipedia, whose subentry on conservation of information likewise fails to cite any of this literature. The most recent reference in that Wikipedia subentry is to a 2002 essay by Erik Tellgren, in which he claims that my work on conservation of information is “mathematically unsubstantiated.” That was well before any of the above theorems were ever proved. That’s like writing in the 1940s, when DNA’s role in heredity was unclear, that its role in heredity was “biologically unsubstantiated,” and leaving that statement in place even after the structure of DNA (by 1953) and the genetic code (by 1961) had been elucidated. It’s been two decades since Tellgren made this statement, and it remains in Wikipedia as the authoritative smackdown of conservation of information. 


At least it can be said of Rosenhouse’s criticism of conservation of information that it is more up to date than Wikipedia’s account of it. But Rosenhouse leaves the key literature in this area uncited and unexplained (and if he did cite it, I expect he would misexplain it). Proponents of intelligent design have grown accustomed to this conspiracy of silence, where anything that rigorously undermines Darwinism is firmly ignored (much like our contemporary media is selective in its reporting, focusing exclusively on the party line and sidestepping anything that doesn’t fit the desired narrative). Indeed, I challenge readers of this review to try to get the three above references inserted into this Wikipedia subentry. Good luck getting past the biased editors who control all Wikipedia entries related to intelligent design.


So, what is a conservation of information theorem? Readers of Rosenhouse’s book learn that such theorems exist. But Rosenhouse neither states nor summarizes these theorems. The only relevant theorems he recaps are the no free lunch theorems, which show that no algorithm outperforms any other algorithm when suitably averaged across various types of fitness landscapes. But conservation of information theorems are not no free lunch theorems. Conservation of information picks up where no free lunch leaves off. No free lunch says there’s no universally superior search algorithm. Thus, to the degree a search does well at some tasks, it does poorly at others. No free lunch in effect states that every search involves a zero-sum tradeoff. Conservation of information, by contrast, starts by admitting that for particular searches, some do better than others, and then asks what allows one search to do better than another. It answers that question in terms of active information. Conservation of information theorems characterize active information. 
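
For reference, the “no free lunch” result paraphrased above is the Wolpert-Macready theorem; a sketch of its standard statement, in their notation, runs as follows.

```latex
% Wolpert & Macready's "no free lunch" theorem (1997), sketched in their notation:
% for any two search algorithms a_1 and a_2, and any number m of distinct points
% sampled, the distribution of observed cost sequences d_m^y is the same once
% summed over all objective functions f from a finite space X to a finite set Y.
\[
  \sum_{f} P\!\left(d_m^{y} \mid f, m, a_1\right)
  \;=\;
  \sum_{f} P\!\left(d_m^{y} \mid f, m, a_2\right)
\]
% Averaged over all fitness landscapes, no algorithm outperforms any other.
```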


A Bogus Notion?

To read Rosenhouse, you would think that active information is a bogus notion. But in fact, active information is a useful concept that all of us understand intuitively, even if we haven’t put a name to it. It arises in search. Search is a very general concept, and it encompasses evolution (Rosenhouse, recall, even characterized evolution in terms of “searching protein space”). Most interesting searches are needle-in-the-haystack problems. What this means is that there’s a baseline search that could in principle find the needle (e.g., exhaustive search or uniform random sampling), but that would be highly unlikely to find the needle in any realistic amount of time. What you need, then, is a better search, one that can find the needle with a higher probability so that it is likely, with the time and resources on hand, to actually find the needle. 


We all recognize active information. You’re on a large field. You know an Easter egg is hidden somewhere in it. Your baseline search is hopeless — you stand no realistic chance of finding the Easter egg. But now someone tells you “warm, cold, warm, warmer, hot, you’re burning up.” That’s a better search, and it’s better because you are being given better information. Active information measures the amount of information that needs to be expended to improve on a baseline search to make it a better search. In this example, note that there are many possible directions that Easter egg hunters might receive in order to try to find the egg. Most such directions will not lead to finding the egg. Accordingly, if finding the egg is finding a needle in a haystack, so is finding the right directions among the different possible directions. Active information measures the information cost of finding the right directions.
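
In the Dembski-Marks papers cited above, this intuition is quantified by three standard quantities. A brief sketch, with p the probability that the baseline search hits the target and q the probability that the improved search does:

```latex
% A sketch of the standard definitions (assuming q >= p > 0):
\begin{align*}
  I_{\Omega} &= -\log_2 p && \text{endogenous information: the difficulty of the baseline search}\\
  I_{S}      &= -\log_2 q && \text{exogenous information: the difficulty remaining for the better search}\\
  I_{+}      &= I_{\Omega} - I_{S} = \log_2\frac{q}{p} && \text{active information: what must be supplied to turn the former into the latter}
\end{align*}
```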


Treasure Island

In the same vein, consider a search for treasure on an island. If the island is large and the treasure is well hidden, the baseline search may be hopeless — way too improbable to stand a reasonable chance of finding the treasure. But suppose you now get a treasure map where X marks the spot of the treasure. You’ve now got a better search. What was the informational cost of procuring that better search? Well, it involved sorting through all possible maps of the island and finding one that would identify the treasure location. But for every map where X marks the right spot, there are many where X marks the wrong spot. According to conservation of information, finding the right map faces an improbability no less, and possibly greater, than finding the treasure via the baseline search. Active information measures the relevant (im)probability.


We’ve seen active information before in the Dawkins Weasel example. The baseline search for METHINKS IT IS LIKE A WEASEL stands no hope of success. It requires a completely random set of keystrokes typing all the right letters and spaces of this phrase without error in one fell swoop. But given a fitness function that assigns higher fitness to phrases where letters match the target phrase METHINKS IT IS LIKE A WEASEL, we’ve now got a better search, one that will converge to the target phrase quickly and with high probability. Most fitness functions, however, don’t take you anywhere near this target phrase. So how did Dawkins find the right fitness function to evolve to the target phrase? For that, he needed active information.
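
To make the contrast concrete, here is a minimal Weasel-style sketch in Python (not Dawkins’s original program; the population size and mutation rate are illustrative guesses). Note that the fitness function literally contains the target phrase, which is the point of the active-information critique: the better search succeeds because target-specific information has been built into it.

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "   # 27 possible characters per position

def fitness(phrase):
    """Count positions that already match the target (the target is built in)."""
    return sum(a == b for a, b in zip(phrase, TARGET))

def mutate(phrase, rate=0.05):
    """Copy the phrase, changing each character with probability `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in phrase)

# Baseline search: blind random typing. One attempt succeeds with probability
# (1/27)**28, about 8e-41, so no realistic number of attempts will find it.
print("baseline probability per attempt:", (1 / len(ALPHABET)) ** len(TARGET))

# Better search: each generation, keep the best of 100 mutated copies.
parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generations = 0
while parent != TARGET:
    generations += 1
    parent = max([mutate(parent) for _ in range(100)] + [parent], key=fitness)
print(f"fitness-guided search reached the target in {generations} generations")
```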


My colleagues and I have proved several conservation of information theorems, which come in different forms depending on the type and structure of information needed to render a search successful. Here’s the most important conservation of information theorem proved to date. It appears in the third article cited above (i.e., “A General Theory of Information Cost Incurred by Successful Search”):



Wrapping Up the Discussion

Even though the statement of this theorem is notation-heavy and will appear opaque to most readers, I give it nonetheless because, as unreadable as it may seem, it exhibits certain features that can be widely appreciated, thereby helping to wrap up this discussion of conservation of information, especially as it relates to Rosenhouse’s critique of the concept. Consider therefore the following three points:


The first thing to see in this theorem is that it is an actual mathematical theorem. It rises to Rosenhouse’s track 2. A peer-reviewed literature now surrounds the work. The theorem depends on advanced probability theory, measure theory, and functional analysis. The proof requires vector-valued integration. This is graduate-level real analysis. Rosenhouse does algebraic graph theory, so this is not his field, and he gives no indication of actually understanding these theorems. For him to forgo providing even the merest sketch of the mathematics underlying this work because “it would not further our agenda to do so” (p. 212–213) and for him to dismiss these theorems as “trivial musings” (p. 269) betrays an inability to grapple with the math and understand its implications, as much as it betrays his agenda to deep-six conservation of information irrespective of its merits.

The Greek letter mu denotes a null search and the Greek letter nu an alternative search. These are respectively the baseline search and the better search described earlier. Active information here, measured as log(r/q), measures the information required in a successful search for a search (usually abbreviated S4S), which is the information to find nu to replace mu. Searches can themselves be subject to search, and it’s these higher level searches that are at the heart of the conservation of information theorems. Another thing to note about mu and nu is that they don’t prejudice the types of searches or the probabilities that represent them. Mu and nu are represented as probability measures. But they can be any probability measures that assign at least as much probability to the target T as uniform probability (the assumption being that any search can at least match the performance of a uniform probability search — this seems totally reasonable). What this means is that conservation of information is not tied to uniform probability or equiprobability. Rosenhouse, by contrast, claims that all mathematical intelligent design arguments follow what he calls the Basic Argument from Improbability, which he abbreviates BAI (p. 126). BAI attributes to design proponents the most simple-minded assignment of probabilities (namely uniform probability or equiprobability). Conservation of information, like specified complexity, by contrast, attempts to come to terms with the probabilities as they actually are. This theorem, in its very statement, shows that it does not fall under Rosenhouse’s BAI. 

The search space Omega (Ω) in this example is finite. Its finiteness, however, in no way undercuts the generality of this theorem. All scientific work, insofar as it measures and gauges physical reality, will use finite numbers and finite spaces. The mathematical models used may involve infinities, but these can in practice always be approximated finitely. This means that these models belong to combinatorics. Rosenhouse, throughout his book, makes out that combinatorics is a dirty word, and that intelligent design, insofar as it looks to combinatorics, is focused on simplistic finite models and limits itself to uniform or equiprobabilities. But this is nonsense. Any object, mathematical or physical, consisting of finitely many parts related to each other in finitely many ways is a combinatorial object. Moreover, combinatorial objects don’t care what probability distributions are placed on them. Protein machines are combinatorial objects. Computer programs (and these include the artificial life simulations with which Rosenhouse is infatuated) are combinatorial objects. The bottom line is that it is no criticism at all of intelligent design to say that it makes extensive use of combinatorics.

Next, “Closing Thoughts on Jason Rosenhouse.”


Editor’s note: This review is cross-posted with permission of the author from BillDembski.com.

What they think they know that just ain't so?

 Darwinists’ Delusion: Closing Thoughts on Jason Rosenhouse

William A. Dembski

I have been reviewing Jason Rosenhouse’s new book, The Failures of Mathematical Anti-Evolutionism (Cambridge University Press), serially. This is the final post in the review. For the full series, go here.


Would the world be better off if Jason Rosenhouse had never written The Failures of Mathematical Anti-Evolutionism? I, for one, am happy he did write it. It shows what the current state of thinking is by Darwinists on the mathematical ideas that my colleagues and I in the intelligent design movement have developed over the years. In particular, it shows how little progress they’ve made in understanding and engaging with these ideas. It also alerted me to the resurgence of artificial life simulations. Not that artificial life ever went away. But Rosenhouse cites what is essentially a manifesto by 53 authors (including ID critics Christoph Adami, Robert Pennock, and Richard Lenski) that all is well with artificial life: “The Surprising Creativity of Digital Evolution.” (2020) In fact, conservation of information shows that artificial life is a hopeless enterprise. But as my colleague Jonathan Wells underscored in his book Zombie Science, some disreputable ideas are just too pleasing and comforting for Darwinists to disown, and artificial life is one of them. So it was helpful to learn from Rosenhouse about the coming zombie apocalypse.


Selective Criticism

As indicated at the start of this review, I’ve been selective in my criticisms of Rosenhouse’s book, focusing especially on where he addressed my work and on where it impinged on that of some of my close colleagues in the intelligent design movement. I could easily have found more to criticize, but this review is already long. Leaving aside his treatment of young-earth creationists and the Second Law of Thermodynamics, he reflexively repeats Darwinian chestnuts, such as that gene duplication increases information, as though a mere increase in storage capacity can explain biologically useful information (“We’ve doubled the size of your hard drive and you now have twice the information!”). And wherever possible, he tries to paint my colleagues as rubes and ignoramuses. Thus he portrays Stephen Meyer as assuming a simplistic probabilistic model of genetic change when in the original source (Darwin’s Doubt) he is clearly citing an older understanding (by the Wistar mathematicians back in the 1960s) and then makes clear that a newer, more powerful understanding is available today. Disinformation is a word in vogue these days, and it characterizes much of Rosenhouse’s book.


In closing, I want to consider an example that appears near the start of The Failures of Mathematical Anti-Evolutionism (p. 32) and reappears at the very end in the “Coda” (pp. 273–274). It’s typical, when driving on a major street, to encounter cross streets whose two sides line up directly opposite each other, so that traffic on the cross street can pass straight across the major street. Yet it can happen, more often on country roads, that the cross street meets the major street in what amount to two closely spaced T-intersections, so that crossing the major street to stay on the cross street requires a jog in the traffic pattern.


Rosenhouse is offering a metaphor here, with the first option representing intelligent design, the second Darwinism. According to him, the straight path across the major street represents “a sensible arrangement of roads of the sort a civil engineer would devise” whereas the joggy path represents “an absurd and potentially dangerous arrangement that only makes sense when you understand the historical events leading up to it.” (p. 32) Historical contingencies unguided by intelligence, in which roads are built without coordination, thus explain the second arrangement, and by implication explain biological adaptation.


Rosenhouse grew up near some roads that followed the second arrangement. Recently he learned that in place of two close-by T-intersections, the cross street now goes straight across. He writes:


Apparently, in the years since I left home, that intersection has been completely redesigned. The powers that be got tired of cleaning up after the numerous crashes and human misery resulting from the poor design of the roads. So they shut it all down for several months and completely redid the whole thing. Now the arrangement of roads makes perfect sense, and the number of crashes there has declined dramatically. The anti-evolutionists are right about one thing: we really can distinguish systems that were designed from those that evolved gradually. Unfortunately for them, the anatomy of organisms points overwhelmingly toward evolution and just as overwhelmingly away from design. (p. 273–274)


A Failed Metaphor

The blindness in this passage is staggering, putting on full display the delusional world of Darwinists and contrasting it with the real world, which is chock-full of design. Does it really need to be pointed out that roads are designed? That where they go is designed? And that even badly laid out roads are nonetheless laid out by design? But as my colleague Winston Ewert pointed out to me, Rosenhouse’s story doesn’t add up even if we ignore the design that’s everywhere. On page 32, he explains that the highway was built first and that towns later arose on either side of it, eventually connecting the crossroads to the highway. But isn’t it obvious, upon the merest reflection, that whoever connected the second road to the highway could have built it opposite the first road that was already there? So why didn’t they do it? The historical timing of the construction of the roads doesn’t explain it. Something else must be going on.


There are in fact numerous such intersections in the US. Typically they are caused by grid corrections due to the earth’s curvature. In other words, they are a consequence of fitting a square grid onto a spherical earth. Further, such intersections can actually be safer, as a report on staggered junctions by the European Road Safety Decision Support System makes clear. So yes, this example is a metaphor, not for the power of historical contingency to undercut intelligent design, but for the delusive power of Darwinism to look to historical contingency for explanations that support Darwinism but that under even the barest scrutiny fall apart.


Enough said. Stay tuned for the second edition of The Design Inference!


Editor’s note: This review is cross-posted with permission of the author from BillDembski.com.


Saturday, 2 July 2022

Trouble comes in pairs for Darwinism?

 Günter Bechly: Species Pairs Wreck Darwinism

Evolution News @DiscoveryCSC

On a new episode of ID the Future, distinguished German paleontologist Günter Bechly continues a discussion of his new argument against modern evolutionary theory. According to Bechly, contemporary species pairs diverge hardly at all over millions of years, even when isolated from each other, and yet we’re supposed to believe that the evolutionary process built dramatically distinct body plans in similar time frames at various other times in the history of life. Why believe that? He suggests this pattern of relative stasis among species pairs strikes a significant and damaging blow to Darwinian theory.


In this Part 2 episode, Bechly and host Casey Luskin discuss mice/rat pairs, cattle and bison, horses and donkeys, Asian and African elephants, the Asian black bear and the South American spectacled bear, river hippos and West African pygmy hippos, the common dolphin and the bottle-nosed dolphin, and the one outlier in this pattern, chimpanzees and humans. If chimps and humans really did evolve from a common ancestor, why do they appear to be the lone exception to this pattern of modern species pairs differing in only trivial ways? Bechly notes that whatever one’s explanation, there appears to be clear evidence here of human exceptionalism. He and Luskin go on to cast doubt on the idea that mindless evolutionary processes could have engineered the suite of changes necessary to convert an ape ancestor into upright walking, talking, technology-fashioning human beings.


What about Hawaiian silversword plants? They seem to have evolved into dramatically different body plans in the past few million years. Are these an exception to Bechly’s claimed pattern of species pair stasis? After all, the differences among silverswords can be quite dramatic, with differences far more extensive than what we find between, say, Asian and African elephants or horse and donkey. Drawing on a second article on the topic, he notes that some extant species of plants possess considerable phenotypic plasticity. They have the capacity to change quite dramatically and still breed with other very different varieties. This appears to be the case with silverswords. There is more to his argument. Tune in to hear Dr. Bechly respond to additional objections that Dr. Luskin raises.  Download the podcast or listen to it here. Part 1 of their conversation is here.


Washing their dirty linen in public?

 Donate Darwinism for a Tax Credit? Evolutionists Admit Their Field’s Failures

David Klinghoffer

An article in The Guardian by science journalist Stephen Buranyi represents something remarkable in the way the public processes the failures of evolutionary theory. In the past, those failures have been admitted by some biologists…but always in settings (technical journals, conferences) where they thought nobody outside their professional circles was listening. It’s as if a married couple were going through rough times in their relationship. They’d discuss it between themselves, with close friends, maybe with a counselor. But for goodness sake they wouldn’t put it on Facebook, where all marriages are blessed exclusively with good cheer and good fortune.


Scandalous Admissions

Well, the field of evolutionary biology has just done the equivalent of a massive Facebook dump, admitting that Jim and Sandy, who always seemed so happy, are in fact perilously perched on the rocks. In a very long article, top names in the field share with Buranyi what intelligent design proponents already knew, but few Guardian readers guessed. The headline from the left-leaning British daily asks, “Do we need a new theory of evolution?” Answer in one word: yes. The article is full of scandalous admissions:


Strange as it sounds, scientists still do not know the answers to some of the most basic questions about how life on Earth evolved. Take eyes, for instance. Where do they come from, exactly? The usual explanation of how we got these stupendously complex organs rests upon the theory of natural selection….


This is the basic story of evolution, as recounted in countless textbooks and pop-science bestsellers. The problem, according to a growing number of scientists, is that it is absurdly crude and misleading.


For one thing, it starts midway through the story, taking for granted the existence of light-sensitive cells, lenses and irises, without explaining where they came from in the first place. Nor does it adequately explain how such delicate and easily disrupted components meshed together to form a single organ. And it isn’t just eyes that the traditional theory struggles with. “The first eye, the first wing, the first placenta. How they emerge. Explaining these is the foundational motivation of evolutionary biology,” says Armin Moczek, a biologist at Indiana University. “And yet, we still do not have a good answer. This classic idea of gradual change, one happy accident at a time, has so far fallen flat.”


There are certain core evolutionary principles that no scientist seriously questions. Everyone agrees that natural selection plays a role, as does mutation and random chance. But how exactly these processes interact — and whether other forces might also be at work — has become the subject of bitter dispute. “If we cannot explain things with the tools we have right now,” the Yale University biologist Günter Wagner told me, “we must find new ways of explaining.”…


[T]his is a battle of ideas over the fate of one of the grand theories that shaped the modern age. But it is also a struggle for professional recognition and status, about who gets to decide what is core and what is peripheral to the discipline. “The issue at stake,” says Arlin Stoltzfus, an evolutionary theorist at the IBBR research institute in Maryland, “is who is going to write the grand narrative of biology.” And underneath all this lurks another, deeper question: whether the idea of a grand story of biology is a fairytale we need to finally give up. [Emphasis added.]


“Absurdly crude and misleading”? A “classic idea” that “has so far fallen flat”? “A fairytale we need to finally give up”? Scientists locked in a desperate struggle for “professional recognition and status”? What about for the truth? This is how writers for Evolution News have characterized the troubles with Darwinian theory. But I didn’t expect to see it in The Guardian.


A Familiar Narrative

Buranyi runs through a familiar narrative: the modern synthesis, the challenge from the Extended Evolutionary Synthesis, the 2016 “New Trends in Evolutionary Biology” meeting at the Royal Society (which was covered here extensively), how some evolutionists condemned the conference while others embraced its revisionist messaging, efforts to prop up unguided evolution with exotic ideas of “plasticity, evolutionary development, epigenetics, cultural evolution,” etc. 


If you’ve ever owned an automobile toward the end of its life, the situation will be familiar: the multiple problems all at once, the multiple attempted fixes, the expense, the trouble, the worry about the car breaking down at any inconvenient or dangerous moment (like in the middle of the freeway), all of which together signal that it’s time not to sell the car (who would want it?) but to have it towed off and donated to charity for a tax credit.


Buranyi doesn’t mention the intelligent design theorists in attendance at the Royal Society meeting — Stephen Meyer, Günter Bechly, Douglas Axe, Paul Nelson, and others. He doesn’t mention the challenge from intelligent design at all. That’s okay. I didn’t expect him to do so. Anyway, readers of Evolution News will already be familiar with almost everything Buranyi reports.


Despairing Statements

He concludes with seemingly despairing statements from evolutionists along the lines of, “Oh, we never needed a grand, coherent theory like that, after all.”


Over the past decade the influential biochemist Ford Doolittle has published essays rubbishing the idea that the life sciences need codification. “We don’t need no friggin’ new synthesis. We didn’t even really need the old synthesis,” he told me….


The computational biologist Eugene Koonin thinks people should get used to theories not fitting together. Unification is a mirage. “In my view there is no — can be no — single theory of evolution,” he told me.


I see. Evolutionists have, until now, been very, very reluctant to admit such things in the popular media. Always, they heeded the obligation to present an illusory picture of wedded bliss to the unwashed, who, if given some idea of the truth, might draw their own conclusions and maybe even take up with total heresies like intelligent design. Now that illusion of blessed domesticity has been cast aside in a most dramatic fashion. Read the rest of Buranyi’s article. Your eyebrows will go up numerous times.


Who is this one Father?

 Have we not all one father? hath not one God created us? why do we deal treacherously every man against his brother, profaning the covenant of our fathers? 

Note please that this ONLY Father is also our ONLY God.

Thus failure to properly identify this one Father is grounds for disqualification from divine favor.

And this is life eternal, that they might know thee the ONLY TRUE GOD, and Jesus Christ, whom THOU hast sent.

Wednesday, 29 June 2022

Nothing in biology is as complex as Darwinism's relationship with the truth?

 Jason Rosenhouse and Specified Complexity

William A. Dembski

I am reviewing Jason Rosenhouse’s new book, The Failures of Mathematical Anti-Evolutionism (Cambridge University Press), serially. For the full series so far, go here.


The method for inferring design laid out in my book The Design Inference amounts to determining whether an event, object, or structure exhibits specified complexity or, equivalently, specified improbability. The term specified complexity does not actually appear in The Design Inference, where the focus is on specified improbability. Specified improbability identifies things that are improbable but also suitably patterned, or specified. Specified complexity and specified improbability are the same notion. 


To see the connection between the two terms, imagine tossing a fair coin. If you toss it thirty times, you’ll witness an event of probability 1 in 2^30, or roughly 1 in a billion. At the same time, if you record those coin tosses as bits (0 for tails, 1 for heads), that will require 30 bits. The improbability of 1 in 2^30 thus corresponds precisely to the number of bits required to identify the event. The greater the improbability, the greater the complexity. Specification then refers to the right sort of pattern that, in the presence of improbability, eliminates chance. 
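To make the bit-count correspondence concrete, here is a minimal Python sketch (an illustration only, not code from The Design Inference) that converts an event’s probability into its information measure in bits:

```python
import math

def bits_from_probability(p: float) -> float:
    """Information measure, in bits, of an event with probability p."""
    return -math.log2(p)

# Thirty tosses of a fair coin: any specific sequence has probability 2^-30.
p_sequence = 0.5 ** 30
print(p_sequence)                         # ~9.3e-10, roughly 1 in a billion
print(bits_from_probability(p_sequence))  # 30.0 bits -- one bit per recorded toss
```

As expected, the more improbable the sequence, the more bits are required to record it.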


An Arrow Shot at a Target

Not all patterns eliminate chance in the presence of improbability. Take an arrow shot at a target. Let’s say the target has a bullseye. If the target is fixed and the arrow is shot at it, and if the bullseye is sufficiently small so that hitting it with the arrow is extremely improbable, then chance may rightly be eliminated as an explanation for the arrow hitting the bullseye. On the other hand, if the arrow is shot at a large wall, where the probability of hitting the wall is large, and the target is then painted around the arrow sticking in the wall so that the arrow is squarely in the bullseye, then no conclusion about whether the arrow was or was not shot by chance is possible. 


Specified improbability, or specified complexity, calls on a number of interrelated concepts. Besides a way of calculating or estimating probability and a criterion for determining whether a pattern is indeed a specification, the notion requires factoring in the number of relevant events that could occur, or what are called probabilistic resources. For example, multiple arrows allowing multiple shots will make it easier to hit the bullseye by chance. Moreover, the notion requires having a coherent rationale for determining what probability bounds may legitimately be counted as small enough to eliminate chance. Also, there’s the question of factoring in other specifications that may compete with the one originally identified, such as having two fixed targets on a wall and trying to determine whether chance could be ruled out if either of them were hit with an arrow. 
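As a rough numerical illustration of probabilistic resources, the sketch below (with a made-up, purely hypothetical single-shot probability) shows how allowing many independent shots raises the chance of at least one bullseye:

```python
def prob_at_least_one_hit(p_single: float, shots: int) -> float:
    """Probability of at least one success in `shots` independent attempts."""
    return 1 - (1 - p_single) ** shots

p = 1e-6  # hypothetical probability of hitting the bullseye with a single arrow
for n in (1, 1_000, 1_000_000):
    print(n, prob_at_least_one_hit(p, n))
# 1        ~0.000001
# 1000     ~0.001
# 1000000  ~0.63 -- with enough arrows, a chance hit is no longer surprising
```

The same event can thus lie well within the reach of chance or far beyond it, depending on how many opportunities for it are factored in.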


The basic theory for explaining how specified improbability/complexity is appropriately used to infer design was laid out in The Design Inference, and then refined (in some ways simplified, in some ways extended) over time. The notion was well vetted. It was the basis for my doctoral dissertation in the philosophy of science and the foundations of probability theory — this dissertation was turned into The Design Inference. I did this work in philosophy after I had already done a doctoral dissertation in mathematics focusing on probability and chaos theory (Leo Kadanoff and Patrick Billingsley were the advisors on that dissertation). 


The manuscript for The Design Inference went through a stringent review by academic editors at Cambridge University Press, headed by Brian Skyrms, a philosopher of probability at UC Irvine and one of the few philosophers to be in the National Academy of Sciences. When I was a postdoc at Notre Dame in 1996–97, the philosopher Phil Quinn revealed to me that he had been a reviewer, giving Cambridge an enthusiastic thumbs up. He also told me that he had especially liked The Design Inference’s treatment of complexity theory (chapter four in the book). 


But There’s More

My colleagues Winston Ewert and Robert Marks and I have given specified complexity a rigorous formulation in terms of Kolmogorov complexity/algorithmic information theory:


Winston Ewert, William Dembski, and Robert J. Marks II (2014). “Algorithmic Specified Complexity.” In J. Bartlett, D. Hemser, J. Hall, eds., Engineering and the Ultimate: An Interdisciplinary Investigation of Order and Design in Nature and Craft (Broken Arrow, Okla.: Blyth Institute Press).

Winston Ewert, William Dembski, and Robert J. Marks II (2015). “Algorithmic Specified Complexity in the Game of Life.” IEEE Transactions on Systems, Man, and Cybernetics: Systems, 45(4), 584–594.

True to form, critics of the concept refuse to acknowledge that specified complexity is a legitimate well-defined concept. Go to the Wikipedia entry on specified complexity, and you’ll find the notion dismissed as utterly bogus. Publications on specified complexity by colleagues and me, like those just listed, are ignored and left uncited. Rosenhouse is complicit in such efforts to discredit specified complexity. 


But consider: scientists must calculate, or at least estimate, probabilities all the time, and that’s true even of evolutionary biologists. For instance, John Maynard Smith, back in his 1958 The Theory of Evolution, concludes that flatworms, annelids, and molluscs, representing three different phyla, must nonetheless descend from a common ancestor because their common cleavage pattern in early development “seems unlikely to have arisen independently more than once.” (Smith, pp. 265–266) “Unlikely” is, of course, a synonym for “improbable.” 


Improbability by itself, however, is not enough. The events to which we assign probabilities need to be identified, and that means they must match identifiable patterns (in the Smith example, it’s the common cleavage pattern that he identified). Events exhibiting no identifiable pattern are events over which we can exercise no scientific insight and about which we can draw no scientific conclusion.


Hung Up on Specification

Even so, Rosenhouse seems especially hung up on my notion of specification, which he mistakenly defines as “independently describable” (p. 133) or “describable without any reference to the object itself” (p. 141). But nowhere does he give the actual definition of specification. To motivate our understanding of specification, I’ve used such language as “independently given” or “independently identifiable.” But these are intuitive ways of setting out the concept. Specification has a precise technical definition, of which Rosenhouse seems oblivious.


In The Design Inference, I characterized specification precisely in terms of a complexity measure that “estimates the difficulty of formulating patterns.” This measure then needs to work in tandem with a complexity bound that “fixes the level of complexity at which formulating such patterns is feasible.” (TDI, p. 144) That was in 1998. By 2005, this core idea stayed unchanged, but I preferred to use the language of descriptive complexity and minimum description length to characterize specification (see my 2005 article on Specification, published in Philosophia Christi, which Rosenhouse cites but without, again, giving the actual definition of the term specification). 


Two Notions of Complexity

So, what’s the upshot of specification according to this definition? Essentially, specified complexity or specified improbability involves two notions of complexity, one probabilistic, the other linguistic or descriptive. Thus we can speak of probabilistic complexity and descriptive complexity. Events become probabilistically more complex as they become more improbable (this is consistent with the earlier point that longer, more improbable sequences of coin tosses require longer bit strings to record). Descriptive complexity, by contrast, attaches to the patterns that identify events via a descriptive language: it is measured by the length of the shortest description that picks out an event. The specification in specified complexity thus refers to patterns with short descriptions, and specified complexity refers to events that have high probabilistic complexity but whose identifying patterns have low descriptive complexity. 


To appreciate how probabilistic and descriptive complexity play off each other in specified complexity, consider the following example from poker. Take the hands corresponding to “royal flush” and “any hand.” These descriptions are roughly the same length and very short. Yet “royal flush” refers to only 4 of the 2,598,960 possible poker hands and thus describes an event of probability 4/2,598,960 = 1/649,740. “Any hand,” by contrast, allows for any of the 2,598,960 poker hands, and thus describes an event of probability 1. Clearly, if we witnessed a royal flush, we’d be inclined, on the basis of its short description and the low-probability event to which it corresponds, to refuse to attribute it to chance. Now granted, with all the poker that’s played worldwide, the probability of 1/649,740 is not small enough to decisively rule out its chance occurrence (in the history of poker, royal flushes have appeared by chance). But certainly we’d be less inclined to ascribe a royal flush to chance than we would any hand at all.
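The arithmetic here is easy to verify; a few lines of Python (an illustrative check, not taken from any of the publications discussed) count the five-card hands and the royal-flush probability:

```python
from math import comb

total_hands = comb(52, 5)   # 2,598,960 possible five-card poker hands
royal_flushes = 4           # one royal flush per suit
p_royal = royal_flushes / total_hands

print(total_hands)                  # 2598960
print(p_royal)                      # about 1.54e-06
print(total_hands / royal_flushes)  # 649740.0 -- i.e., 1 in 649,740
```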


The general principle illustrated in this example is that large probabilistic complexity (or low probability) and small descriptive complexity combine to yield specified complexity. Specifications are then those patterns that have small descriptive complexity. Note that it can be computationally intractable to calculate minimum description length exactly, but that often we can produce an effective estimate for it by finding a short description, which, by definition, will then constitute an upper bound for the absolute minimum. As it is, actual measures of specified complexity take the form of a negative logarithm applied to the product of a descriptive complexity measure times a probability. Because a negative logarithm makes small things big and big things small, high specified complexity corresponds to small probability multiplied with small descriptive complexity. This is how I find it easiest to keep straight how to measure specified complexity. 
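The general form just described can be put in schematic code. The sketch below is only an illustration of that form: the numbers are arbitrary, and the published formulations include further factors, such as probabilistic resources, that are omitted here.

```python
import math

def specified_complexity(descriptive_complexity: float, probability: float) -> float:
    """Schematic form: negative log of (descriptive complexity x probability).
    The value is large only when both factors are small."""
    return -math.log2(descriptive_complexity * probability)

# Arbitrary illustrative values:
# a short (low-complexity) description paired with a very improbable event...
print(specified_complexity(descriptive_complexity=2**10, probability=2**-100))  # 90.0
# ...versus a long (high-complexity) description paired with the same event.
print(specified_complexity(descriptive_complexity=2**80, probability=2**-100))  # 20.0
```

Only the first combination, low descriptive complexity together with low probability, yields a high value.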


Rosenhouse, however, gives no evidence of grasping specification or specified complexity in his book (pp. 137–146). For instance, he will reject that the flagellum is specified, claiming that it is not “describable without any reference to the object itself,” as though that were the definition of specification. (See also p. 161.) Ultimately, it’s not a question of independent describability, but of short or low-complexity describability. I happen to think that the description “bidirectional motor-driven propeller” is an independent way of describing the flagellum because humans invented bidirectional motor-driven propellers before they found them, in the form of flagella, on the backs of E. coli and other bacteria (if something has been independently identified, then it is independently identifiable). But what specifies it is that it has a short description, not that the description could or could not be identified independently of the flagellum. By contrast, a random assortment of the protein subunits that make up the flagellum would be much harder to describe. The random assortment would therefore require a much longer description, and would thus not be specified. 


The Science Literature

The mathematical, linguistic, and computer science literature is replete with complexity measures that use description length, although the specific terminology to characterize such measures varies with field of inquiry. For instance, the abbreviation MDL, or minimum description length, has wide currency; it arises in information theory and merits its own Wikipedia entry. Likewise AIT, or algorithmic information theory, has wide currency, where the focus is on compressibility of computer programs, so that highly compressible programs are the ones with shorter descriptions. In any case, specification and specified complexity are well defined mathematical notions. Moreover, the case for specified complexity strongly implicating design when probabilistic complexity is high and descriptive complexity is low is solid. I’m happy to dispute these ideas with anyone. But in such a dispute, it will have to be these actual ideas that are under dispute. Rosenhouse, by contrast, is unengaged with these actual ideas, attributing to me a design inferential apparatus that I do not recognize, and then offering a refutation of it that is misleading and irrelevant. 
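As a crude, hands-on stand-in for description length (an illustrative sketch only; compressed size is merely an upper bound on true Kolmogorov complexity, which is uncomputable), one can compare how a patterned string and a random string fare under a standard compressor:

```python
import os
import zlib

patterned = b"HT" * 500          # 1,000 bytes with an obvious repeating pattern
random_bytes = os.urandom(1000)  # 1,000 bytes with no exploitable pattern

print(len(zlib.compress(patterned)))     # a few dozen bytes: the pattern admits a short description
print(len(zlib.compress(random_bytes)))  # about 1,000 bytes or slightly more: no short description found
```

The patterned sequence compresses drastically; the random one does not, which is the intuition behind using description length as a measure of specification.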


As a practical matter, it’s worth noting that most Darwinian thinkers, when confronted with the claim that various biological systems exhibit specified complexity, don’t challenge that the systems in question (like the flagellum) are specified (Dawkins in The Blind Watchmaker, for instance, never challenges specification). In fact, they are typically happy to grant that these systems are specified. The reason they give for not feeling the force of specified complexity in triggering a design inference is that, as far as they’re concerned, the probabilities aren’t small enough. And that’s because natural selection is supposed to wash away any nagging improbabilities. 


A Coin-Tossing Analogy

In a companion essay to his book for Skeptical Inquirer, Rosenhouse offers the following coin-tossing analogy to illustrate the power of Darwinian processes in overcoming apparent improbabilities:


[Creationists argue that] genes and proteins evolve through a process analogous to tossing a coin multiple times. This is untrue because there is nothing analogous to natural selection when you are tossing coins. Natural selection is a non-random process, and this fundamentally affects the probability of evolving a particular gene. To see why, suppose we toss 100 coins in the hopes of obtaining 100 heads. One approach is to throw all 100 coins at once, repeatedly, until all 100 happen to land heads at the same time. Of course, this is exceedingly unlikely to occur. An alternative approach is to flip all 100 coins, leave the ones that landed heads as they are, and then toss again only those that landed tails. We continue in this manner until all 100 coins show heads, which, under this procedure, will happen before too long. 


The latter approach to coin tossing, which retosses only the coins that landed tails, corresponds, for Rosenhouse, to Darwinian natural selection making probable for evolution what at first blush might seem improbable. Of course, the real issue here is to form reliable estimates of what the actual probabilities are even when natural selection is thrown into the mix. The work of Mike Behe and Doug Axe argues that for some biological systems (such as molecular machines and individual enzymes), natural selection does nothing to mitigate what, without it, are vast improbabilities. Some improbabilities remain extreme despite natural selection. 
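For readers who want to see the contrast Rosenhouse is drawing, here is a small simulation of the coin-tossing procedure he describes (an illustrative sketch, not code from his essay):

```python
import random

def rounds_to_all_heads(n_coins: int = 100) -> int:
    """Toss n coins; keep the heads and retoss only the tails until none remain."""
    remaining = n_coins  # coins still showing tails
    rounds = 0
    while remaining > 0:
        # each retossed coin lands tails again with probability 1/2
        remaining = sum(random.random() < 0.5 for _ in range(remaining))
        rounds += 1
    return rounds

print(rounds_to_all_heads())  # typically about 7 to 10 rounds for 100 coins
```

Retossing only the tails finishes in a handful of rounds, whereas waiting for all 100 coins to land heads simultaneously would take on the order of 2^100 attempts; the question raised above is whether the actual biological probabilities remain small even with selection in the mix.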


One final note before leaving specification and specified complexity. Rosenhouse suggests that in defining specified complexity as I did, I took a pre-theoretic notion as developed by origin-of-life researcher Leslie Orgel, Paul Davies, and others, and then “claim[ed] to have developed a mathematically rigorous form of the concept.” In other words, he suggests that I took a track 1 notion and claimed to turn it into a track 2 notion. Most of the time, Rosenhouse gives the impression that moving mathematical ideas from track 1 to track 2 is a good thing. But not in this case. Instead, Rosenhouse faults me for claiming that “this work constitutes a genuine contribution to science, and that [ID proponents] can use [this] work to prove that organisms are the result of intelligent design.” For Rosenhouse, “It is these claims that are problematic, to put it politely, for reasons we have already discussed.” (p. 161) 


The irony here is rich. Politeness aside, Rosenhouse’s critique of specified complexity is off the mark because he has mischaracterized its central concept, namely, specification. But what makes this passage particularly cringeworthy is that Leslie Orgel, Paul Davies, Francis Crick, and Richard Dawkins have all enthusiastically endorsed specified complexity, in one form or another, sometimes using the very term, at other times using the terms complexity and specification (or specificity) in the same breath. All of them have stressed the centrality of this concept for biology and, in particular, for understanding biological origins. 


Yet according to Rosenhouse, “These authors were all using ‘specified complexity’ in a track one sense. As a casual saying that living things are not just complex, but also embody independently-specifiable patterns, there is nothing wrong with the concept.” (p. 161) But in fact, there’s plenty wrong if this concept must forever remain merely at a pre-theoretic, or track 1, level. That’s because those who introduced the term “specified complexity” imply that the underlying concept can do a lot of heavy lifting in biology, getting at the heart of biological innovation and origins. So, if specified complexity stays forcibly confined to a pre-theoretic, or track 1, level, it becomes a stillborn concept — suggestive but ultimately fruitless. Yet given its apparent importance, the concept calls for a theoretic, or track 2, level of meaning and development. According to Rosenhouse, however, track 2 has no place for the concept. What a bizarre, unscientific attitude. 


Consider Davies from The Fifth Miracle (1999, p. 112): “Living organisms are mysterious not for their complexity per se, but for their tightly specified complexity.” Or consider Richard Dawkins in The Blind Watchmaker (1986, pp. 15–16): “We were looking for a precise way to express what we mean when we refer to something as complicated. We were trying to put a finger on what it is that humans and moles and earthworms and airliners and watches have in common with each other, but not with blancmange, or Mont Blanc, or the moon. The answer we have arrived at is that complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by random chance alone.” How can any scientist who takes such remarks seriously be content to leave specified complexity at a track 1 level?


You’re Welcome, Rosenhouse

Frankly, Rosenhouse should thank me for taking specified complexity from a track 1 concept and putting it on solid footing as a track 2 concept, clarifying what was vague and fuzzy in the pronouncements of Orgel and others about specified complexity, thereby empowering specified complexity to become a precise tool for scientific inquiry. But I suspect in waiting for such thanks, I would be waiting for the occurrence of a very small probability event. And who in their right mind does that? Well, Darwinists for one. But I’m not a Darwinist.


Next, “Evolution With and Without Multiple Simultaneous Changes.”


Editor’s note: This review is cross-posted with permission of the author from BillDembski.com.

Yet more on what 'unbelievers' need to believe.

 More on Self-Replicating Machines

Granville Sewell


In a post earlier this month, I outlined “Three Realities Chance Can’t Explain That Intelligent Design Can.” The post showed some of the problems with materialist explanations for how the four fundamental, unintelligent forces of physics alone could have rearranged the fundamental particles of physics on Earth into computers and science texts and smart phones. I drew a comparison to self-replicating machines:


[I]magine that we did somehow manage to design, say, a fleet of cars with fully automated car-building factories inside, able to produce new cars — and not just normal new cars, but new cars with fully automated car-building factories inside them. Who could seriously believe that if we left these cars alone for a long time, the accumulation of duplication errors made as they reproduced themselves would result in anything other than devolution, much less that those errors could eventually be organized by selective forces into more advanced automobile models?


A More Careful Look

But I don’t think this makes sufficiently clear what a difficult task it would be to create truly self-replicating cars. So let’s look at this more carefully. We know how to build a simple Ford Model T car. Now let’s build a factory inside this car, so that it can produce Model T cars automatically. We’ll call the new car, with the Model T factory inside, a “Model U.” A car with an entire automobile factory inside, which never requires any human intervention, is far beyond our current technology, but it doesn’t seem impossible that future generations might be able to build a Model U. 


Of course, the Model U cars are not self-replicators, because they can only construct simple Model T’s. So let’s add more technology to this car so that it can build Model U’s, that is, Model T’s with car-building factories inside. This new “Model V” car, with a fully automated factory inside capable of producing Model U’s (which are themselves far beyond our current technology), would be unthinkably complex. But is this new Model V now a self-replicator? No, because it only builds the much simpler Model U. The Model V species will become extinct after two generations, because their children will be Model U’s, and their grandchildren will be infertile Model T’s! 


So Back to Work 

Each time we add technology to this car, to move it closer to the goal of reproduction, we only move the goalposts, because now we have a more complicated car to reproduce. It seems that the new models would grow exponentially in complexity, and one begins to wonder if it is even theoretically possible to create self-replicating machines. Yet we see such machines all around us in the living world. You and I are two examples. And here we have ignored the very difficult question of where these cars get the metals and rubber and other raw materials they need to supply their factories.


Of course, materialists will say that evolution didn’t create advanced self-replicating machines directly. Instead, it only took a first simple self-replicator and gradually evolved it into more and more advanced self-replicators. But besides the fact that human engineers still have no idea how to create any “simple” self-replicating machine, the point is, evolutionists are attributing to natural causes the ability to create things much more advanced than self-replicating cars (for example, self-replicating humans), which seem impossible, or virtually impossible, to design. I conceded in my earlier post (and in my video “A Summary of the Evidence for Intelligent Design”) that human engineers might someday construct a self-replicating machine. But even if they do, that will not show that life could have arisen through natural processes. It will only have shown that it could have arisen through design. 


Design by Duplication Errors

Anyway, as I wrote there, even if we could create self-replicating cars, who could seriously believe that the duplication errors made as they reproduced themselves could ever lead to major advances, and eventually even to intelligent, conscious machines? Surely an unimaginably complex machine like a self-replicating car could only be damaged by such errors, even when filtered through natural selection. We are so used to seeing animals and plants reproduce themselves with minimal degradation from generation to generation that we don’t realize how astonishing this really is. We really have no idea how living things are able to pass their current complex structures on to their descendants, much less how they could evolve even more complex structures.


When mathematicians have a simple, clear proof of a theorem, and a long, complicated counterargument, full of unproven assumptions and questionable arguments, we accept the simple proof, even before we find the errors in the complicated counterargument. The argument for intelligent design could not be simpler or clearer: unintelligent forces alone cannot rearrange atoms into computers and airplanes and nuclear power plants and smart phones, and any attempt to explain how they can must fail somewhere because they obviously can’t. Since many scientists are not impressed by such simple arguments, my post was an attempt to point out some of the errors in the materialist’s three-step explanation for how they could. And to say that all three steps are full of unproven assumptions and questionable arguments is quite an understatement. 


At the least, it should now be clear that while science may be able to explain everything that has happened on other planets by appealing only to the unintelligent forces of nature, trying to explain the origin and evolution of life on Earth is a much more difficult problem, and intelligent design should at least be counted among the views that are allowed to be heard. Indeed, this is already starting to happen.