
Thursday, 1 August 2024

Against nincsnevem ad pluribus XX.

Nincs: Mary and the Second Eve

The argument that Mary would have recognized Jesus as the Second Adam if she were truly the Second Eve misunderstands both the role of typology and Mary’s faith journey. Mary’s understanding of her Son’s mission grew over time, just as the apostles’ understanding of Jesus' messianic role developed gradually. Her initial responses, including her participation in the purification rite and her concern for Jesus' well-being, reflect her humanity and deep maternal love, not a lack of understanding of Jesus' identity.

Moreover, being the Second Eve does not imply immediate and complete knowledge of all theological implications. It indicates Mary’s unique role in salvation history as the one who, through her obedience, reversed the disobedience of Eve. Mary’s sinlessness is rooted in her unique role in God’s plan of salvation, which is why the Church venerates her as the sinless Mother of God.

Me: Failure to acknowledge JEHOVAH'S Prophet is a sin.

John ch. 8:24 NIV: "I told you that you would die in your sins; if you do not believe that I am he, you will indeed die in your sins."

John ch. 15:24 NIV: "If I had not done among them the works no one else did, they would not be guilty of sin. As it is, they have seen, and yet they have hated both me and my Father."

Mary obviously repented of her lapse into faithlessness, but that is different from claiming that she was born, like Eve, free from inherited sin.

By way of a reminder:

Mark ch. 3:21 NIV: "When his family heard about this, they went to take charge of him, for they said, “He is out of his mind.”"

Verse 33 indicates that Jesus' mother was, for a time, numbered among those not heeding the meaning of the many powerful signs JEHOVAH was performing through him. This would have been something she would need to repent of and seek forgiveness for, which she evidently did.

Against nincsnevem ad pluribus XIX

Nincs: Jesus’ Words About His Family (Matthew 12:49-50)

The passage where Jesus speaks about His disciples as His mother and brothers is often misunderstood. Jesus is not rejecting or diminishing Mary’s role; rather, He is expanding the concept of family to include all who do the will of God. This does not contradict the veneration of Mary but highlights that spiritual kinship is based on obedience to God. Mary, as the first and most perfect disciple of Jesus, who fully did the will of God, is the ultimate model of this spiritual family. Far from being an "odd" thing to say, Jesus’ words emphasize the importance of spiritual relationships in the Kingdom of God.

Me: Again, what I said was that if Jesus wanted his mother to be venerated as the only other sinless woman who ever lived besides Eve, and as co-redemptrix and queen of heaven, this statement putting her on the same level as any other sinful believer seems odd. The Bible makes Jesus' separation from sinners and his roles as the perfect priest and prophet quite clear.

Matthew ch. 12:48-50 NKJV: "But He answered and said to the one who told Him, “Who is My mother and who are My brothers?” 49And He stretched out His hand toward His disciples and said, “Here are My mother and My brothers! 50For whoever does the will of My Father in heaven is My brother and sister and mother.”"

Not only that: after his glorification he made no declarations through her at all, preferring to use sinful men as teachers when he had a perfect, sinless woman in their midst. Quite puzzling.

Against nincsnevem ad pluribus XVIII

Nincs: The Law and Inherited Sin

The claim that the law only applies to those with inherited sin and thus would not apply to Mary if she were sinless misunderstands the nature of the Mosaic Law. The law was a comprehensive system that applied to all Israelites, regardless of individual sinfulness. Jesus Himself, who was without sin, was circumcised (Luke 2:21) and participated in other rites prescribed by the law. His submission to the law was not an indication of sin but a demonstration of obedience to God's commandments. Similarly, Mary's participation in the purification rite (Luke 2:22-24) was an act of obedience and humility, not an indication of sin.

Me: As tends to be the case, I have to give my actual position in lieu of Mr. Nevem's strawman. What I said was that sin offerings would only apply to those conscious of sins. Offerings made for the sins of the nation as a whole would be one thing, but the law is clear: a personal sin offering would only be made by one who has committed sins. This can be seen from the fact that confession of one's sins was involved in the ritual.

See Leviticus ch. 5.


Nincs: Moreover, the law’s purpose was not solely to address personal sin but to regulate the covenantal relationship between God and His people. Mary, being fully Jewish and living under the Mosaic Law, would naturally observe its requirements, even if she was preserved from original sin. This does not negate her sinlessness but shows her faithful adherence to the law.

Me: The offering of whole burnt offerings would not be necessary for someone who is not conscious of sin under the law. If a person kept the moral requirements of the law, no sin offering would be necessary from such a person. The constant offering of whole burnt offerings demonstrated the ineffectiveness of the law; that is why Christ could say that something greater than the temple is here.

Hebrews ch. 9:12 NIV: "He did not enter by means of the blood of goats and calves; but he entered the Most Holy Place once for all by his own blood, thus obtaining eternal redemption."


Against nincsnevem ad pluribus XVII.

 

Nincs: Your argument appears to misunderstand both the nature of God’s foreknowledge and the theological position held by many Christian traditions regarding free will and predestination.

Me: My argument is that if God has foreknown the infinite future exhaustively from the infinite past, the only LOGICAL conclusion is that the infinite future has been foredetermined from the infinite past, and that if JEHOVAH is the true first and sole cause of this exhaustively foredetermined universal creation, it is he who has either actively or passively exhaustively foredetermined the future. So before JEHOVAH creates the unrepentant murderer, he knows from the infinite past that he will murder unrepentantly; of course this event is foredetermined, and nothing else but this can happen. But the only way it can happen is if JEHOVAH chooses to give the remorseless murderer a body and a mind and access to the weapons and knowledge the remorseless murderer would need to commit his crime. JEHOVAH can choose to deny the remorseless murderer what he needs to commit his crime; he chose otherwise.

Either JEHOVAH is incapable of creating our hypothetical remorseless murderer in a way that gives him a genuine moral choice, or he can create him with a genuine choice regarding his moral development and chose not to. So this is about basic logic. As you will see, Mr. Nincsnevem responds in typical fashion, not by demonstrating any inconsistency in my logic but by parroting the party line in his typically circular style of argumentation.



Nincs: Firstly, it’s important to clarify that God’s foreknowledge and human foreknowledge are indeed different, but this does not negate the possibility of God knowing the future without determining it. Christian theology traditionally teaches that God, being outside of time, sees all events—past, present, and future—simultaneously. This does not mean that God determines every action that will occur; rather, it means that God knows the choices that free creatures will make. God’s knowledge is comprehensive and perfect, but it does not override or negate human free will.

Me: JEHOVAH is the first and most consequential cause of all events in the creation; he is no mere passive observer of the future. He creates the future, actively or passively. Any event JEHOVAH foreknows he has the power to actively or passively alter, so he can foreknow several outcomes to the same chain of events. The example of the sun's rising in the east was made; JEHOVAH can easily arrange to have the sun rise in the west or in any other direction, or not at all (Habakkuk ch. 3:11).

To argue, then, that JEHOVAH does not have the might and wisdom to make certain aspects of the future undetermined, or to alter his own previous determinations of said future, is to misunderstand the scriptures' true position regarding JEHOVAH'S sovereignty over his creation.

Amos ch. 7:1-6 NIV: "This is what the Sovereign LORD showed me: He was preparing swarms of locusts after the king’s share had been harvested and just as the late crops were coming up. 2When they had stripped the land clean, I cried out, “Sovereign LORD, forgive! How can Jacob survive? He is so small!”

3So the LORD relented.

“This will not happen,” the LORD said.

4This is what the Sovereign LORD showed me: The Sovereign LORD was calling for judgment by fire; it dried up the great deep and devoured the land. 5Then I cried out, “Sovereign LORD, I beg you, stop! How can Jacob survive? He is so small!”

6So the LORD relented.

“This will not happen either,” the Sovereign LORD said."

JEHOVAH, as the source of all the energy and information in the creation, causes the future. It is not an exhaustively predetermined future; rather, he uses his sovereign power to safeguard our free will.


Nincs: You mention that because the future is not fully foredetermined, it cannot be precisely foreknown. However, this claim assumes that for something to be known, it must be determined. This is not the case, especially when considering the nature of God. God's knowledge is not contingent on causality in the way human knowledge is. God’s knowledge is complete and eternal, meaning that He knows the outcomes of all free decisions without needing to cause them. This understanding preserves both the sovereignty of God and the genuine freedom of human beings.

Me: It's basic logic: every contingent event or occurrence has a chain of causes that precedes it, JEHOVAH being the first cause and the source of all the information and energy in the creation. So if an outcome is inevitable, the chain of causes leading up to it logically has already begun. If it was inevitable from prior to the creation, then the creator himself must be included in that chain of causes, he being the first cause, and bearing in mind that he has the power to alter outcomes.

The only way to preserve human freedom is for morally consequential outcomes not to be inevitable from eternity.



Nincs: Regarding your assertion that Christendom posits an "apology for free will" that is "really no free will at all," this seems to be a misunderstanding of what Christian theologians, especially within Catholic and many Protestant traditions, actually teach. The doctrine of predestination, as understood in these traditions, does not imply absolute determinism. For example, the Catholic Church teaches that God predestines no one to damnation and that human beings are fully capable of making free choices that have real moral significance. The Council of Trent, for example, affirmed the reality of human free will while also upholding the necessity of divine grace.

Me: A vain attempt to reconcile what is logically irreconcilable. If an outcome is inevitable prior to my existence, logically I have no choice; the chain of causes that rendered the outcome inevitable preceded my existence. JEHOVAH would have chosen not to mitigate the chain of causes that made the outcome inevitable, and thus would be culpable, as the first cause, in my failure.


Nincs: Your critique of "absolute predeterminism" as absurd is addressing a straw man rather than the actual beliefs of most Christian traditions. Absolute predeterminism, where all events are caused by God in a way that negates human freedom, is not a position held by mainstream Christianity. Instead, what is often taught is that God's foreknowledge includes a divine plan where human freedom plays a real and vital role. This is not absurdity but a sophisticated understanding of how divine omniscience and human freedom coexist.

Me: It is your lack of rationality that is the problem. Whatever outcomes JEHOVAH foreknows are inevitable. Every outcome has a chain of causes preceding it; if an outcome is inevitable, the chain of causes leading up to it has already begun. That is just the way causality and contingency work. JEHOVAH has the power to mitigate secondary causes and alter outcomes; if he chooses not to, then he bears some responsibility for the outcome. So your claim that the totality of the future is foreknown is the same as saying that every decision and outcome is inevitable, which is the same as saying that there is no free will.

Nincs: Finally, your point about true moral excellence being impossible under the doctrine of predeterminism is based on a misunderstanding. In Christian thought, moral excellence is possible precisely because humans have the freedom to choose between good and evil, even within the scope of God’s omniscient knowledge. God's foreknowledge does not constrain human freedom; rather, it encompasses it, allowing for the genuine exercise of free will and moral responsibility.

Me: JEHOVAH'S omniscience is not the issue. He has the might and the right to create a universe that leaves morally consequential choices undetermined, hence not inevitable. If our decisions were inevitable from eternity, there is no free will, and all of your circular arguments will not make it otherwise.


Nincs: In summary, the notion that God’s foreknowledge negates human free will is a misconception. Traditional Christian doctrine affirms that God’s omniscience and human freedom are compatible, and that God’s foreknowledge does not equate to predetermination. This balance between divine knowledge and human free will is what allows for true moral agency and the potential for moral excellence.

Me: It is simply logical that once an outcome is accurately foreknown, it is inevitable from that point. If this inevitability precedes the existence of the agent, the agent cannot rightly be held responsible for the outcome. Basic logic.
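The shape of this inference can be rendered schematically (my own notation, added only to make the claimed logic explicit; it is not the blogger's or Nincsnevem's formalism):

\[
K_{t_0}(p) \;\wedge\; \big(K(p) \rightarrow p\big) \;\Rightarrow\; p \text{ is settled from } t_0 \text{ onward}
\]

Here \( K_{t_0}(p) \) says that outcome \( p \) is accurately foreknown at time \( t_0 \), and \( K(p) \rightarrow p \) says that accurate foreknowledge entails truth. On the blogger's argument, if \( t_0 \) precedes the agent's existence, the agent never had access to any alternative to \( p \); the dispute with Nincsnevem is over whether the step from foreknowledge to settledness is legitimate.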

Sunday, 28 July 2024

More on the search for a third way.

 

On striking the Cambrian jackpot.

 Fossil Friday: Cambrian Explosion Bingo Continues


This Fossil Friday features the weird critter Hallucigenia from the Cambrian Burgess Shale, as we discuss the most recent contribution to the Cambrian Explosion bingo game, which is what I prefer to call the popular exercise in wild speculation and unsubstantiated guesswork among evolutionary biologists to explain the abrupt appearance of animal body plans in the Cambrian Explosion about 535-515 million years ago. Among the many different causes that have been proposed as alleged drivers of the Cambrian Explosion, an increase in oxygen levels represents one of the most popular alternatives (e.g., see Zhang & Cui 2016, He et al. 2019). It was claimed that “oxygen linked with the boom and bust of early animal evolution” (University of Oxford 2019). Even as recently as two years ago, scientists found that there were “pulses of atmosphere oxygenation during the Cambrian radiation of animals” and “oxygen availability was a crucial factor in accelerating the radiation of marine animals” (Jiang et al. 2022).

This just in

Now, a new study by Stockey et al. (2024), just published in the journal Nature Geoscience, did not “find evidence for the wholesale oxygenation of Earth’s oceans in the late Neoproterozoic era”, but instead found just a “moderate long-term increase”. The authors suggest that this small increase “provides some of the most direct evidence for potential physiological drivers of the Cambrian radiation.” Consequently, the press releases and media headlines cheered that scientists found that “a rapid burst of evolution 540 million years ago could have been caused by a small increase in oxygen” (Castañón 2024), and a “small change in Earth’s oxygen levels may have sparked huge evolutionary leap” (University of Southampton 2024), and “life only needed a small amount of oxygen to explode” (Watson 2024). This is quite surprising, as the latter author explicitly admitted that “it’s long been thought that a monumental surge in oxygen fuelled the Cambrian explosion” (Watson 2024). However, suddenly it allegedly was not a monumental surge but just a small long-term increase that made this miracle happen. Hey, no big deal, they just creatively changed the narrative.

Who Cares About Yesterday’s Petty News?

I bet that if scientists were to discover next month that there was no oxygenation in the Cambrian but the exact opposite, they would quickly reverse their just-so-story and claim that it was lower oxygen levels that caused the Cambrian Explosion. If you doubt that distinguished scientists could or would ever be that sloppy and cunning, just look at what they did with the event that preceded the Cambrian Explosion in the Ediacaran. Spoiler alert: they did exactly that, which I already discussed at length in a previous article (Bechly 2023a) and podcast (Bechly 2023b). Check it out if you want to dig deeper down this rabbit hole.

References

Bechly G 2023a. Fossil Friday: Seventy Years of Textbook Wisdom on Origin of Multicellular Life Turns Out to Be Wrong. Evolution News September 1, 2023. https://evolutionnews.org/2023/09/fossil-friday-seventy-years-of-textbook-wisdom-on-the-origin-of-multicellular-life-turns-out-to-be-wrong/
Bechly G 2023b. Günter Bechly on Why Seventy Years of Textbook Wisdom Was Wrong. ID the Future episode 1813. https://idthefuture.com/1813/
Castañón L 2024. Revisiting the Cambrian explosion’s spark. Stanford Report July 2, 2024. https://news.stanford.edu/stories/2024/07/revisiting-the-cambrian-explosion-s-spark
He T, Zhu M, Mills BJW et al. 2019. Possible links between extreme oxygen perturbations and the Cambrian radiation of animals. Nature Geoscience 12, 468–474. DOI: https://doi.org/10.1038/s41561-019-0357-z
Jiang L, Zhao M, Shen A, Huang L, Chen D & Cai C 2022. Pulses of atmosphere oxygenation during the Cambrian radiation of animals. Earth and Planetary Science Letters 590: 117565. DOI: https://doi.org/10.1016/j.epsl.2022.117565
Stockey RG, Cole DB, Farrell UC et al. 2024. Sustained increases in atmospheric oxygen and marine productivity in the Neoproterozoic and Palaeozoic eras. Nature Geoscience. DOI: https://doi.org/10.1038/s41561-024-01479-1
University of Oxford 2019. Oxygen linked with the boom and bust of early animal evolution. University of Oxford News & Events May 13, 2019. https://www.ox.ac.uk/news/2019-05-13-oxygen-linked-boom-and-bust-early-animal-evolution
University of Southampton 2024. Small change in Earth’s oxygen levels may have sparked huge evolutionary leap. Phys.org July 2, 2024. https://phys.org/news/2024-07-small-earth-oxygen-huge-evolutionary.html
Watson C 2024. Life Only Needed A Small Amount of Oxygen to Explode, Scientists Find. ScienceAlert July 7, 2024. https://www.sciencealert.com/life-only-needed-a-small-amount-of-oxygen-to-explode-scientists-find
Zhang X & Cui L 2016. Oxygen Requirements for the Cambrian Explosion. Journal of Earth Science 27(2), 187–195. DOI: https://doi.org/10.1007/s12583-016-0690-8

Thursday, 25 July 2024

Mutation is no friend of Darwinism?

 On Developmental Gene Regulatory Networks, the Scientific Literature Supports Stephen Meyer


In a post yesterday we saw that Stephen Meyer wrote extensively about evo-devo in Darwin’s Doubt, effectively answering biologist Gerd Müller’s preferred evolutionary model for how new body plans arise. If I could boil down Meyer’s arguments to three points, they would be:

1. Evo-devo focuses on the role of special early-acting mutations in developmental processes to generate new body plans, but over 100 years of mutagenesis experiments show that mutations in genes regulating development are invariably deleterious (or in some cases have only trivial effects). Meyer summarizes: “This generates a dilemma: major changes are not viable; viable changes are not major. In neither case do the kinds of mutation that actually occur produce viable major changes of the kind necessary to build new body plans.”
2. We see these deleterious effects particularly in experiments on developmental gene regulatory networks (dGRNs), complex networks of gene interaction which regulate the expression of genes early in development as an organism’s body plan begins to grow. After reviewing experimental work on dGRNs, Meyer finds that, “These dGRNs cannot vary without causing catastrophic effects to the organism.”
3. These experimental results on dGRNs have profound implications for organismal evolution, because if changes to dGRNs are lethal to an embryo, how can they be modified to explain how new body plans evolve? Meyer writes in the book: “The system of gene regulation that controls animal-body-plan development is exquisitely integrated, so that significant alterations in these gene regulatory networks inevitably damage or destroy the developing animal. But given this, how could a new animal body plan, and the new dGRNs necessary to produce it, ever evolve gradually via mutation and selection from a preexisting body plan and set of dGRNs?” (Darwin’s Doubt, p. 269)
Gerd Müller is aware that Meyer has talked about dGRNs, because Meyer mentioned them (albeit briefly) on the Joe Rogan podcast last year, and Müller even made a comment in response, saying: “he [Meyer] mentions gene regulatory networks but stops short of making the obvious argument that mutations in these gene regulatory networks you don’t need so many random mutations to create an important change of the phenotype.” But if Meyer is correct, then random mutations in dGRNs are lethal to the embryo.

The Literature Supports Meyer’s Arguments

Meyer was justified in making these arguments. The work of the late Caltech developmental biologist Eric Davidson, an eminent expert in the field of evo-devo, shows that mutations in genes that affect body plan characteristics (which tend to be expressed early, as the body plan is being put in place) don’t lead to new body plans — they lead to dead embryos. Meyer wrote about Davidson in Darwin’s Doubt, as we saw yesterday. But it’s worth providing some more expansive background in Davidson’s own words:

[T]here is a high penalty to change [in dGRNs], in that interference with the dynamic expression of any one of the genes causes the collapse of expression of all, and the total loss from the system of their contributions to the regulatory state … there is always an observable consequence if a dGRN subcircuit is interrupted. Since these consequences are always catastrophically bad, flexibility is minimal, and since the subcircuits are all interconnected, the whole network partakes of the quality that there is only one way for things to work. And indeed the embryos of each species develop in only one way.

[…]

A few years ago remarkably conserved subcircuits, termed network “kernels” that operate high in the dGRN hierarchy were discovered. … the kernels similarly canalize downstream developmental process in each member of each given clade.

[…]

Evolutionary inflexibility due to highly conserved canalizing dGRN kernels

As discussed above these subcircuits operate at upper levels of dGRN hierarchy so as to affect characters of the body plan that are definitive for upper level taxa, i.e., they control the early stages of just the types of developmental process of which the invariance per taxon constitutes our problem. Since they preclude developmental alternatives, they may act to “booleanize” the evolutionary selective process: either body part specification works the way it is supposed to or the animal fails to generate the body part and does not exist.

ERIC DAVIDSON, “EVOLUTIONARY BIOSCIENCE AS REGULATORY SYSTEMS BIOLOGY,” DEVELOPMENTAL BIOLOGY, 357:35-40 (2011)

Or this:

Interference with expression of any [genes in the dGRN kernel] by mutation or experimental manipulation has severe effects on the phase of development that they initiate. This accentuates the selective conservation of the whole subcircuit, on pain of developmental catastrophe.

DAVIDSON AND ERWIN. “AN INTEGRATED VIEW OF PRECAMBRIAN EUMETAZOAN EVOLUTION,” COLD SPRING HARBOR SYMPOSIA ON QUANTITATIVE BIOLOGY, 74: 1-16 (2010)

This intolerance of body plan-affecting dGRNs to fundamental perturbations indicates that they could not have evolved by undirected mutations. Many coordinated mutations would be needed to convert one functional dGRN that generates a particular body plan into a different dGRN that generates a different body plan. 

The classic rejoinder

Meyer is also well aware of what evo-devo proponents say in response to these arguments and he has a ready rebuttal. The classic rejoinder from evo-devo proponents is to propose that perhaps in the past somehow dGRNs were more “labile” or “flexible” and easier to evolve. Indeed, Davidson acknowledges that something must have been different when body plans first evolved, which removed this resistance to change:

Deconstructing the evolutionary process by which stem group body plans were stepwise formulated will require us to traverse the conceptual pathway to dGRN elegance, beginning where no modern dGRN provides a model. The basic control features of the initial dGRNs of the Precambrian and early Cambrian must have differed in fundamental respects from those now being unraveled in our laboratories. The earliest ones were likely hierarchically shallow rather than deep, so that in the beginning adaptive selection could operate in a larger portion of their linkages. Furthermore, we can deduce that the outputs of their sub-circuits must have been polyfunctional rather than finely divided and functionally dedicated, as in modern crown group dGRNs….

ERIC DAVIDSON, “EVOLUTIONARY BIOSCIENCE AS REGULATORY SYSTEMS BIOLOGY,” DEVELOPMENTAL BIOLOGY, FEBRUARY 2011

Davidson says there that “no modern dGRN provides a model” for how new dGRNs might evolve. Therefore he believes that, when new body plans arose, dGRNs “must have differed in fundamental respects from those now being unraveled in our laboratories.” Davidson is not the only evolutionary scientist to use this form of argument. Paleontologist Charles Marshall said much the same in 2013 when responding in the journal Science to Meyer’s arguments in Darwin’s Doubt regarding dGRNs. Marshall argued that although Meyer is correct to observe that “manipulation of such networks is typically lethal,” this is not a problem for evolution because “GRNs at the time of the emergence of the phyla were not so encumbered.” Indeed, Müller’s intermediary, Forrest Valkai, makes a similar (though less eloquently stated) argument in the video attacking Meyer.

But how does Marshall or Valkai or anyone know that dGRNs were so different in the past? Similarly, how does Davidson know that early dGRNs “must have differed in fundamental respects” from those we observe? Do they know this from experiments and direct observation, or from evolutionary theory itself? The answer is evolution; more precisely, the common descent of the animals. If the animal phyla shared a common ancestor that was itself a developing species, dGRNs of the past must have been more “flexible” or “labile” — totally unlike what we observe today.

But would such a flexible or labile dGRN actually produce a viable animal? We don’t know, because we have no observational evidence. As such, to salvage evo-devo models of evolution from the contrary experimental data, Davidson and Marshall reverse the normal method of the historical sciences. Present-day observations are no longer the key to the past. Rather, a theoretical model dictates what must have happened in the past — even if that model contradicts what we know from the evidence. Meyer put it this way in the Epilogue to Darwin’s Doubt:

By ignoring this evidence, Marshall and other defenders of evolutionary theory reverse the epistemological priority of the historical scientific method as pioneered by Charles Lyell, Charles Darwin, and others. Rather than treating our present experimentally based knowledge as the key to evaluating the plausibility of theories about the past, Marshall uses an evolutionary assumption about what must have happened in the past (transmutation) to justify disregarding experimental observations of what does, and does not, occur in biological systems. The requirements of evolutionary doctrine thus trump our observations about how nature and living organisms actually behave. What we know best from observation takes a backseat to prior beliefs about how life must have arisen. 

What we do know from experience, however, is that large increases in functionally specified information — especially information expressed in an alphabetic or digital form — are always produced by conscious and rational agents. So the best explanation for the explosion of information necessary to produce the Cambrian animals (whether that explosion occurred during or before the Cambrian period) remains intelligent design.

DARWIN’S DOUBT, P. 448

What this means is that although evo-devo has some interesting ideas, evolutionary biology currently lacks a model that is validated by experimental evidence showing that dGRNs — and hence body plans — are mutable and capable of evolving from one form to another.  

More Evo-Devo Problems

But Meyer isn’t done recounting problems with evo-devo-based models of evolution. In Darwin’s Doubt he offers additional reasons why mutations in Hox genes can’t build new body structures:

Third, Hox genes only provide information for building proteins that function as switches that turn other genes on and off. The genes that they regulate contain information for building proteins that form the parts of other structures and organs. The Hox genes themselves, however, do not contain information for building these structural parts. In other words, mutations in Hox genes do not have all the genetic information necessary to generate new tissues, organs, or body plans. 

Nevertheless, Schwartz argues that biologists can explain complex structures such as the eye just by invoking Hox mutations alone. He asserts that “[t]here are homeobox genes for eye formation and that when one of them, the Rx gene in particular, is activated in the right place and at the right time, an individual has an eye.” He also thinks that mutations in Hox genes help arrange organs to form body plans.

In a review of Schwartz’s book, Eörs Szathmáry finds Schwartz’s reasoning deficient. He too notes that Hox genes don’t code for the proteins out of which body parts are made. It follows, he insists, that mutations in Hox genes cannot by themselves build new body parts or body plans. As he explains, “Schwartz ignores the fact that homeobox genes are selector genes. They can do nothing if the genes regulated by them are not there.” Though Schwartz says he has “marveled” at “the importance of homeobox genes in helping us to understand the basics of evolutionary change,” Szathmáry doubts that mutations in these genes have much creative power. After asking whether Schwartz succeeds in explaining the origin of new forms of life by appealing to mutations in Hox genes, Szathmáry concludes, “I’m afraid that, in general, he does not.”

Nor, of course, do Hox genes possess the epigenetic information necessary for body-plan formation. Indeed, even in the best of cases mutations in Hox genes still only alter genes. Mutations in Hox genes can only generate new genetic information in DNA. They do not, and cannot, generate epigenetic information.

Instead, epigenetic information and structures actually determine the function of many Hox genes, and not the reverse. This can be seen when the same Hox gene (as determined by nucleotide sequence homology) regulates the development of different anatomical features found in different phyla. For instance, in arthropods the Hox gene Distal-less is required for the normal development of jointed arthropod legs. But in vertebrates a homologous gene (e.g., the Dlx gene in mice) builds a different kind of (nonhomologous) leg. Another homologue of the Distal-less gene in echinoderms regulates the development of tube feet and spines — anatomical features classically thought not to be homologous to arthropod limbs, nor to limbs of tetrapods. In each case, the Distal-less homologues play different roles determined by the higher-level organismal context. And since mutations in Hox genes do not alter higher-level epigenetic contexts, they cannot explain the origin of the novel epigenetic information and structure that establishes the context and that is necessary to building a new animal body plan.

DARWIN’S DOUBT, PP. 320-321

What we see from the passage quoted above, as well as from Meyer’s extensive discussion of evo-devo in Darwin’s Doubt, is that Gerd Müller is far from correct in comparing Meyer to a “gene reductionist” who thinks that only mutations in genes are needed to evolve new types of organisms. In contrast, Meyer is well aware of non-neo-Darwinian models of organismal evolution like evo-devo, which focus on the role of mutations in changing regulatory networks of genes to generate radically new body plans. Meyer also explores how non-genetic or epigenetic information is necessary to generate new body plans, and how evolutionary models seem unable to produce this information as well. 

In sum, Meyer has offered extensive arguments about evo-devo generally, and in particular about dGRNs. He shows that dGRNs can’t change significantly without development shutting down, which means that there is a problem with finding mutations that can suddenly produce large-scale changes to radically alter the body plan of an organism. Since all evolution requires change, this problem applies across the board to all evolutionary claims about the origin of new body plans, not just neo-Darwinian models. The Joe Rogan podcast last year only afforded the opportunity to scratch the surface, but it’s clear that Meyer has a lot to say and that Müller really has not responded to him in any relevant detail.

An Invitation to Dialogue

As a fifth and final point, we would love to hear what Professor Müller thinks of all of this. Clearly, Meyer has invested a lot of time and energy into addressing Müller’s field of evo-devo and has developed detailed, careful arguments about the viability of evo-devo-based models. I would welcome a response from Professor Müller. However, I think that our website, Evolution News, would be a better place for dialogue than Professor Müller’s responding indirectly through an angry YouTuber’s channel, with all the personal attacks and other distasteful antics that go with that particular venue. Müller deserves better.

We would be happy to host such a dialogue here at Evolution News, and I therefore invite Professor Müller to send us a robust response to Stephen Meyer’s arguments about evo-devo models of evolution. In that way, real progress can be made in this conversation.

Monday, 22 July 2024

Rehabilitating Darwinism?

 

Yet more uncommon descent?

 Fossil Friday: Saber-Toothed Tigers Originated Multiple Times


This Fossil Friday features the saber-toothed tiger Smilodon populator from the Pleistocene of Brazil. Just like dinosaurs and mammoths, saber-toothed tigers are among the most iconic prehistoric animals. The La Brea Tar Pits near Los Angeles are just one of the famous fossil localities where well-preserved skeletons of saber-toothed tigers have been found.

A recent study by scientists from the University of Liège (2024) looked into the origin of this peculiar dental trait. Actually, unlike many other instances of biological novelty, saber teeth did not appear abruptly and do not represent a morphological discontinuity, but rather show a continuum of sizes and shapes of the canine teeth in cat-like carnivores. This is not so surprising, as the character of saber teeth is not a complex one, but rather just an instance of allometric differential change of size and shape, which might well be within the realm of gradualist Darwinian mechanisms.

Signs of Design

However, there is another phenomenon concerning saber-toothed predators that may suggest design. The comparison of the different saber-toothed cat-like animals shows that they do not form a clade of most closely related forms, so that “sabertooth morphology stands as a classic case of convergence, manifesting recurrently across various vertebrate groups” (Chatar et al. 2024). These include the Miocene Barbourofelidae, the Nimravidae, which lived from the Eocene to the Miocene in Eurasia and North America, and of course the saber-toothed tiger subfamily Machairodontinae among true felids, which had a wide distribution from the Miocene to the Pleistocene. But saber-tooth morphology is also found in gorgonopsid mammal-like reptiles from the Permian period, the marsupial Thylacosmilidae from the Neogene of South America, as well as the machaeroidine Oxyaenidae from the Eocene of Asia and North America.

Convergence is a ubiquitous phenomenon in biology and a genuine problem for Darwinism, which calls for an alternative explanation. This has been recognized by some mainstream scientists, such as the famous paleontologist Simon Conway Morris, who has authored several books (Conway Morris 2003, 2015) and scientific articles on this subject. The reuse of the same design in different, independent instantiations is a typical design pattern in engineering: the same idea is applied wherever it makes sense.

What Causal Mechanism?

To me this suggests that the evolution of saber-toothed cats may well have been gradual, as suggested by the new study (Chatar et al. 2024; also see University of Liège 2024), but was a teleological, goal-directed process rather than an unguided neo-Darwinian mechanism of natural selection acting on random variations. This is also supported by the fact that the new study showed that “rapid evolutionary rates emerge as key components in the development of a sabertooth morphology in multiple clades” (Chatar et al. 2024) and that “saber-toothed species seemed to show faster changes to skull and jaw shapes earlier in their evolutionary history than species with shorter canines — essentially a ‘recipe’ for evolving into saber-toothed feline-like predators” (Chatar quoted in Smaglik 2024). What causal mechanism accelerated the evolutionary speed? According to evolutionists (Chatar et al. 2024), “a rapid burst at the beginning of the nimravid evolutionary history” just happened, and likewise “machairodontine felids rapidly moved away from the most common cat-like morphology.” No explanations offered, but no intelligence allowed either. Maybe scientists should stop shutting their eyes and ears to what nature wants to tell them.

References

Chatar N, Michaud M, Tamagnini D & Fischer V 2024. “Evolutionary patterns of cat-like carnivorans unveil drivers of the sabertooth morphology.” Current Biology 34(11), 2460–2473. DOI: https://doi.org/10.1016/j.cub.2024.04.055
Conway Morris S 2003. Life’s Solution: Inevitable Humans in a Lonely Universe. Cambridge: Cambridge University Press.
Conway Morris S 2015. The Runes of Evolution: How the Universe Became Self-Aware. Templeton Press, West Conshohocken (PA), 528 pp.
Smaglik P 2024. “Saber Teeth are as Mysterious Evolutionarily as They are Iconic Visually.” Discover Magazine May 17, 2024. https://www.discovermagazine.com/the-sciences/saber-teeth-are-as-mysterious-evolutionarily-as-they-are-iconic-visually
University of Liège 2024. “How saber-toothed tigers acquired their long upper canine teeth.” Phys.org May 16, 2024. https://phys.org/news/2024-05-saber-toothed-tigers-upper-canine.html

Sunday, 21 July 2024

The search for straight answers from trinitarians continues.

Matthew ch. 4:10 NIV: "Jesus said to him, “Away from me, Satan! For it is written: ‘Worship the LORD your God, and serve him ONLY.’”"

Would rendering sacred service to Jesus alone be sufficient to be in compliance with this command?

If so, would serving either the Father or spirit then be a violation of this command?

Would serving either the Son or Father or Spirit alone be sufficient to be compliant with this instruction?

If so, having begun serving a particular member of the Trinity, would switching allegiance to another member be considered a violation of Matthew ch. 4:10?

Would serving the Trinity as a whole alone be enough to obey this command?

If so, would serving any member of the Trinity be a violation of this command?

The technology of water vs. Darwin

The Properties of Water Point to Intelligent Design


In a previous article, I provided an overview of the remarkable coincidences that allow photosynthesis (a process required for the existence of advanced life) to take place. The final example I discussed concerned the transparency of water, facilitating the penetration of visual light through the aqueous cytoplasm of the cell to access the chloroplasts. There are, however, a plethora of other properties of water that appear to be uniquely fit to support life. Here, I shall survey a few of these.

Less Dense in its Solid Form

Unlike almost all other substances, water expands and becomes less dense in its solid form than it is in its liquid form. Ice has an open structure that is sustained by the hydrogen bonds between water molecules. If ice behaved like almost all other substances (a notable exception being the metal gallium, which also expands on freezing), it would sink to the bottom and the oceans would freeze from the bottom up, leading to much of our planet being permanently encased in ice — since the ice beneath the water would be shielded from the warmth of the sun’s rays. Since ice expands upon freezing, however, it insulates the water beneath the surface, keeping it in its liquid form. This property of water is essential to complex life, both marine and terrestrial.

Dissolving Minerals

Water is also a nearly universal solvent, and this property is critical to its role in dissolving minerals from the rocks. Indeed, almost all known chemicals dissolve in water to at least some extent. The solubility of carbon dioxide in water and its reaction with water to yield carbonic acid also promotes chemical reactions with these minerals, increasing their solubility.

Water also has an extremely high surface tension (second only to mercury of any common fluid). As water is drawn into fissures (because of its high surface tension) and expands upon freezing, the surrounding rocks are split open, thereby conferring a greater surface area for chemical weathering.

The Hydrological Cycle

For life on land to thrive, the dissolved minerals also must be deposited on land, which is made possible by the hydrological cycle whereby the water from the oceans evaporates into the atmosphere and returns to the ground as rain or snow. The hydrological cycle is itself made possible by water’s existence in three states (solid, liquid, and gas) in the range of ambient temperatures at the earth’s surface. This ability to exist in three different states at the ambient conditions at the earth’s surface is unique among all known substances. Were it not for this unique property of water, the land masses of our planet would exist as a barren desert. Michael Denton remarks concerning this remarkable property: “the delivery of water to the land is carried out by and depends upon the properties of water itself. Contrast this with our artifactual designs, where key commodities such as clothes or gasoline must be delivered by extraneous delivery systems such as trucks and trains. Gasoline cannot deliver itself to gas stations nor clothes to clothing stores. But water, by its own intrinsic properties, delivers itself to the land via the hydrological cycle.”1

Ideal for the Circulatory System

Various properties of water also make it an ideal medium for the circulatory system of complex organisms like ourselves. Concerning water’s supreme quality as a solvent, the early 20th-century physiologist Lawrence Henderson remarked, “It cannot be doubted that if the vehicle of the blood were other than water, the dissolved substances would be greatly restricted in variety and in quantity, nor that such restriction must needs be accompanied by a corresponding restriction of life processes.”2

Another characteristic of water is that its viscosity is one of the lowest of any known fluid. The pressure that is needed to pump a fluid increases proportionally with its viscosity. Therefore, if the viscosity of water were significantly increased, it would become prohibitively difficult to pump the blood through the circulatory system. Denton notes that “the head of pressure at the arterial end of a human capillary is thirty-five mm Hg, which is considerable (about one-third that of the systolic pressure in the aorta). This relatively high pressure is necessary to force the blood through the capillaries. This would have to be increased massively if the viscosity of water were several times higher, and is self-evidently impossible and incommensurate with any sort of biological pump.”3 Given that approximately 10 percent of the body’s resting energy is spent on powering the circulatory system, increasing the viscosity of water — to that of olive oil, for example — would present an insurmountable energetic challenge. The viscosity of a fluid is also inversely proportional to its diffusion rate, and so increasing the viscosity of water would have a significant impact on the rate of diffusion from capillaries to the cells of the body.
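Denton's point about pressure scaling with viscosity tracks a standard physics result, the Hagen–Poiseuille relation for laminar flow through a cylindrical vessel. The equation below is offered as an illustrative sketch of that proportionality (a textbook formula, not one taken from Denton's text):

\[
\Delta P \;=\; \frac{8\,\mu\,L\,Q}{\pi r^{4}}
\]

Here \( \Delta P \) is the pressure drop along a vessel of length \( L \) and radius \( r \), \( Q \) is the volumetric flow rate, and \( \mu \) is the fluid's dynamic viscosity. With \( L \), \( Q \), and \( r \) held fixed, the required pressure grows linearly with \( \mu \): a fluid several times more viscous than water would demand several times the pumping pressure, which is exactly the energetic problem described above.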

Specific Heat Capacity, and Evaporative Cooling

Water, furthermore, has one of the highest specific heat capacities of any known fluid. By serving to retard the cooling rate, this property conserves water in its liquid form when it comes into contact with air that is below freezing temperature. Another remarkable feature of water is its evaporative cooling effect. As water evaporates from an object’s surface, the molecules with more kinetic energy escape as a gas, whereas those with lower kinetic energy remain in liquid form. This serves to reduce the surface temperature. The evaporative cooling effect of water is in fact higher than that of any other known molecular liquid — i.e., compounds composed of two or more types of atoms. This characteristic of water is particularly important for warm-blooded organisms when the external temperature is warmer than their core body temperature and thus the excess heat cannot be radiated out into the environment. Instead, excess heat is lost through the evaporative cooling effect of water, maximized by numerous sweat glands on the skin surface.

An Overwhelming Case

For a much more detailed treatment of this subject, I refer readers to Michael Denton’s book The Wonder of Water: Water’s Profound Fitness for Life on Earth and Mankind.

As the number of examples of the fine-tuning of nature for advanced life mounts, it becomes increasingly difficult to deny what Fred Hoyle called a “common sense interpretation of the facts,” namely, “that a superintellect has monkeyed with physics, as well as with chemistry and biology, and that there are no blind forces worth speaking about in nature.”4 The evidence that our universe was designed with life in mind also raises the intrinsic plausibility (i.e., the prior probability) of intelligent design as an explanation of biological systems.

Notes

1. Michael Denton, The Miracle of Man: The Fine Tuning of Nature for Human Existence (Discovery Institute Press, 2022), 34.
2. Lawrence J. Henderson, The Fitness of the Environment: An Enquiry into the Biological Significance of the Properties of Matter (Macmillan, 1913), 116.
3. Michael Denton, The Wonder of Water: Water’s Profound Fitness for Life on Earth and Mankind (Discovery Institute Press, 2017), 161-162.
4. Fred Hoyle, “The Universe: Past and Present Reflections,” Engineering and Science, November 1981, 8–12.


Saturday, 20 July 2024

More on why you need to get ready to welcome your AI overlords.

 

There's science then there's Science?

 

Life's beginning just keeps getting less and less simple.

 Study Finds Life’s Origin “Required a Surprisingly Short Interval of Geologic Time”


An article at ScienceAlert reports, “Gobsmacking Study Finds Life on Earth Emerged 4.2 Billion Years Ago.” They write, “By studying the genomes of organisms that are alive today, scientists have determined that the last universal common ancestor (LUCA), the first organism that spawned all the life that exists today on Earth, emerged as early as 4.2 billion years ago.” The article then offers an intriguing point about the rapidity with which life appeared on Earth:

Earth, for context, is around 4.5 billion years old. That means life first emerged when the planet was still practically a newborn.

The technical paper in Nature Ecology and Evolution notes that the authors arrived at such an early date for life on Earth using not fossil evidence but molecular clock techniques. The claim that life existed on Earth 4.2 billion years ago (also written as “4.2 Ga”) is consistent with some geological evidence (see below), but life at such an early stage is certainly not expected. Some will surely claim that it’s impossible because the heavy bombardment period, which frequently saw the Earth sterilized by impacts, had not yet concluded. Here’s some of the best early fossil evidence of life on Earth (Ma means “millions of years ago”):

Potential filamentous microfossils from Canada: >3750 – 4280 Ma (Papineau et al., 2022)
Microfossils from Canada: >3770 Ma (Dodd et al., 2017)
δ13C — Excess light carbon: 3.7 Ga. (Rosing, 1999, Ohtomo et al., 2014)
Stromatolites from Greenland: ~3700 Ma (Nutman et al., 2016)
Stromatolites from Western Australia: 3480 Ma (Van Kranendonk et al. 2008, Walter et al., 1980)
As you can see, most of the early fossil evidence of life on Earth is significantly younger than 4.2 Ga, but the possibility of life at 4.2 Ga is allowed by one study. Despite this potential consistency with some fossil evidence, there are multiple reasons to be skeptical of the article’s methods. 

Genetic and Phenotypic Traits

First, it infers the genetic and phenotypic traits of LUCA by assuming that biological similarity always results from common ancestry — and never from common design. This dubious logic is seen in the opening statement of the technical paper, which reads: “The common ancestry of all extant cellular life is evidenced by the universal genetic code, machinery for protein synthesis, shared chirality of the almost-universal set of 20 amino acids and use of ATP as a common energy currency.” It’s true that all life uses those components (although the genetic code is not exactly universal), but this does not provide special evidence for common ancestry, because the commonality of these similar features could be explained by common design due to their functional utility. After all, the optimization of the genetic code to minimize the effects of mutations upon amino acid sequences has been cited as potential evidence for intelligent design — showing that there could be good reasons for a designer to re-use the standard genetic code across many organisms.

Second, there are fundamental components of life that show great differences across different types of organisms. For example, the mechanisms of DNA replication and cell division in prokaryotes and eukaryotes are highly distinct. Ribosomes in prokaryotes and eukaryotes have fundamental differences, as one paper explains: “Structures of the bacterial ribosome have provided a framework for understanding universal mechanisms of protein synthesis. However, the eukaryotic ribosome is much larger than it is in bacteria, and its activity is fundamentally different in many key ways.” Many other examples could be given.

Third, the paper uses molecular clock methods to date the timing of LUCA, and molecular clock techniques are problematic for many reasons: they’re highly assumption-dependent and notoriously variant, unreliable, and controversial.
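To see concretely why such dates are assumption-dependent, consider the simplest strict molecular clock, in which a divergence time is estimated as T = D / (2r) from a genetic distance D and an assumed substitution rate r. The sketch below uses purely hypothetical numbers (none are taken from the Nature Ecology and Evolution paper) to show how a modest change in the assumed rate moves the inferred date by well over a billion years:

```python
# A minimal strict-clock sketch with hypothetical numbers: the inferred
# divergence time T = D / (2 * r) is only as good as the assumed rate r.

D = 1.6  # hypothetical genetic distance (substitutions per site) between two lineages

for r in (0.20, 0.25, 0.30):  # assumed rates, substitutions per site per billion years
    T = D / (2 * r)           # inferred divergence time in billions of years (Ga)
    print(f"assumed rate {r:.2f} -> inferred divergence ~{T:.2f} Ga")
```

Real studies use far more elaborate relaxed-clock models, but calibration assumptions play the same pivotal role in the final date.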

Intriguing Implications

All that said, it’s certainly not impossible that life was already present on Earth at 4.2 Ga. And if it were true it would have intriguing implications. As the study concludes:

The result is a picture of a cellular organism that was prokaryote grade rather than progenotic and that probably existed as a component of an ecosystem, using the WLP for acetogenic growth and carbon fixation. … How evolution proceeded from the origin of life to early communities at the time of LUCA remains an open question, but the inferred age of LUCA (~4.2 Ga) compared with the origin of the Earth and Moon suggests that the process required a surprisingly short interval of geologic time.

This suggests that not only did the origin of life occur very soon after the Earth formed, but life also diversified into a prokaryotic cellular form very soon.

The notion that life appeared on Earth shortly after it became habitable is not new. In the past, experts have said just that. For example:

Stephen Jay Gould: “[W]e are left with very little time between the development of suitable conditions for life on the earth’s surface and the origin of life.” (“An Early Start,” Natural History 87 (February, 1978))
Cyril Ponnamperuma: “[W]e are now thinking, in geochemical terms, of instant life…” (Quoted in Fred Hoyle and Chandra Wickramasinghe, Evolution from Space (New York, NY: Simon & Schuster, 1981))

Widespread Life in the Universe?

I don’t think Gould or Ponnamperuma would have anticipated life as early as 4.2 Ga. If such a timeframe is correct, however, it is extraordinary indeed. The ScienceAlert article also gets this point, stating, “This implies that it takes relatively little time for a full ecosystem to emerge … It also demonstrates just how quickly an ecosystem was established on early Earth. This suggests that life may be flourishing on Earth-like biospheres elsewhere in the Universe.” The last point — their punchline about astrobiology and the existence of life elsewhere — of course assumes that life on Earth originated naturally in the first place. It also seems to further assume that, under the right conditions, life originates easily. If it has sprung up early and easily on multiple other planets, according to this naturalist way of thinking, shouldn’t it have sprung up multiple times on Earth, too? And yet universal common ancestry denies that this is so. To all appearances, that’s a conundrum for the naturalist.

But a single origin of terrestrial life has not been established by this study. The most that has been demonstrated is that life appeared early in Earth’s history. Given the difficulties surrounding a natural origin of life, a better inference might be to take this evidence of life’s rapid appearance as evidence that it did NOT arise naturally and required intelligent design.

A bottomless pit?

 MDs Support Expanding Assisted Suicide Beyond the Terminally Ill


The myth that legal assisted suicide is about terminal illness is becoming harder to swallow. Evidence can be found in a recent survey of doctors, published in the Journal of Cutaneous Oncology, which asked this question: “In addition to adults with terminal illnesses, [which] other groups of patients” should be eligible for MAID?

The answers are disturbing. From the survey:

Adults with intractable psychiatric conditions: 30 percent
Children with terminal conditions: 45 percent
Adults with intractable chronic pain: 55 percent
Adults with late stage dementia: 70 percent
Adults in persistent vegetative state: 80 percent
Majorities of doctors surveyed answered that they would be willing to be present when the deed is done. Here’s the question: “If it were available (or is available), what is your willingness to be present when patients took MAID drugs?” Again, disturbing results, with 61 percent answering either probably or definitely yes:

Definitely not: 6 percent
Probably not: 33 percent
Probably yes: 39 percent
Definitely yes: 22 percent
That’s only a hop, skip, and a jump to willingness to do the deed. And no doctors would definitely refuse to “refer for MAID.”

A Terrifying Survey

This survey should terrify anyone who believes in Hippocratic medical values. And it illustrates the impact that the constant boosting of assisted suicide in the media and popular culture, utilitarian bioethics training in medicine, and the corrupting cultural paradigm shift in which many believe that eliminating suffering should be the prime directive of society have had on the professional sector that should be most protective of vulnerable patients.

It should be noted that the push is already on to expand eligibility beyond the dying. That is the plan, you know, just as happened in other countries. This survey is but one example of the softening of the ground. Another is the California bill, filed but not passed this year, that would have opened doctor-prescribed death way beyond the terminally ill.

Indeed, that is the debate we should be having — whether euthanasia should be available to broad categories of suffering people — not the phony-baloney dishonest pretense that assisted suicide/euthanasia is meant over the long haul to be a tightly restricted practice reserved for the dying.

There will be consequences. If this drift continues, we will one dark day end up like Canada, where more than 15,000 patients were killed by doctors in 2023, in a milieu in which cancer patients who couldn’t obtain proper oncology care were euthanized, and where people with disabilities report being pressured by medical personnel and social workers into “choosing” death.


Friday, 19 July 2024

File under "well said" CIX.

"Most people use statistics like a drunk man uses a lamppost; more for support than illumination "

Andrew Lang

Thursday, 18 July 2024

On separating the wheat from the chaff re: science.

 Three Genuine Tells of Junk Science


Capital Research Center reports on non-profit organizations. Managing editor Jon Rodeback offers three tells of junk science: “settled,” “consensus,” and “scientific study.” On that last topic, he notes,
                                 While scientific studies are essential to scientific research, a single study by itself is far from definitive, and not all scientific studies are created equal. The findings of a single study need to be tested and retested, no matter how promising they seem. In fact, the most promising findings probably need more rigorous testing to ensure that a bias toward a desired outcome did not influence the research.

In addition, the more a study or report is entangled with politics and government funding, the less scientific and less reliable its results will likely be. I have personally witnessed how a government report was vetted by the various offices in a federal department and offending passages were removed or rewritten so as to not cast a particular federal office in a bad light—usually not to correct any inaccuracy in the report, but to obscure inconvenient data and conclusions. 

JON RODEBACK, “THREE TELLS OF JUNK SCIENCE,” CAPITAL RESEARCH CENTER, JUNE 26, 2024

This comes to us hard on the heels of philosopher Massimo Pigliucci’s effort to identify “pseudoscience,” in which he suggested that the solution is to rely on him and on sites he approves of. That’s certainly not an answer for everyone.

How Desired Results Are Obtained

Perhaps the main thing to see here is that the many current problems in peer-reviewed science in recent years have diminished the reasons we should simply trust it. Business prof Gary Smith wrote late last year about the methods used to achieve a desired — but not necessarily natural — result:

One consequence of the pressure to publish is the temptation researchers have to p-hack or HARK. P-hacking occurs when a researcher tortures the data in order to support a desired conclusion. For example, a researcher might look at subsets of the data, discard inconvenient data, or try different model specifications until the desired results are obtained and deemed statistically significant — and therefore publishable. HARKing (Hypothesizing After the Results are Known) occurs when a researcher looks for statistical patterns in a set of data without any well-defined purpose in mind beyond trying to find a pattern that is statistically significant — and therefore publishable. P-hacking and HARKing both lead to the publication of dodgy results that are exposed as dodgy when they are tested with fresh data. This failure to replicate undermines the credibility of published research (and the value of publications in assessing scientific accomplishments).
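
To see how easily p-hacking can manufacture “significance” from pure noise, here is a minimal, hypothetical sketch in Python (the data, subgroups, and test are all invented for illustration; nothing here comes from Smith’s article):

    # Hypothetical sketch of p-hacking: slice pure noise into enough
    # arbitrary subgroups and some comparison will often cross p < 0.05
    # by chance alone, even though no real effect exists anywhere.
    import random
    from statistics import mean, stdev
    from math import sqrt, erf

    random.seed(1)

    def p_two_sample(a, b):
        # Crude two-sample z-test p-value (adequate for a toy demo).
        se = sqrt(stdev(a)**2 / len(a) + stdev(b)**2 / len(b))
        z = (mean(a) - mean(b)) / se
        return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

    # "Treatment" and "control" are drawn from the SAME distribution.
    treatment = [random.gauss(0, 1) for _ in range(200)]
    control = [random.gauss(0, 1) for _ in range(200)]

    # The honest test on the full data is (usually) unimpressive...
    print("full-data p =", round(p_two_sample(treatment, control), 3))

    # ...but hunting through arbitrary subgroups can still turn up a
    # "publishable" result, which is the essence of p-hacking.
    for k in range(20):
        p = p_two_sample(treatment[k*10:(k+1)*10], control[k*10:(k+1)*10])
        if p < 0.05:
            print(f"subgroup {k}: p = {p:.3f}  <-- 'significant' noise")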

Even worse than p-hacking and HARKing is complete fabrication. Why torture data or rummage through large databases when you can simply make stuff up? An extreme example is SCIgen, a random-word generation program created by three MIT graduate students. Hundreds of papers written entirely or in part by SCIgen have been published in reputable journals that claim they only publish papers that pass rigorous peer review.

More sophisticated cons are the “editing services” (aka, “paper mills”) that some researchers use to buy publishable papers or to buy co-authorship on publishable papers. These fake papers are not created by randomly generated words but they may be entirely fabricated or else plagiarized, in whole or in part, from other papers. It has been estimated that thousands of such papers have been published; it is known that hundreds have been retracted after being identified by research-integrity sleuths.

If anything, Smith notes, the problems will likely get worse because chatbots (large language models or LLMs), introduced only about two years ago, can generate rubbish research papers with far greater efficiency and quality than the methods people complained about five years ago.

It’s not an opinion that science is becoming less trustworthy; it’s an everyday fact, if we go by what we are told about the floods of computer-written junk papers and the problems Smith identifies. And the public’s deepening loss of trust in science is also a fact.

Grounds for Hope

Historically, interest and investment in science — and reliance on it — have waxed and waned. They have increased when people see an actual benefit. But if, over time, “studies show” mainly amounts to a publicity campaign for some project approved by powerful interests, with no practical benefits to recommend it, we can expect public trust to decline further. And blaming the public for not believing what’s not believable is hardly a useful response.

Take heart! There have been periods when science stagnated, then underwent major reforms — usually when it was in a rut. Like now, for many disciplines.



Toward the ultimate design filter?

 Building a Better Definition of Intelligent Design


1. Previous Efforts to Define Intelligent Design

Existing definitions of intelligent design, whatever their imperfections, have been good enough to inspire a growing body of scientific research and philosophical reflection on the role of intelligence in nature. Under these definitions, intelligent design has become an active and fruitful area of inquiry. Even so, I want in this article to lay out why existing definitions, all of which overlap and are roughly congruent, need improvement. And finally, I want to offer a new and improved definition of intelligent design. 

None of the existing definitions of intelligent design is wrong per se. They hit the target, yet they are not squarely in the bullseye. For my colleagues in the field and me, these definitions haven’t slowed us down. As it is, the best science happens when scientists reflect deeply about problems and creatively invent new ways to think about and resolve them. Such advances occur without scientists obsessively referring to what some textbook definition says about their field of inquiry.

Definitional change in science is par for the course: As paradigms shift because of scientific advances, textbook definitions change. Compare heat, which earlier had been defined as a weightless, invisible fluid (the caloric theory) and subsequently was defined as the kinetic energy of molecules (the kinetic theory). Even as paradigms are refined rather than replaced, key definitions get refined. Thus the definition of Bohr’s atom gave way to the definition of Dirac’s atom.

In any case, given how controversial it is to look for and claim to find evidence of intelligent activity in nature, especially in regard to cosmological and biological origins, proponents of intelligent design cannot evade spelling out what exactly they mean by intelligent design. Can it rightly be regarded as a scientific theory? Is it coherent, hanging together logically? Is it a religious doctrine, or does it merely have religious implications? Should it have traction in education, the public square, and the courtroom? Such questions are widely asked, and their answers depend on how we define intelligent design.

There is currently no single standard definition of intelligent design held by everyone in the ID community, though all the definitions are quite close in meaning. The one that until recently I used in my public lectures and that served as my working definition of intelligent design is this: Intelligent design is the study of patterns in nature that are best explained as the product of intelligence. Intelligent design is thus about identifying certain special types of patterns or features in nature and showing how they provide compelling scientific evidence of intelligent causation. 

But note, intelligent design is not just about finding evidence of actual design in nature. Once design is confirmed to exist in nature, a raft of research questions confront the design theorist. I list some of these here to underscore that existing definitions of intelligent design have sufficed to spur a full-fledged ID research program. Note that all these questions can be posed and make perfect sense without getting into the intention or identity of any putative designer. In fact, these questions make perfectly good scientific sense even if one adopts a fictionalist view of any designer. But of course there’s nothing here either to stop a realist view of any designer. Here, then, is a partial list of such research questions: 

Classification — What types of natural systems exhibit compelling evidence for design?
Functionality — What are a designed object’s main and subsidiary functions?
Constraints — What are the constraints within which a designed object functions well and outside of which it breaks?
Evolvability — How much can a designed system evolve with and without externally applied information?
Transmission — How does an object’s design trace back historically? What is the causal narrative by which the object arose? 
Information tracking — What are the informational inputs by which a designed object is produced? What is its ultimate informational source?
Information density — How densely is information nested in a designed object?
Construction — How was a designed object actually constructed?
Reverse-engineering — Absent knowledge of how a designed object was actually constructed, how could it have been constructed?
Perturbation — How has the original design been modified and what factors have modified it?
Restoration — Once perturbed, how can the original design be recovered?
Optimality — In what way is the design optimal?

I formulated what until recently was my working definition of intelligent design around 2012. Yet my colleagues in the intelligent-design movement and I had been using variations of it since the 1990s. Speaking for myself, around 2000 I would define intelligent design as the study of signs of intelligence. And earlier still I emphasized the empirical detectability of design (via such signs or patterns). 

Rather than chronologically list the various definitions of intelligent design that my colleagues and I have proposed over the years, let me simply underscore some of the key themes in existing definitions. The following list is meant to be representative, not exhaustive. 

Empirical detectability. Design in nature is not a vague intuition about whether something looks to be the product of intelligence. We can know it when we see it.
Triggering features. Certain features reliably trigger design inferences, providing evidence for design. Such features are often described in terms of patterns, information, signs, or signatures.
Irreducible complexity. Introduced by Michael Behe, this has become a key triggering feature, identifying design in a complex system that consists of numerous interrelated parts, each necessary for the system’s primary function.
Specified complexity. Elaborated by me, this has likewise become a key triggering feature, identifying design when a highly improbable event (complexity) matches a recognizable pattern (specification).
Origins vs. operations science. Intelligent design distinguishes origins science, which answers historical questions about how features in nature originated, from operations science, which characterizes ongoing processes observable now.
Inference to the best explanation. Inferring design presupposes a playing field of competing explanations, determining whether design is indeed the best explanation on such grounds as empirical support and causal adequacy.
Separation of causes. Intelligent design separates unintelligent or blind causes on the one hand, typically described in terms of chance and necessity, from intelligent or purposive causes on the other, described in terms of design. 

We may therefore think of my 2012 definition of intelligent design (i.e., the study of patterns in nature best explained as the product of intelligence) as a shorthand for all of the above. Crucial here for design theorists is that compelling empirical evidence could in principle exist for design in nature. Design theorists therefore regard intelligent design as a scientific rather than a religious form of inquiry.

Whatever improvements may be made to this definition of intelligent design, the definition as it stands now is in the right ballpark. For most practical purposes, this definition characterizes how we detect intelligence in nature, namely, through intelligence-signifying patterns. Archeology, the search for extraterrestrial intelligence, forensic science, and many other special sciences accord with this definition. 

2. The Blind-Watchmaker Dialectic

Nevertheless, the current standard definition of intelligent design is problematic. Two main problems confront it. First, it provides no guidance or rationale for explaining phenomena that don’t exhibit intelligence-signifying patterns. Second, it fails to distinguish intelligence and design, treating them as synonymous, even though distinguishing between the two is important and needs to figure into any definition of intelligent design.

To see what’s at stake with this first point, consider the following quote from Richard Dawkins’s The Blind Watchmaker: “Biology is the study of complicated things that give the appearance of having been designed for a purpose.” Dawkins might be happy to concede that in disciplines outside biology, patterns may exist that decisively confirm intelligence. Yet Dawkins is convinced that no such patterns exist in biology, except as might have been put there by human or alien bioengineers.

But leaving aside such human or alien bioengineering, Dawkins rejects that any real design is present in biology. Dawkins’ watchmaker is blind, incapable of real design. Natural selection, operating without intelligent guidance, can for him produce all the features of biological systems that give them the appearance of design — but apart from any actual design.

The current standard definition of intelligent design, when confronted with Dawkins’ blind watchmaker argument, thus leads to a problematic dialectic. This dialectic pits intelligent or teleological causes against unintelligent or blind causes. The intelligent causes produce patterns best explained as the product of intelligence. The unintelligent causes, such as natural selection acting on random variations, produce patterns that appear to be designed although their explanation requires no appeal to actual intelligence. 

This dialectic is problematic because it suggests a natural world in which intelligent and unintelligent causes mix indiscriminately, with no principled way of teasing them apart. Our natural tendency is thus to give precedence to one type of cause over the other. Dawkins, for instance, in holding to an atheistic and materialistic view of nature, will see the fundamental causes operating in nature as unintelligent, with intelligence, as it arises in nature, being merely a byproduct of unintelligent causes that produce beings like ourselves through a blind evolutionary process. 

Intelligence is thus for Dawkins downstream of unintelligence, and so there simply cannot be any patterns in nature that point to an intelligence not ultimately reducible to blind natural forces. Intelligent design, especially insofar as it claims to find real intelligent causation behind biology, is thus an impossibility for Dawkins because the only intelligences that exist for him are evolved intelligences, and intelligent design claims to discover unevolved intelligences.

Theism, obviously, provides the most ready alternative to Dawkins’s atheism. For our purposes, we can construe theism quite broadly to include (and I’m not being exhaustive here) pantheism, panentheism, deism, and traditional theism (as in Judaism, Christianity, and Islam). With theism of any stripe, there are no blind or unintelligent causes per se. Even with process theology and open theism, in which chance processes as exhibited in quantum indeterminacy are beyond the full knowledge and control even of God, there is still the sense that God is using randomness in the service of teleology, and so even in these theologies where God is less than omniscient and omnipotent, there are no fully blind causes to speak of.

3. Primary and Secondary Causation

As representative of the theistic response to Dawkins’s blind-watchmaker dialectic, which pits intelligent against unintelligent causes, I want to focus on the Aristotelian-Thomistic distinction between primary and secondary causation. Primary causation denotes the direct action of God, who is seen as the ultimate source of all being and activity in the universe. God, as the first cause, initiates and sustains all existence and causal powers. Primary causation is rooted in Aristotle’s notion of the unmoved mover and further developed by Thomas Aquinas, for whom God’s will and power are the fundamental cause of everything that happens. Divine causation is not just a one-time occurrence but an ongoing, continuous act of creation and sustenance, ensuring that all things remain in existence and function according to their nature.

Secondary causation, on the other hand, denotes the activity of created beings, which operate within the order established by the primary cause. In this view, creatures are genuine causes of effects in the world, but their causative power is derived from and dependent on God’s primary causation. For instance, in lighting a fire, a person acts as a secondary cause, while God’s primary causation ensures the existence and properties of both the person and the fire, as well as the underlying laws of nature that make this activity possible. This distinction allows for a coherent integration of divine omnipotence with the real efficacy of created agents.

The Aristotelian-Thomistic tradition maintains that while God is the ultimate cause of everything, secondary causes play a true and significant role within the divinely created order of the world. This view of secondary causation implies that all cause and effect in the world ultimately aligns with the divine will, in turn implying that there are no truly blind or unintelligent causes, in contradiction to the materialist atheist, who claims that ultimately there are only blind or unintelligent causes. 

Every action performed by secondary causes is therefore, within the Aristotelian-Thomistic tradition, part of God’s purposeful plan for creation. God’s omniscience and omnipotence extend to all creation details, making even seemingly random events part of a divine plan. But note: secondary causation, though instituted by God, is, unlike primary causation, limited in what it can accomplish. Jesus walking on water, turning water into wine, and resurrecting from the dead are beyond the reach of secondary causation. The limits to secondary causation thus make room for miracles, where God’s primary causation intervenes to surpass the capabilities of secondary causation.

Even though this brief overview of the Aristotelian-Thomistic understanding of primary and secondary causation may seem like a digression, it underscores the need for additional clarification in our standard definition of intelligent design. If an object or event exhibits a pattern that is best explained as the product of intelligence, what are we to make of objects or events that don’t exhibit such patterns? 

From an Aristotelian-Thomistic perspective, anything and everything exhibits the divine intelligence. Thus, it would seem that within this perspective, identifying patterns in nature that signify intelligence, as the standard definition of intelligent design would have it, is useless and misleading. The followers of Aristotle and Thomas already know that everything exhibits intelligence, and intelligent design thus seems to offer no additional insight.

Nonetheless, the idea of intelligence-signifying patterns, which is inherent in the current standard definition of intelligent design, has proven itself to have practical value. Did so-and-so die of natural causes or as a result of foul play? Did so-and-so write that essay unassisted or by plagiarizing? Do the marks on that rock result from wind and erosion or from the intentional carving of letters (as in the Rosetta Stone)? In such examples, the appeal to intelligence/design seems vastly stronger and more insistent in the one case than in the other. Even if neither primary nor secondary causes can capture this difference, it’s a difference that needs to be captured.

To elaborate on this point, consider SETI, the search for extraterrestrial intelligence. SETI researchers look for signs of intelligence from outer space. To date they have found no radio signals that exhibit intelligence-signifying patterns. But now imagine they do find such radio signals, such as technosignatures that can reasonably be ascribed only to technologically advanced civilizations. Most people would describe radio signals that fail to confirm SETI as random, those that do confirm it as designed. Yet even if in some ultimate sense intelligence lies behind both signals, there seems an important distinction to be made here between these two types of signals. Accordingly, if existing definitions of intelligent design fail to adequately capture this distinction, then we need a better definition.

4. Matter vs. Information

In distinguishing between the seemingly random and the clearly nonrandom (as in the examples just considered), Aristotle provides a way forward. He does so through two distinctions of his own, the one between matter and information, the other between nature and design. Let’s start with the first distinction. Matter is raw stuff that can take any number of shapes. Information is what gives shape to matter, fixing one shape to the exclusion of others. Both the words matter and information derive from Latin. Matter (from the Latin noun materia) initially referred to the raw timber used in building houses. Later it came to mean any raw stuff or material with the potential to assume different shapes, forms, or arrangements. Aristotle of course wrote in Greek, and his equivalent for matter was hylē (ὕλη). 

Information (from the Latin verb informare) means to give form or shape to something. Aristotle’s Greek equivalent was the noun morphē (μορφή), to denote form, and the verb morphoō (μορφόω), to denote the activity of forming, shaping, or molding, and thus of informing. Unlike passive or inert matter, which needs to be acted upon, information is active. Information acts on matter to give it its form, shape, arrangement, or structure. 

Note that I’m using terms like form, shape, and arrangement interchangeably. Aristotle would distinguish form, in the sense of substantial form or essence, from mere shape or arrangement. It’s enough for my purposes, however, that shape or arrangement be correlated with form in Aristotle’s sense. Thus, for marble to express the form (in Aristotle’s sense) of Michelangelo’s David, it must be precisely shaped or arranged.

The relation between matter, with its potential to assume any number of possible shapes, and information, with its restriction of possibilities to a narrow range of shapes, is fundamental to our understanding of the world. Certainly, this relation holds for all human artifacts. This is true not only for human artifacts composed of physical stuff (like marble statues of David), but also for human artifacts composed of more abstract stuff (like poetry and mathematics).

Indeed, the raw material for many human inventions consists not of physical stuff but of abstract stuff like alphabetic characters, musical notes, and numbers. For instance, the raw material for a Shakespearean sonnet consists of the twenty-six letters of the alphabet. Just as a statue of David is only potential in a slab of marble, so a Shakespearean sonnet is only potential in those twenty-six letters. It takes a Michelangelo to actualize the statue of David, and it takes a Shakespeare to arrange those twenty-six letters appropriately so that one of his sonnets emerges.

The relation between matter and information that we are describing here is old and was understood by the ancient Greeks, especially by the Stoics, who understood God as logos, the active principle that brings order to the cosmos. In any case, nothing said so far about the relation between matter and information is especially controversial. The world consists of raw material waiting to be suitably arranged. On the one hand, there’s matter, passive or inert stuff waiting to be arranged. On the other, there’s information, an active principle or agency that does the arranging. This distinction offers a perfectly straightforward and useful way of carving up experience and making sense of the world. Much of our knowledge of the world depends on understanding this relation between matter and information.

5. Nature vs. Design

In the relation between matter and information, the crucial question is how information gets into matter. For Aristotle, there were two ways to get information into matter: by nature and by design. In the examples considered in the last section, we focused on the activity of a designing intelligence (a sculptor or writer) informing or giving shape to certain raw materials (a slab of marble or letters of the alphabet). But designing intelligences are not the only causal powers capable of structuring matter and thereby imparting information. Nature, too, is capable of structuring matter and imparting information.

Consider the difference between raw pieces of wood and an acorn. Raw pieces of wood do not have the power to assemble themselves into a ship. For raw pieces of wood to form a ship requires a designer to draw up a blueprint and then arrange pieces of wood, in line with the blueprint, into a ship. But where is the designer that causes an acorn to form into a full-grown oak tree? There isn’t any. The acorn has the power to transform itself into an oak tree.

Nature and design therefore represent two different ways of producing information. Nature produces information internally. The acorn assumes the form it does through capacities internal to it — the acorn is a seed programmed to produce an oak tree. On the other hand, a ship assumes the form it does through capacities external to it — a designing intelligence imposes a suitable structure on pieces of wood to form a ship. 

Not only did Aristotle know about the distinction between information and matter, but he also knew about the distinction between design and nature. For him, design consists of capacities external to an object. Design brings about form with outside help. On the other hand, nature consists in powers internal to an object. Nature brings about form without outside help. Thus in Book XII of his Metaphysics Aristotle wrote, “Design is a principle of movement in something other than the thing moved; nature is a principle in the thing itself.” In Book II of his Physics Aristotle referred to design as completing “what nature cannot bring to a finish.” 

The Greek word here translated design is technē (τέχνη), from which we get our word technology. The corresponding Latin is ars/artis, from which we get our words artisan and artifact. In translations of Aristotle’s work, the English word most commonly used to translate technē is art in the sense of artifact. Design, art, and technē are thus synonyms. The essential idea behind these terms is that information is imparted to an object from outside the object, and that the material constituting the object, apart from that outside information, does not have the power to assume the form it does. Thus raw pieces of wood do not by themselves have the power to form a ship.

But what if raw pieces of wood did have such a power of self-organization? In Book II of his Physics Aristotle raised and answered that question: “If the ship-building art were in the wood, it would produce the same results by nature.” In other words, if raw pieces of wood had the capacity to form ships, we would say that ships come about by nature. 

The Greek word here translated “nature” is physis (φύσις), from which we get our word physics. The Indo-European root meaning behind physis is growth and development. Nature produces information not by imposing it from outside but by growing or developing informationally rich structures from within. The acorn is emblematic here. Unlike wood that needs to be fashioned by a designer to form a ship, acorns produce oak trees naturally — the acorn simply needs a suitable environment in which to grow.

In light of Aristotle’s distinction between nature and design, the central question that any science of intelligent design needs to resolve when attempting to explain some system in the natural world is therefore this: Is the system self-sufficient in the sense of possessing within itself all the resources needed (nature) to bring about the information-rich structures it exhibits, or does it also require some contribution from outside itself (design) to bring about those structures?

Aristotle claimed that the art of ship-building is not in the wood that constitutes the ship. We’ve seen that the art of sonnet-composing is not in the letters of the alphabet. Likewise, the art of statue-making is not in the stone out of which statues are made. Each of these cases requires a designer. A successful science of intelligent design would demonstrate that the art of building certain information-rich structures in nature (such as biological organisms) is not in the physical stuff that constitutes these structures but requires the input of information from outside the system.

6. The Connection Between Intelligence and Information

Up to now, we’ve only discussed the classical conception of information as developed by Aristotle. The modern conception of information overlaps with Aristotle’s, but it is better adapted to contemporary science and mathematics. Also, it comes without a full-blown metaphysics. The modern conception is drawn from Shannon’s communication theory and subsequent work on the mathematical theory of information. The key idea underlying this conception of information is the narrowing of possibilities. Specifically, the more that possibilities are narrowed down, the greater the information.

For instance, if I tell you I’m on planet earth, I haven’t conveyed any information because you already knew that (let’s leave aside space travel). If I tell you I’m in the United States, I’ve begun to narrow down where I am in the world. If I tell you I’m in Texas, I’ve narrowed down my location further. If I tell you I’m forty miles north of Dallas, I’ve narrowed my location down even further. As I keep narrowing down my location, I’m providing you with more and more information.

Information is therefore always exclusionary: the more possibilities are excluded, the greater the information provided. As philosopher Robert Stalnaker (Inquiry, p. 85) put it: “To learn something, to acquire information, is to rule out possibilities. To understand the information conveyed in a communication is to know what possibilities would be excluded by its truth.” I’m excluding much more of the world when I say I’m in Texas forty miles north of Dallas than when I say I’m merely in the United States. Accordingly, to say I’m in Texas north of Dallas conveys much more information than simply to say I’m in the United States.
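
To make the narrowing-of-possibilities arithmetic concrete, here is a minimal sketch in Python. The land areas are rough, illustrative figures only; the point is simply that information, measured in bits, grows as the remaining possibilities shrink:

    # Minimal sketch: information as the narrowing of possibilities,
    # measured in bits as log2(prior possibilities / remaining possibilities).
    from math import log2

    def bits(prior_area, remaining_area):
        return log2(prior_area / remaining_area)

    # Rough, illustrative areas in square kilometers.
    earth_land = 150_000_000
    usa = 9_800_000
    texas = 700_000
    north_of_dallas = 100  # a small stipulated area forty miles north of Dallas

    print(round(bits(earth_land, usa), 1))              # ~3.9 bits
    print(round(bits(earth_land, texas), 1))            # ~7.7 bits
    print(round(bits(earth_land, north_of_dallas), 1))  # ~20.5 bits

Each further narrowing excludes more of the world and so conveys more bits.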

The etymology of the word information captures this exclusionary understanding of information. We already discussed its etymology in section 4 on the Aristotelian relation between matter and form. To elaborate on it further, the word information derives from the Latin preposition in, meaning in or into, and the verb formare, meaning to give shape to. Information puts definite shape into something. But that means ruling out other shapes. Information even in its classical conception thus narrows down the shape in question. A completely unformed shmoo, such as Aristotle’s prime matter, is waiting in limbo to receive information. Only by being informed will it exhibit a definite structure.

Aristotle’s conception of information overlaps with but is also separate from the modern conception of information. Aristotle’s conception, as we saw in section 4, is tied to his theory of formal causation, in which information is understood as the cause that gives shape to matter and makes a material object what it is. In Aristotelian thought, the formal cause determines an object’s structure and properties, defining its essence.

For Aristotle, information was thus more than a narrowing of possibilities. Instead it was an intrinsic organizing principle that turns matter into a coherent and purposeful entity. Yet, the modern conception of information, though not wedded to Aristotle’s understanding of formal causation, is nonetheless consistent with it. Aristotelian information, by defining a thing’s essence, makes it this and not that. It is thus inherently exclusionary, which aligns with information in its contemporary sense as the narrowing down of possibilities.

Let’s next turn to intelligence. The fundamental intuition of information as narrowing down possibilities matches neatly with the concept of intelligence. The word intelligence derives from two Latin words: the preposition inter, meaning between, and the verb legere, meaning to choose. Intelligence thus, at its most fundamental, signifies the ability to choose between. But when a choice is made, some possibilities are actualized to the exclusion of others, implying a narrowing of possibilities. And so, an act of intelligence is also an act of information.

If we trace the etymology of intelligent back still further, the l-i-g that appears in it derives from the Indo-European root l-e-g. This root appears in the Greek verb lego, which by New Testament times meant to speak. Its original Indo-European meaning, however, was to lay, and from there to pick up and put together. Still later, it came to mean to choose and arrange words, and from there to speak. The root l-e-g has several variants, appearing as l-o-g in logos and as l-e-c in intellect and select. 

As a side note, this brief etymological study reveals that Darwin’s great coup was to coopt the term selection, previously associated with the conscious choice of purposive agents, and saddle it with the term natural. In the term natural selection, Darwin therefore intended to recover all the benefits of choice as traditionally conceived, and yet without requiring the services of an actual intelligence. Thus to this day we read such claims, as by Francisco Ayala, that Darwin’s greatest discovery was to give us “design without designer,” which Dawkins described as the appearance of design without actual design.

Darwinists, in coopting the term selection, obfuscate the idea of choice. Choice is a directed contingency that actualizes some possibilities to the exclusion of others in order to accomplish an end or purpose. A synonym for the word choice is decision, with the corresponding verb forms being choose and decide. The words decision and decide are likewise from the Latin, combining the preposition de, meaning down from, and the verb caedere, meaning to cut off or kill (compare our English word homicide). 

Decisions, in keeping with this etymology, raise up some possibilities by cutting down, or killing off, others. When you decide to marry one person, you cut off all the other people you might marry (assuming the marital relationship is one-to-one). An act of decision is therefore always a narrowing of possibilities. It is an informational act. But given the definition of intelligence as choosing between, it is also an intelligent act.

7. A New Information-Based Definition of Intelligent Design

The new definition I propose centers on systems, their capacities, and their informational outputs: intelligent design is the study of systems whose informational outputs are best explained by externally applied informational inputs arising from intelligence. An obvious naturalistic rejoinder is to expand the system under study until it coincides with its environment, so that the environment supplies whatever information would otherwise need to be externally inputted. All the necessary information is thus said to reside in the environment (whether front-loaded or self-generated). The environment thus becomes an unlimited source of information that dispenses with all need for design.

I call this maneuver of expanding a system so that it coincides with an informationally plenipotent environment the environmental fallacy. It is a fallacy because 

it illegitimately discounts the integrity of systems, which must be considered on their own terms and which may not be absorbed willy-nilly into larger supersystems simply to avoid the problem of design; and 
it simply presupposes that the environment always has sufficient informational resources to defeat design, rather than requiring that the environment’s actual internally generated informational resources be accurately assessed to determine whether they are in fact adequate to defeat design and, if not, to allow for a valid inference to design. 

The choice of system to analyze for evidence of design typically adheres to a Goldilocks principle: it needs to be not too big, and not too small, but just right, where just right means that the system allows for an accurate assessment of whether the information output in question is indeed internally generated or the result of externally applied, intelligently sourced information (design). The key types of systems in biology that give evidence of design in this sense are those that exhibit irreducible and specified complexity.

Capacities. A key term in this new definition of intelligent design is capacities. This term refers to the causal powers of systems to produce certain effects or outputs. Systems are able to do certain things but not others. An otherwise functional car with an internal combustion engine but without gas does not have the capacity to drive; with gas, it does. Aristotle understood capacities in terms of his distinction between potentiality and actuality. This distinction fit within his metaphysics for characterizing how entities undergo change. Yet for the sake of our present definition of intelligent design, we only need a conception of capacity that takes causal powers seriously. Aristotle certainly qualifies here, but other approaches do too, such as scientific realism. 

Philosopher of science Nancy Cartwright articulated a conception of capacities that is congenial to our newfound definition of intelligent design. She did this in her book Nature’s Capacities and Their Measurement (Oxford, 1989). There she contended that scientific laws and observed regularities are not merely descriptions of passive events but are underpinned by capacities that can manifest differently depending on context. Cartwright challenged the view that the laws of nature are universally applicable without exception, proposing instead that these laws describe tendencies that are actualized when the relevant capacities are triggered in the appropriate circumstances. For the purposes of this new definition of intelligent design, Cartwright’s view of capacities elucidates causal powers for systems and how systems interact to produce observed phenomena.

Chance and Probability. The terms chance and probability do not appear in this definition of intelligent design, but they are there implicitly. Capacities, understood as causal powers, can be described scientifically/mathematically in terms of chance and probability. Thus, to say that a system has the capacity to produce a given output is to say that the system, left to itself, will with high probability produce the output. Alternatively, to say that a system does not have the capacity to produce a given output is to say that the system, except with external input, will with low probability produce the output.

In such a probabilistic approach to capacities, chance then simply describes a system’s probabilistic behavior in producing given outputs. As such, chance says nothing about whether the underlying causal processes are teleological or ateleological. This approach to chance is compatible with Aristotle’s view that all causality is ultimately teleological (chance for him being the incidental collision of independent causal chains, all of which are teleological). But this approach to chance is also compatible with Jacques Monod’s view (in Chance and Necessity) that all causality is ultimately ateleological. Chance, as implied in this new definition of intelligent design, is then simply a non-prejudicial way of describing the probabilistic behavior of a system. 
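
As a minimal sketch of this probabilistic reading of capacities (the coin example and numbers are purely illustrative, anticipating the “magic” penny discussed below): a fair coin, left to itself, produces any particular specified 32-bit message with probability 2^-32, so a meaningful specified output lies far beyond the system’s capacity and would point to externally inputted information.

    # Minimal sketch: gauging a system's capacity probabilistically.
    # A system "has the capacity" for an output if, left to itself,
    # it produces that output with non-negligible probability.
    import random

    random.seed(0)

    # "CURE" spelled in ASCII bits (1 for heads, 0 for tails).
    target = "01000011010101010101001001000101"
    n = len(target)  # 32 flips

    # Exact probability that n fair flips match the target: 2^-n.
    print("exact probability:", 2.0 ** -n)  # ~2.3e-10

    # Monte Carlo confirmation: the coin, left to itself, essentially
    # never produces the specified sequence.
    trials = 200_000
    hits = sum(
        "".join(random.choice("01") for _ in range(n)) == target
        for _ in range(trials)
    )
    print("hits in", trials, "trials:", hits)  # almost surely 0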

Intelligent actions are clearly responsible for the chance behavior of some systems. Take, for instance, high school seniors looking to go to college next fall. All the decisions by prospective students to apply to colleges as well as all the decisions by the college admission committees to accept or reject their applications are under full conscious intelligent control. Yet well-defined probability distributions characterize application numbers as well as acceptance and rejection numbers for given schools (Caltech and Harvard currently being the most competitive). 

Note that the use of probabilities to trace causal relationships is well established. Patrick Suppes, Nancy Cartwright, and Judea Pearl have all made compelling arguments for how to get causes from probabilities. The canard “correlation is not causation” is overworked and too often cloaks a self-imposed ignorance. As Judea Pearl convincingly argues in The Book of Why, it is entirely rational to assert that we know the cause of something using probabilistic/statistical arguments that sift both supporting evidence and contrary evidence. 

Probabilistic causality is understood in the first instance through probabilistic dependence. At its most basic, for A to be a cause of B, there must be a probabilistic dependence between them. Specifically, the occurrence of A should increase the probability of B occurring. Formally, P(B∣A) > P(B). This idea can be developed further using causal diagrams, counterfactual analyses, and Bayesian reasoning. But the point to note in connection with our new definition of intelligent design is that the capacities of systems can be modeled probabilistically, as can changes in the capacities of systems through the infusion of novel external information.
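
A toy worked example of this probabilistic dependence, with counts invented purely for illustration:

    # Toy example of probabilistic dependence: P(B|A) > P(B).
    # Hypothetical counts: A = exposure, B = outcome.
    n_total = 1000   # people observed
    n_A = 300        # exposed to A
    n_B = 200        # outcome B overall
    n_A_and_B = 120  # exposed AND showing outcome B

    p_B = n_B / n_total            # P(B) = 0.20
    p_B_given_A = n_A_and_B / n_A  # P(B|A) = 0.40

    # A raises the probability of B, the basic signature of a
    # probabilistic (candidate) causal relationship.
    print(p_B_given_A > p_B)  # True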

Information. Information figures prominently in this definition. It is there not as a metaphor but as a real entity capable of being measured through the tools of the modern mathematical theory of information. Where earlier definitions of intelligent design emphasized intelligence-signifying patterns that are best explained as the product of intelligence, the new definition emphasizes informational outputs that are best explained by prior externally applied informational inputs arising from intelligence, the outputs and inputs being associated with particular systems. The references to patterns and information in the earlier and later definitions of intelligent design are entirely parallel.

The mathematician Norbert Wiener, in his book Cybernetics, remarked that “information is information, not matter or energy.” It’s important to keep this point in mind when working with the new definition of intelligent design. The capacities of many systems are gauged in terms of energy and matter. Earlier, we considered the example of a car with an internal combustion engine that lacked gas in the tank. Such a car lacks the capacity to move itself. Yet, once the tank is filled with gas, it will have the capacity to move. The input here (gas in the tank) that explains the output here (the car being able to move) is, however, not informational but energetic.

The focus in our new definition of intelligent design is squarely on information. Information may require some energetic involvement. For instance, an old-fashioned transistor radio does not have the capacity by itself to play a recorded musical performance. Instead, it requires a signal encoding that performance to be transmitted to the radio. That signal will use energy, but it will be a directed energy that is also a carrier of information. Such inputted information will be best explained as intelligently inputted external information (i.e., design). 

But note, a contemporary digital radio might have a memory unit that contains mp3 files of recorded musical performances. Such a radio, unlike an old-fashioned transistor radio, might therefore have the capacity by itself to play music without external informational input, the music being stored on a memory chip in the radio. This example underscores the need to determine what the actual capacities of systems are whose design stands in question. 

Although in many instances the external application of information by intelligence involves energy, we need to avoid making observed energetic pathways a precondition for intelligently inputted external information. Informational relationships do not require energetic relationships. As Fred Dretske remarked in Knowledge and the Flow of Information (MIT, 1981, p. 26):
                      It may seem as though the transmission of information … is a process that depends on the causal inter-relatedness [think physical causality in terms of energy] of source and receiver. The way one gets a message from s [source] to r [receiver] is by initiating a sequence of events at s that culminates in a corresponding sequence at r. In abstract terms, the message is borne from s to r by a causal process which determines what happens at r in terms of what happens at s. The flow of information may, and in most familiar instances obviously does, depend on underlying causal processes [again, think physical causality and energy]. Nevertheless, the information relationships between s and r must be distinguished from the system of causal relationships [again, think energy] existing between these points.

The key takeaway here for our new definition of intelligent design is that informational relationships take precedence over energetic relationships. We can know, for instance, that a “magic” penny whose coin flips spell out the cure for cancer in Unicode (1 for heads, 0 for tails) is under intelligent external control. Indeed, systems composed of pennies flipped by humans have no capacity to produce meaningful communications, to say nothing of groundbreaking medical advances. The penny here is tapping into a source of information outside itself.

Nor does it matter if no chain of physical causation involving matter and energy can be found, or even exists, to account for the information outputted by the “magic” penny. The design in the “magic” penny’s output is clear. In particular, naturalistic assumptions that try to deny external informational input to the penny for lack of known physical processes capable of accounting for the information need to be rejected. Naturalism, whether in its methodological or metaphysical guise, is not a valid constraint on our new definition of intelligent design.

In conclusion, not everything is designed, but everything could ultimately be the result of intelligence. Both these claims are true. Previous definitions of intelligent design, however, have made it difficult to maintain both these claims without contradiction or confusion. The new definition of intelligent design given in this article allows both these claims to be maintained while at the same time fostering a robust understanding of intelligent design that is scientifically fruitful and philosophically defensible. 

Acknowledgments

The immediate impetus for this article was an unpublished typescript that Jay Richards circulated among his ID colleagues. It was titled “Why We Should Not Concede ‘Blind and Undirected Natural Processes’.” It suggested that the contrast class to design in a design inference should not be regarded as ateleological (i.e., as blind and undirected causes). Jay’s point was that allowing ateleological causes conceded too much ground to naturalists and too little to Aristotelians and Thomists, the latter then being left with a conception of intelligent design incompatible with their metaphysics, and thus with a compelling reason to reject intelligent design. 

I’ve long been aware of this concern. More than twenty years ago, I had even made a partial attempt to render intelligent design compatible with the Aristotelian-Thomist tradition. This I did in December 2001 when I gave an address at an AAAS meeting at Haverford College. My talk was titled “ID as a Theory of Technological Evolution,” and its opening line read “In Book II of the Physics Aristotle remarks, ‘If the ship-building art were in the wood, it would produce the same results by nature’.” A few years later I developed this Aristotelian approach to ID further in a book chapter titled “An Information-Theoretic Design Argument,” which appeared in the Beckwith et al. anthology, To Everyone an Answer: A Case for the Christian Worldview (IVP, 2004). The present article drew heavily from that chapter. 

Spurred by Jay’s typescript and aware that I had, though only partially, addressed his concerns in the past, I might have let the weeks and months slip away before taking up his concerns in earnest. But at the same time that I received Jay’s typescript, I was on my way to São Paulo for the big annual Brazilian intelligent design conference (June 28-30, 2024 — thank you Marcos Eberlin for the invitation!). I wanted to have something new to share with my Brazilian ID colleagues, so I decided to revisit the definition of intelligent design and see how I would need to adjust it to accommodate an Aristotelian-Thomistic metaphysics in which all causality is ultimately teleological. 

After some reflection, it became clear to me that such an accommodation could readily be accomplished while preserving everything of importance in intelligent design. Moreover, the new definition seemed to strengthen both the scientific and the philosophical underpinnings of intelligent design. I shared a “beta version” of that new definition at the Brazilian ID conference. The present article is the more mature fruit of my reflection. It draws on my two-decades-old work on relating ID and Aristotle. In section 6, it also rehearses recent work of mine relating intelligence and information. Regardless of whether this new definition is the last word on defining intelligent design, in my view it represents a significant advance in clarifying intelligent design and strengthening its hand in scientific and philosophical discussions.