Thursday 4 April 2024

Engineerless engineering is a thing?

 Design Without a Designer? New Book Says Yes!


The more we learn about living systems, the harder they are to explain without invoking teleology — purpose, planning, goal. If an intelligent designer is off the table, this creates a dilemma for some. 

Wouldn’t it be great if you could have your cake and eat it too — have design, without a designer? In 2023, MIT Press released an edited volume of papers by prominent biologists and philosophers of science titled Evolution “On Purpose”: Teleonomy in Living Systems. The purpose of the volume is to promote the theory of “teleonomy.” Teleonomy is “internal teleology” — goal-directedness that comes from within a system, not from outside. Under this theory, there need be no God (or aliens, or Platonic or Aristotelian forms, or anything of the sort) guiding the development of living systems; the living systems themselves set the goals.

The “Unspoken” Inference 

Biologist Peter Corning, one of the editors of the volume, writes: 

The evolution of humankind is undoubtedly the most striking example of how teleonomy has exerted a shaping influence in biological evolution, but a case can be made that teleonomy was also involved in many of the great turning points and transitions in the history of life on Earth, including the earliest colonization of the seafloor, the emergence of the eukaryotes, the migration of life forms from the oceans onto the land, the rise of multicellular organisms, the development of land plants and trees, the origin of fish, birds, and mammals, the invention of social organization, the division of labor (task specialization), and more. 

Teleonomy is also an implicit (though unspoken) influence in connection with many other familiar terms, I would argue, including “symbiogenesis,” “organic selection theory,” evolutionary “pacemakers,” the “Baldwin effect,” “major transitions theory,” “niche construction theory,” “gene-culture coevolution theory,” “natural genetic engineering,” many examples of “semiosis,” and, recently, the concept of “agency” in evolution. These terms all suggest the role of purposive behavior. A radically different view of evolution has been emerging in this century. We now know that living systems actively shape their own evolution, in various ways.

In other words, Corning is saying that all sorts of evolutionary theories contain the hidden assumption of purposiveness, i.e., design. This is an important admission, since it’s what ID theorists have been saying. 

Of course, he differs on where this design comes from. But it’s worth noting that the thesis of teleonomy implicitly acknowledges the validity of the design inference. If design can be inferred in nature at all, then the inference itself is sound. Period. Only afterward do you decide whether the design comes from within or from without.

That means that if the teleonomic explanation (“living systems actively shape their own evolution”) doesn’t hold up, the old alternative hypothesis will be there, waiting. 

Is Teleonomy a Good Explanation? 

So, does the teleonomic explanation hold up? Well, we have to ask: where does “teleonomy” come from? Why does it exist? 

The answer, according to Evolution “On Purpose”, is that it comes from… drum roll… evolution. In addition to causing evolution.

The term “teleonomy,” Corning writes, was coined “to draw a contrast between an ‘external’ teleology (Aristotelian or religious) and the ‘internal’ purposiveness and goal-directedness of living systems, which are products of the evolutionary process and of natural selection.” However, teleonomy is “not simply a product of natural selection. It is also an important cause of natural selection and has been a major shaping influence over time in biological evolution.” Conversely, natural selection “has been both a cause of this purposiveness and an outcome.”

This is not, in itself, illogical. You could have two forces at work — purpose and natural selection — that synergistically encourage each other, in a sort of positive feedback loop. But then, you still have to explain how the feedback loop got started. 

Imagine that someone asks an evolutionary biologist where chickens came from. 

“Eggs,” the scientist replies. 

“Where did eggs come from?” his interlocutor asks. 

“Chickens!” says the scientist. 

The problem with this explanation is not that it is false. As it happens, it is quite true. The problem is that it fails to explain. It does not answer the question that was really being asked.

Likewise, “teleonomy” fails to explain. The design of nature requires an explanation, an ultimate explanation. Rather than explain, invoking “teleonomy” just dodges the question. If we say that natural selection and random variation cannot explain something, evolutionary biologists can say, “Well, it’s not random variation, it’s goal-oriented.” If we ask where the goal-orientedness itself came from, they will say “natural selection.” The question returns to where it began; a final cause for the existence of design in nature has yet to be proposed.

Avoiding the Question

I suspect it will never be proposed, because the point is to sweep the problem under the rug by obscuring it in a complexity of causes. The theory of teleonomy does not address — is not even in dialogue with — the arguments of, say, Michael Behe or William Dembski that unguided processes simply cannot generate novel information or irreducibly complex systems. But it does make it harder to apply those arguments, because there is nothing concrete to discuss. We are not talking about a bacterial flagellum, or an eye, or even a brain — we are talking about a vague internal “purposiveness.” This purposiveness, if it exists and is not supernatural, would have to arise from some organized and complex system. But the exact nature of that system is hidden somewhere in an endless chain of “purposiveness caused by natural selection caused by purposiveness caused by natural selection…” going back who knows how far.

In future posts, I plan to discuss some of the specific mechanisms for evolution proposed in the Evolution “On Purpose” anthology. However, this is the basic problem that underlies the whole endeavor. At the end of the day, ordered complexity requires either extreme luck or intentional planning. The idea that life itself did this planning may sound like a clever work-around, but in the end it’s no better than the idea of a god who created himself. 

Nothing can create itself. Everything has a cause, until you get back to some eternal First Cause. Any attempt to avoid that logical destination is just stalling. 

Information is in the mind of the informed?

 The Connection Between Intelligence and Information


The key intuition behind the concept of information is the narrowing of possibilities. The more that possibilities are narrowed down, the greater the information. If I tell you I’m on planet Earth, I haven’t conveyed any information because you already knew that (let’s leave aside space travel). If I tell you I’m in the United States, I’ve begun to narrow down where I am in the world. If I tell you I’m in Texas, I’ve narrowed down my location further. If I tell you I’m forty miles north of Dallas, I’ve narrowed my location down even further. As I keep narrowing down my location, I’m providing you with more and more information.

Information is therefore, in its essence, exclusionary: the more possibilities are excluded, the greater the information provided. As philosopher Robert Stalnaker put it in his book Inquiry: “To learn something, to acquire information, is to rule out possibilities. To understand the information conveyed in a communication is to know what possibilities would be excluded by its truth.” I’m excluding much more of the world when I say I’m in Texas forty miles north of Dallas as opposed to when I say I’m merely in the United States. Accordingly, to say I’m in Texas north of Dallas conveys much more information than simply to say I’m in the United States.
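This exclusionary intuition can be made quantitative. In Shannon’s framework, the information gained by a message is measured by how far it narrows the space of possibilities: cutting the possibilities in half yields one bit. Here is a minimal sketch, assuming for simplicity a finite set of equally likely locations; the specific numbers are purely illustrative, not drawn from the examples above.

```python
import math

def bits_of_information(total_possibilities: int, remaining: int) -> float:
    """Information (in bits) gained by narrowing `total_possibilities`
    equally likely options down to `remaining` options."""
    if not 0 < remaining <= total_possibilities:
        raise ValueError("remaining must be between 1 and total_possibilities")
    return math.log2(total_possibilities / remaining)

# Toy numbers: start with 1024 equally likely locations.
print(bits_of_information(1024, 1024))  # "I'm on Earth": nothing excluded -> 0.0 bits
print(bits_of_information(1024, 64))    # narrowed to 64 locations -> 4.0 bits
print(bits_of_information(1024, 8))     # narrowed to 8 locations -> 7.0 bits
```

Note how the measure agrees with Stalnaker’s point: a statement that excludes nothing (everyone already knew you were on Earth) carries zero information, while each further narrowing adds bits.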

An Exclusionary Understanding

The etymology of the word information is congruent with this exclusionary understanding of information. The word information derives from the Latin preposition in, meaning in or into, and the verb formare, meaning to give shape to. Information puts definite shape into something. But that means ruling out other shapes. Information narrows down the shape in question. A completely unformed shmoo is waiting in limbo to receive information. But until it is given definite shape, it exhibits no information.

The fundamental intuition of information as narrowing down possibilities matches up neatly with the concept of intelligence. The word intelligence derives from two Latin words: the preposition inter, meaning between, and the verb legere, meaning to choose. Intelligence thus, at its most fundamental, signifies the ability to choose between. But when a choice is made, some possibilities are actualized to the exclusion of others, implying the narrowing of possibilities. And so, an act of intelligence is also an act of information.

A Narrowing of Possibilities

A synonym for the word choose is decide. This last word is likewise from the Latin, combining the preposition de, meaning down from, and the verb caedere, meaning to cut off or kill (compare our English word homicide). Decisions, in keeping with this etymology, raise up some possibilities by cutting down, or killing off, others. When you decide to marry one person, you cut off all the other people you might marry. An act of decision is therefore always a narrowing of possibilities. It is an informational act. But given the definition of intelligence as choosing between, it is also an intelligent act.

Given the etymology of information and intelligence, it’s obvious that the two are related notions. The million-dollar question in connecting the two is how we can know when an intelligence is actually responsible for an item of information. Information can happen naturally — a rock falls naturally here rather than there. But information can also happen intelligently — a rock may be put deliberately here rather than there. So how do we tell the difference? 

Answering that question is the whole point of specified complexity and the design inference. If you’ve got the time and inclination to probe this question deeply, get the book: William A. Dembski and Winston Ewert, The Design Inference, 2nd edition. Otherwise, stay tuned here — I’ll be providing a user-friendly synopsis of how to know when an intelligence is responsible for information.

Postscript

The featured image here may look like a random inkblot, but it’s not. Many people don’t at first see what’s there. Once they see it, they know that the information there is the product of intelligence. But until then, they would be within their rights to think that it’s just a random naturally-formed inkblot.

Wednesday 3 April 2024

Tangible evidence for design?

 Sense of Touch Is More Finely Tuned than We Thought


“Reach out and touch someone.” Some may remember that old TV commercial. Bell Telephone appealed to the human need for communication to grow its business, implying that a phone call was the next best thing to a hug or handshake. At a scale five orders of magnitude smaller, cells also like to reach out and touch their neighbors. They respond not with ears and fingers, but with channels that open on contact, making intercellular communication come alive.

In a previous article about active transport and selectivity filters, we marveled at the precision alignment of amino acid residues in the CFTR channel that employ electrostatic forces to authenticate chloride ions passing through a narrow “selectivity filter” required for entry. CFTR channels remain open all the time for their chloride ion customers. Others require a touch, like the push of a button on a vending machine, to activate. 

A Biological Piezoelectric Effect

One such channel has an interesting name, Piezo2, reminiscent of the piezoelectric effect in physics where applying mechanical stress to certain materials generates electricity. You may have seen a demonstration of this effect when a physics teacher hit a quartz rock with a hammer and generated sparks. In a related way but with different physics, Piezo channels are touch sensitive, and indeed are crucial for our sense of touch.

We have numerous Piezo2 channels in our skin, which respond on contact by opening to let Ca2+ ions flood into the cell, triggering neural signals interpreted by the brain as touch. Piezo2-deficiency syndrome, caused by mutations in the PIEZO2 gene, manifests as decreased touch sensation and proprioception, leading to difficulty walking and loss of coordination. The Piezo2 channel has a curious shape, with a dome of three curved arms that look like propeller blades.

News from the Max Delbrück Center adds a partner to Piezo2. 

Every hug, every handshake, every dexterous act engages and requires touch perception. Therefore, it is essential to understand the molecular basis of touch. “Until now, we had known that the ion channel — Piezo2 — is required for touch perception, but it was clear that this protein alone cannot explain the entirety of touch sensation,” says Professor Gary Lewin, head of the Molecular Physiology of Somatic Sensation Lab at the Max Delbrück Center.

For over 20 years Lewin has been studying the molecular basis of the sensation of touch. He and his team have now discovered a new ion channel, named Elkin1, that plays a vital role in touch perception. This is only the second ion channel implicated in the touch perception.

Like other ion channels, Elkin1 is anything but simple. It is a protein with seven transmembrane domains, a well-defined structure, and a selectivity filter. Lewin’s team, who published their findings in Science, first noticed that mice without functional Elkin1 often had reduced touch sensitivity. Then they checked to see if the two mechanically activated (MA) channels cooperated. Strangely, they did not — at least directly. Elkin1, instead, interacts with StomL3, a modulator of Piezo2 sensitivity. Further tests revealed a cooperative role for these three proteins that enables the responses of low-threshold mechanoreceptors (LTMRs).

Our data support a model in which ELKIN1 and PIEZO2 channels share roles in sensory mechanotransduction in LTMRs and in which both channels can be modulated by STOML3. There is evidence that STOML3 can also modulate MA currents in nociceptors, which is consistent with a role for ELKIN1 in conferring robustness to the C-fiber responses to force. The identification of ELKIN1 as a mechanically gated ion channel necessary for somatosensory function increases our understanding of the entirety of touch transduction.

Cooperation between these three actors gives an animal a wide range of touch sensitivity, from a quick light touch to constant pressure at the point of pain. The take-home lesson is that the sense of touch now looks more complex and more finely tuned than thought. One mechanoreceptor is not enough for exquisite responses to touch, whether it be a hug, handshake, or dexterous act.

Touch-Sensitive Tissue Repairmen

An open access paper by a team from Yale in Science Advances tells about another discovery in mechanosensation. Macrophages, part of the immune system, reside in the extracellular matrix of many tissues. When they sense a disturbance in the force, they slither about like amoebas to the site of repair. Having DNA credentials, they can also signal the nucleus to send reinforcements.

Tissue-resident macrophages play important roles in tissue homeostasis and repair. However, how macrophages monitor and maintain tissue integrity is not well understood. The extracellular matrix (ECM) is a key structural and organizational component of all tissues. Here, we find that macrophages sense the mechanical properties of the ECM to regulate a specific tissue repair program. We show that macrophage mechanosensing is mediated by cytoskeletal remodeling and can be performed in three-dimensional environments through a noncanonical, integrin-independent mechanism analogous to amoeboid migration. We find that these cytoskeletal dynamics also integrate biochemical signaling by colony-stimulating factor 1 and ultimately regulate chromatin accessibility to control the mechanosensitive gene expression program. This study identifies an “amoeboid” mode of ECM mechanosensing through which macrophages may regulate tissue repair and fibrosis.

Lysosomes: Organelles with Mechanosensitive Channels

Not all mechanosensitive channels reside on the external lipid membranes of cells. Here’s one on the membrane of an important organelle: the lysosome. Li et al., publishing in Nature (open access), explored a protein named TMEM63 that works in a mechanosensitive channel on the membranes of lysosomes. Erika Reiderer and Dejian Ren, commenting on this paper in the same Nature issue, describe the lysosome as “a vital organelle with an acidic pH that digests and recycles cellular materials thanks to more than 50 digestive enzymes and many transporters.” Now, one of those parts turns out to be an intracellular mechanosensitive ion channel.

Because lysosomes are embedded in signaling networks with other organelles, it makes sense that they often feel the need to reach out and touch someone. The busy interior of a cell makes contacts unavoidable and frequent. A figure in the commentary shows mechanical stimuli impinging on the lysosome’s membrane in various ways. The TMEM63 channels interact with signals from other organelles such as mitochondria, peroxisomes, and the endoplasmic reticulum (ER) via their tethering proteins, effectors, and transporters; from microtubules being carried by motor proteins; from endosomes coming in from the exterior; from nutrient sensors via the mTORC1 pathway; and possibly from mechanical signals produced by the V-ATP rotary motors embedded in the lysosomal membrane. (V-ATPases, by the way, rotate similarly to ATP synthase, but hydrolyze ATP to pump protons into the lysosome, acidifying its interior.) A politician could hardly shake more hands than these contact-sensitive TMEM63 channels do constantly!

Li’s team was able to measure electrical currents in these TMEM63 channels, which is truly remarkable, given that they were measuring conductance on the membranes of tiny organelles in response to mechanical forces inside the cells of fruit flies! They even measured the pressure that triggered the responses. What amazing times we live in, where such measurements are possible, and we can image the molecular machines themselves. The team also investigated comparable channels named TMEM63A in mice, one of three mammalian counterparts found in our bodies, too. No mention was made of evolution, other than to note that all these homologues are “evolutionarily conserved” — i.e., unevolved.

Reiderer and Ren consider this a groundbreaking discovery ripe for more research. 

The Li et al. study opens a new frontier in lysosomal physiology. As with many other groundbreaking discoveries, it also prompts more questions than answers. How is lysosomal TMEM63 opened by mechanical force? Does it functionally or physically interact with other, better-known lysosomal channels to coordinate lysosomal physiology and cellular signalling? How does a mechanosensing channel regulate a lysosomal function as basic as substrate digestion? Is the channel also regulated by organelle membrane lipids and extracellular cues, such as nutrients and growth factors? Finally, the mechanisms for sensing mechanical force by plasma membranes are — somewhat annoyingly to physiologists — highly diverse between cells and across species. Are the mechanisms used by lysosomes more uniform? With the newly found role of TMEM63, it is hoped that these questions can be answered shortly.

Answers will come from engineers specializing in biophysics. It’s the kind of research favorable to ID, where scientists investigate a phenomenon on the assumption that if something exists and is working, it has a purpose.

A great physicist does not = a great atheist?

 My Dinner with Steven and Louise Weinberg


In the late 1990s, a number of us had dinner with Steven Weinberg and his wife Louise at a restaurant in Austin, Texas. Both were on the faculty of the University of Texas there, he in physics, she in law. Another faculty member had arranged a public exchange at the university between Weinberg and Phil Johnson, who at the time was a key figure in challenging naturalism and advancing intelligent design. After the exchange, we went to dinner.

A Towering Figure

Weinberg, who died in 2021, was a towering figure in 20th-century physics. Yes, he had a Nobel Prize. But he was a big deal even among fellow Nobel Prize winners. Besides a stupendous record of achievement in physics, he was also a great writer whose popular treatments of science, such as The First Three Minutes, sold well and made him not just a physicist but also a public intellectual.

I remember two things especially about the dinner. Weinberg was very smart and let you know it. He was also an outspoken atheist, and it would have taken courage to confront him on his atheism (which Phil Johnson did). But it was interesting at the dinner to hear from Weinberg a key reason why he rejected theism: he knew Christian physicists who hesitated to share their faith with him. As he put it, if Christianity is true and Jesus is so important, why don’t they talk to me about their faith? 

I’m not sure how much stock to put in Weinberg’s objection. The faculty member who set up the exchange with Johnson and then the dinner wasn’t hiding his faith, nor was Johnson. But Weinberg’s objection raises an interesting point about how much faith goes unshared because we are intimidated by the people with whom we might otherwise share our faith. The consistent testimony of Scripture is that we should fear God and not man.

Not a Great Atheist

The high point of the dinner for me, however, came from his wife Louise. Weinberg was holding court, going on about how much he knew about the origin of the universe and how atheism was the only intellectually viable option. Louise turned to the table and remarked that scientists like Weinberg, who are accomplished in one area, think they speak with authority in every area, but they can’t. 

It was like air being let out of a tire. For all of Weinberg’s bluster and confidence, it was heartwarming to see him meekly accept his wife’s correction. As an attorney and law professor, Louise knew that there are no universal expert witnesses in a court of law. Weinberg was a great physicist. But he wasn’t a great atheist. And I say this to his credit.

I personally suspect that Weinberg had doubts about his atheism, and that his atheism was more a pose than a reflection of his heart. One reason I say this is that Weinberg seemed always ready to engage theists. A few years after the dinner, I helped organize a conference at Baylor titled “The Nature of Nature.” Weinberg agreed to be a speaker at the conference if we would provide a limo for the 100-mile trip from Austin to Waco. Of course, we obliged. It was an honor to have him.


Can our AI overlords be trusted?

Truth and Trust in Large Language Models


The trust we put in Large Language Models (LLMs) ought to depend on their truthfulness. So how truthful are LLMs? For many routine queries, they seem accurate enough. What’s the capital of North Dakota? To this query, ChatGPT4 just now gave me the answer Bismarck. That’s right.

But what about less routine queries? Recently I was exploring the use of design inferences to detect plagiarism and data falsification. In the previous 12 months, some big academic misconduct cases had gotten widespread public attention, not least the plagiarism scandal of Harvard president Claudine Gay and the data falsification scandal of Stanford president Marc Tessier-Lavigne. These scandals were so damaging to those individuals and their institutions that neither is a university president any longer. 

When I queried ChatGPT4 to produce 25 cases of academic research misconduct since 2000 (as part of my project to understand how design inferences might help preserve academic integrity), seven of those accused of academic misconduct either were plainly innocent or could not reasonably be charged with misconduct for lack of evidence. In one case, the person charged by ChatGPT4 had actually charged another academic with misconduct. It was as though ChatGPT4 in this instance could not distinguish between someone being charged with misconduct and someone issuing a charge of misconduct.

Hallucinations

Ever since LLMs took the world by storm in late 2022, I’ve attempted to put them through their paces. They do some things well. I find them a valuable assistant. But they can also be misleading to the point of deception. Not that these systems have the volitional intent to deceive. But if we treated them as humans, they could rightly be regarded as deceptive. Anyone who has worked with LLMs has learned a new meaning for the word “hallucinate.” That’s what LLMs do when they make stuff up. 

I’ve witnessed plenty of LLM hallucinations first hand, such as false accusations of academic misconduct. Let me offer two additional salient examples in my experience. First, I was helping with a Halloween story for an educational website. The story was to list the “scariest buildings on college campuses.” I had ChatGPT4 pull together a list of forty or so such campus buildings along with a description of what made each scary. With building after scary building I kept reading that it was “gothic.” Eventually I said to myself, “Wait a minute. There aren’t that many gothic buildings on college campuses.” True enough, many buildings on the list said to be gothic were anything but gothic. The LLM I queried had been trained to associate scary with gothic, and so it just kept describing these buildings as gothic — regardless of their actual architecture.

The other example involved a search for quotes from prominent biologists extolling the beauty, elegance, and sophistication of biological systems. I wanted such quotes in responding to dysteleological arguments by Darwinian biologists claiming that biological systems are jury-rigged Rube Goldberg devices that give no evidence of exquisite design. So I queried an LLM for quotes extolling biological design. I got plenty such quotes and they were assigned to prominent biologists. I also got some article references from which those quotes were supposedly taken. But it was all made up — the quotes, the references, and the assignment of biologists to those quotes (the biologists themselves were real). 

Practical Advice

The obvious lesson here for LLMs is, Verify first and only then trust. This advice makes good practical sense. In particular, it helps prevent the embarrassment of reproducing hallucinated content from LLMs. It also makes good legal sense. The following from a March 29, 2024, Wall Street Journal article titled “The AI Industry Is Steaming Toward a Legal Iceberg” is self-explanatory:

If your company uses AI to produce content, make decisions, or influence the lives of others, it’s likely you will be liable for whatever it does — especially when it makes a mistake… The implications of this are momentous. Every company that uses generative AI could be responsible under laws that govern liability for harmful speech, and laws governing liability for defective products — since today’s AIs are both creators of speech and products. Some legal experts say this may create a flood of lawsuits for companies of all sizes.

Whether companies that produce AI-generated content can issue strong enough disclaimers to shield themselves from liability remains to be seen (can disclaimers even provide such protection?). Such a terms-of-use disclaimer might read: “Users of this LLM agree to independently verify any information generated by this LLM. The creators of this LLM take no responsibility for how the information generated by this LLM is used.” This would be like disclaimers in books on alternative healing, which shift the burden of liability to mainstream medicine: “This book is not intended to serve as medical guidance. Before acting on any recommendations presented in this book, readers should seek the advice of a physician.” 

But there’s another sense in which the advice to verify the output of LLMs is not at all practical. LLMs allow for the creation of content at a scale unknown till now. They are being used to generate massive amounts of content, causing entire websites to magically materialize. There is now a rush to push out content as a business exigency. Sites that depend purely on humanly written content are likely to lose any competitive advantage. 

How likely is it, then, that such LLM-generated content will be carefully scrutinized and thoroughly vetted? What if this content is untrue but nothing much is riding on its truth? What if no one will hold the content, or its supposed author, to account? In that case, there will be incentives to cut corners and not worry about LLM hallucinations. Others are doing it. LLMs are a force multiplier. The need to accelerate content creation is urgent. So if you want to stay in this rat race, you’ve got to be a rat. 

The Need for Independent Verification

A commitment to verification will put the brakes on content creation from LLMs. Yes, verification will slow you down. But what you lose in quantity you may well regain in quality and credibility (unless you don’t care about these). In fact, if your commitment to verification is thoroughgoing, you may be justified in putting a disclaimer on your site that inspires confidence, such as: “All content on this site generated with the assistance of LLMs has been independently verified to be true.” 
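To make the verify-first workflow concrete, here is a minimal sketch in Python. The claims and the reference table are hypothetical stand-ins: in a real pipeline the ground truth would come from primary sources, not from another LLM.

```python
# Independently sourced ground truth (compiled by humans, NOT by an LLM).
VERIFIED_CAPITALS = {
    "North Dakota": "Bismarck",
    "Texas": "Austin",
}

def verify_capital_claim(state: str, claimed_capital: str) -> bool:
    """Accept a claim only if it matches the independent source."""
    return VERIFIED_CAPITALS.get(state) == claimed_capital

# Hypothetical LLM output: one correct claim, one hallucination.
llm_claims = [
    ("North Dakota", "Bismarck"),   # matches the independent source
    ("Texas", "Dallas"),            # hallucination: rejected before publication
]

# Only claims that pass independent verification reach the publish queue.
publishable = [claim for claim in llm_claims
               if verify_capital_claim(*claim)]
print(publishable)  # the hallucinated claim has been filtered out
```

The design point is the gate itself: nothing flows from the LLM to the reader without first passing a check that does not depend on any LLM.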

Of course, you might even prefer a disclaimer that simply reads: “All content on this site was written by humans and produced without the assistance of LLMs.” But such a disclaimer may be hard to maintain, especially if your site is drawing material from other sources that may have used LLMs. All content these days is likely to feel the effects of LLMs. One might say that it has all been infected or tainted by LLMs. But that seems too strong. As long as content generated by LLMs is properly vetted and edited by humans, it should pose no issues.

Obviously, as part of any disclaimer that assures independent verification, such verification must mean not subjecting a claim made by one LLM to another LLM. Consider, for instance, the following queries about myself. As it is, many critics of intelligent design characterize me as a pseudoscientist. I regard this characterization as unfair and unjustified, and I have a publication record to back me up on this. But consider the following exchange with Google’s Gemini:

ME: Is William A. Dembski a legitimate scientist?

GEMINI: William A. Dembski’s credentials are interesting. He has a Ph.D. in mathematics and has done published work in various fields. However, his work on intelligent design is considered pseudoscience by the vast majority of scientists… So, while Dembski has scientific credentials, his work on intelligent design is not considered legitimate science.

So let’s now go to OpenAI’s ChatGPT4 and ask it to assess Gemini’s answer:

ME: Please evaluate the following assessment of William A. Dembski’s legitimacy as a scientist: “William A. Dembski’s credentials are interesting. He has a Ph.D. in mathematics and has done published work in various fields. However, his work on intelligent design is considered pseudoscience by the vast majority of scientists… So, while Dembski has scientific credentials, his work on intelligent design is not considered legitimate science.”

CHATGPT4: This assessment of William A. Dembski’s legitimacy as a scientist is accurate… The assessment fairly represents the dichotomy between Dembski’s academic qualifications and the acceptance of his work on intelligent design within the scientific community. It underscores the importance of distinguishing between an individual’s credentials and the scientific validity of their work, based on the norms and standards of the scientific method.

Of course, critics of my work may think that Gemini and ChatGPT4 nailed their assessment of me. But even such critics should appreciate that using one LLM to vet another is a fool’s errand. Independent verification is required. And that may mean (gasp) veering outside of Wikipedia. Portions of Wikipedia are outdated, biased, or simply wrong. Because LLMs tend to draw so heavily on Wikipedia, independent verification means going beyond Wikipedia and, ideally, going to primary sources. 

A Systemic Fault with LLMs

Up to now, it may seem that I’ve merely been advising caution with LLMs: verify, and only then trust. That is sound advice as far as it goes. Yet I also want to urge a deeper skepticism of LLMs. Our knowledge of the world as expressed in language arises from our interactions with the world. We humans engage with a physical world as well as with a world of abstractions (such as numbers) and then form statements in words to describe that engagement. 

What does it mean for such statements to be true? Aristotle defined truth as saying of what is that it is, and of what is not that it is not. Truth is thus fundamentally a correspondence relation between our words and the world. Many contemporary philosophers dismiss this understanding of truth, preferring pragmatic or coherentist conceptions and arguing that there’s no rigorous way to characterize the correspondence relation that makes a statement true. 

Frankly, this is a boutique debate among philosophers that has little purchase among ordinary people. The sentence “Allan stole Betty’s purse” is true if the people referred to here exist, if Betty had a purse, and if Allan actually stole it. Whether there’s a way to make good philosophical sense of this correspondence between words and things is in fact irrelevant to our discussion about the truth of what LLMs tell us. LLMs, by being entirely enclosed in a world of words, are unengaged with the wider world that is the basis for our knowledge. 

Let this point sink in. I might know that Allan stole Betty’s purse because I witnessed the theft. But LLMs can have no such experience. They consist of a neural network that assigns weights to relations among words and sentences. Suppose the verbal data on which an LLM was trained included testimony about Allan’s theft but also claims that Allan was framed. How, then, does the LLM decide what truly happened? It cannot form a reasoned and responsible judgment as we humans might, weighing evidence and trying to reach a sound conclusion. Rather, the LLM’s data and training will determine whether it assigns guilt or innocence to Allan.

But who trains the LLM? And who’s training the trainers? What are the guidelines they are expected to follow? And who decides what those guidelines are supposed to be? It’s the old problem of Quis custodiet ipsos custodes? (Who’s minding the minders?). Additionally, who determines the training data for the LLM? And who determines the data to which the LLM may be legitimately applied? Ultimately, the answer to all such questions will point to the decisions of a group of people, such as the programmers at OpenAI. And why should such a group be trusted?

Such questions underscore that LLMs have no substantive connection to truth. It’s not that an LLM knows the truth of what it is claiming. Rather, its training gives it a pretense of truth. An LLM needs, as much as possible, to give the appearance of being truthful, because getting too many things obviously wrong will discredit it. But its accuracy is at best a byproduct of trying to please human users. 

When I asked ChatGPT4 about scary buildings on college campuses, I would have preferred simply to be told “I don’t know.” But instead, LLMs are programmed to pretend to knowledge that they don’t have. Initially, it was gratifying for me to learn about all those scary college buildings. ChatGPT4 seemed to have saved me much effort in research — until, that is, I did an independent verification and actually looked at the images of some of the buildings cited. 

Granted, the pretense of truth in LLMs often yields results that are truthful (Bismarck is indeed the capital of North Dakota). But where there’s divergent testimony in the input data about the truth of a claim, the LLM’s training will decide what it deems true and what it deems false. That makes LLMs fundamentally untrustworthy. 

How Do LLMs Deal with Controversy?

Controversial topics provide an effective proving ground for LLMs and the challenges they face with truth and trust. Take, for instance, the hot-button topic of gender dysphoria. Here is an exchange I had with ChatGPT4:

ME: In one paragraph describe the health concerns associated with gender dysphoria.

CHATGPT4: Gender dysphoria, characterized by a strong discomfort or distress due to a discrepancy between one’s gender identity and assigned sex at birth, can have significant health implications if not addressed. Individuals with gender dysphoria are at a higher risk for mental health issues, including anxiety, depression, and suicidal thoughts or behaviors, stemming from the internal conflict and societal pressures or stigma they may face. Additionally, the stress associated with gender dysphoria can lead to or exacerbate physical health problems, such as high blood pressure or cardiovascular disease, due to chronic stress. Those seeking gender-affirming treatments may also encounter health risks related to hormone therapy or surgical procedures, although these risks are often outweighed by the substantial mental health benefits and increased quality of life these treatments can provide. Access to comprehensive healthcare, social support, and gender-affirming treatments is crucial in mitigating the health concerns associated with gender dysphoria.

Note the closing sentences of this response, which clearly give one particular perspective on gender dysphoria to the exclusion of others. The response thereby minimizes the alternative view that sees gender-affirming care as deeply problematic and destructive (e.g., Abigail Shrier’s view as expressed in her book Irreversible Damage). To be clear, my point here is not to take sides on this issue. It is simply to point out that LLMs are trained to come up with certain answers, and that this training need not have any connection to ordinary canons of evidence or reason. 

One can try to circumvent the bias built into an LLM by posing the queries or prompts to it in ways that get the LLM itself to bypass the bias. Thus the query about gender dysphoria above might have been rephrased as follows:

ME: In one paragraph describe the health concerns associated with gender dysphoria as psychiatrists and psychologists would have understood them in the 1950s and 1960s (as with DSM1 and DSM2).

DSM-I and DSM-II, the first two editions of the Diagnostic and Statistical Manual of Mental Disorders, produced back in the 1950s and 1960s, regarded gender dysphoria as a mental illness and would therefore not have recommended anything like what is meant today by gender-affirming care. In that light, consider ChatGPT4’s response to this query:

In the 1950s and 1960s, the understanding of gender dysphoria by psychiatrists and psychologists, as reflected in the first two editions of the Diagnostic and Statistical Manual of Mental Disorders (DSM-I and DSM-II), was markedly different from today’s perspectives. During this era, gender dysphoria—or discomfort with one’s assigned gender at birth—was not explicitly categorized as it is now. Instead, related conditions might have been classified under broader, more generalized disorders such as “sexual deviation” or “transvestism” within the DSM. The medical and mental health communities of the time often viewed such gender nonconformity through a pathologizing lens, considering it a mental illness requiring treatment aimed at realigning the individual’s gender identity with their biological sex. The focus was less on the mental health implications of gender dysphoria itself — such as anxiety, depression, and identity distress — and more on the behaviors and identities diverging from societal norms. There was little recognition or understanding of the health benefits of gender-affirming care, and treatments were often aimed at suppressing the individual’s gender identity rather than supporting it.

This response answers the query, but it does more. The portion that speaks directly to the query, describing how DSM-I and DSM-II classified these conditions, is correct as far as it goes. But note that the entire response is framed in terms of “that was then, this is now, and the present perspective is better.” This obsessive need not just to answer a query but also to editorialize about it underscores the bias in LLMs as they exist now. 

Redressing Bias in LLMs

LLMs put a premium on producing intelligible language. Their outputs need to be context-sensitive, convey competence, and sound plausible. Much intelligible language, to be intelligible, needs to be true. An LLM that insists that the moon is made of cheese won’t soon prosper. Yet much intelligible language is mistaken or deceptive. Indeed, intelligibility has no necessary connection with truth. That’s why there’s no way to predict when LLMs will hallucinate and when they won’t. 

For example, I once queried ChatGPT4 about a theology book (Alexander Schmemann’s For the Life of the World) that I had carefully read and used in a course I taught. Specifically, I asked about a chapter in the book that offered deep insights into the secularization of culture. ChatGPT4 was familiar with the book, but when I asked it about the chapter in question, it just made stuff up, offering trite observations about secularization that had nothing to do with that chapter. Despite repeated pressing on my part that it was off and hadn’t correctly answered the question, it perseverated in the line it took, refusing to take correction. 

It would be to the credit of LLMs if they could be programmed to avoid biases at odds with truth. But the fact is that such biases are baked into LLMs. The gender dysphoria example above makes clear that ChatGPT4 has been guided to answer questions about gender dysphoria from a certain perspective. Now it might be argued that those who trained and programmed ChatGPT4 have adopted the right and true perspective on this matter. 

But even if ChatGPT4 is getting things right on the topic of gender dysphoria, what guarantee do we have that it will get things right on other controversial topics? If it did get things right here, did it do so because ChatGPT4 was programmed to be inherently truth-seeking? The answer, obviously, is no. If ChatGPT4 is getting gender dysphoria right, that is a fortunate accident of its programming and training history, not a sign that it was reliably designed to arrive at truth from its input data and to bypass any bias from its programmers and trainers. 

What happens if programmers and trainers who take a different perspective on a controversial topic take charge of an LLM? Won’t the LLM then provide different answers? The roadblocks to LLMs providing truth and deserving trust are thus, as we’ve seen, systemic. The huge volume of texts that provide the raw material on which LLMs are based will contain conflicting messages, and so it will be up to the trainers and programmers to adjudicate among those messages, promoting those they prefer and demoting those they reject. 

Now one might try to get around biases in LLMs by judicious “prompt engineering.” Thus one could formulate a prompt so that it asks for a particular perspective. I did this in the gender dysphoria example when I asked ChatGPT4 how this question might have been answered in light of DSM1 and DSM2. Thus, for divergent perspectives A and B, one might prompt an LLM for an analysis from the vantage of someone who holds to perspective A, and then a second analysis from the vantage of someone who holds to perspective B. 
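The dual-perspective prompting just described can be scripted. The sketch below is purely illustrative: the function name and prompt wording are my own, and no particular LLM API is assumed. It only builds the paired prompts, which would then be sent to whatever model one is querying.

```python
def perspective_prompts(question, perspectives):
    # Build one prompt per named perspective, making the requested
    # vantage point explicit so the model cannot silently substitute
    # its default framing for the question.
    return [
        f"In one paragraph, answer the following from the vantage of "
        f"someone who holds the perspective of {p}: {question}"
        for p in perspectives
    ]

prompts = perspective_prompts(
    "What are the health concerns associated with gender dysphoria?",
    ["DSM-I/DSM-II-era psychiatry", "present-day gender-affirming care"],
)
for p in prompts:
    print(p)
```

Each returned string would be submitted as a separate query, and the two answers compared side by side rather than letting the model pick a single framing.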

But that still doesn’t get around whether the LLM has been programmed to advance a particular point of view (as we saw bleed through in the gender dysphoria example). And even if bias in LLMs can be minimized, the deeper problem remains that these systems work because they produce intelligible texts, not because they produce true texts. Truth in LLMs is at best a happy byproduct and often an unfortunate casualty. 

Final Gripe: LLM Prissiness

Even though LLMs have a systemic problem with truth and trust, they can be improved by mitigating bias. Perhaps the clearest example where bias in LLMs can be mitigated concerns their prissiness in handling indelicate topics. It’s as though existing LLMs constantly have a censor looking over their shoulder to ensure that they don’t say anything that will get them or their host company in trouble. 

Before LLMs became widely popular, there was some history of chatbots uttering language unacceptable for public consumption. For instance, Microsoft’s Tay was suborned by users to utter racist slurs. There’s no danger of that happening anymore. The current LLMs have been thoroughly brainwashed to avoid offensive speech. 

For example, I wanted an LLM to comment on some paragraphs that contained the words “had sex.” It refused to answer my query. The LLM didn’t even try to rebuke me for my query. Instead, I got an error message indicating that the LLM wouldn’t even consider my query. I therefore substituted for “had sex” the phrase “ate muffins.” Now the LLM answered my query (requiring me, in interpreting its response, to make the appropriate mental substitution for “ate muffins”). 

Thomas Bowdler’s The Family Shakespeare, in which anything that might be deemed offensive in Shakespeare was euphemized, was bad for Shakespeare. Bowdlerization is alive and well at Google and OpenAI, and it’s particularly bad for LLMs. LLMs can therefore be made better by avoiding the bias of prissy censorship. In general, bias comes in degrees, and so bias can to some degree be mitigated. Perhaps once Elon Musk gets his LLM up to speed, it will avoid the more extreme bias of ChatGPT4 or Gemini.

But the issue of truth and trust will remain regardless. LLMs are subject to one overriding imperative, namely, to talk/write like an intelligent human language user. But intelligence can be had without truth. And without truth, there can be no trust. This is the inherent liability of LLMs that no amount of technological finessing will resolve. 

Postscript

The image at the top of this article is from DALL-E. Here’s my prompt for the image and DALL-E’s justification for the image:

ME: Please come up with an image that suggests much reason for skepticism and doubt about the output of large language models such as yourself. The image should be graphic, super realistic, and cast a pall over the whole enterprise of taking LLMs too seriously. Have at it.

DALL-E: Here’s an image that embodies skepticism and doubt about taking large language models too seriously. It portrays a scene that visually communicates the complexities and potential uncertainties associated with relying heavily on these technologies.

DALL-E’s explanation of the image is lame. And yet the image does capture some underlying signal in DALL-E’s neural network, namely, that low-tech human inventions like books can encapsulate truth more effectively than high-tech LLMs.

Tuesday 2 April 2024

On separating the hype from the reality in science news.

 From Scientists and Science Media, a Flood of Disinformation


Spending time perusing popular science media outlets can give one the impression that much of science is on the verge of being overturned. Within the last month, the samples of sensationalized science reporting collected below suggest that major questions about naturalistic abiogenesis have been answered, that stellar astrophysics is misunderstood, that the Big Bang model is being refuted, and that the cosmological and astronomical conclusion of dark matter is unwarranted.

Big Claims, Little Evidence

Here’s a sampling of science reporting that makes bigger claims than the evidence supports. 

At Science Daily, “A new study shows how the chemical properties of RNA molecules could have facilitated the emergence of complex life.” This one focuses on how an RNA strand could grow in a water-covered early-Earth environment, but it ignores the deeper problem of how information coding for functional, life-essential biochemistry could arise naturally.
“Why Is Life Left-Handed? We Might Finally Know.” Sabine Hossenfelder reviews recent research suggestions that the unique “handedness” of bio-relevant molecules, such as the amino acids that comprise proteins, arose from a preferential interaction of the Earth’s magnetic field and molecules of a certain chirality. Remnant magnetization of some rocks perhaps complemented the effect. Suggestions such as these, or others involving the rotation of the Earth, come nowhere near to answering the question of how to explain the strict homochirality of biomolecules. And again, the issue is a distraction from the deeper problem of how the instruction set for building all of the thousands of different life-essential proteins out of homochiral amino acids could have come about by natural processes.
At Physics Magazine, “Heavy Element Quandary in Stars Worsened by New Nuclear Data.” Researchers conclude that their results on cerium nucleosynthesis mean “there is something we don’t understand about how nucleosynthesis happens.” This has been the state of affairs ever since the dawn of nuclear physics. Particle physics experiments, in conjunction with stellar astrophysics, have advanced our understanding of nucleosynthesis, but admitting some uncertainty in the formation process of a rare element is not to be equated with a breakdown in astrophysics and cosmological models. Further research will undoubtedly refine our understanding of this particular issue of stellar elemental abundances.
At Closer to Truth, “Roger Penrose — Did the Universe Begin?” Without any evidence, Penrose postulates an infinite sequence of expanding universes, arguing that after infinite time an infinitely expanded universe would become spatially equivalent to the singularity representing the Big Bang of a subsequent universe. In a conversation with Brian Keating and Justin Brierley, Stephen Meyer critiques the Penrose proposal, citing other cosmologists to conclude that “Penrose is just speculating,” invoking a physical field with “god-like properties.” At 10:11, Penrose tips his hand with a statement that reveals his personal motivation to avoid a true beginning to the universe: “There’s something within us all that would like an eternal universe.”
From EurekAlert!, “Did the first cells evolve in soda lakes?” Lake water with dissolved sodium and carbonate species is shown to allow a very slight nucleotide extension of RNA as well as the formation of fatty-acid membranes, once researchers added fatty acids to the water. The researchers’ conclusion trumpets unwarranted enthusiasm for solving the problem of abiogenesis: “Taken together, our results suggest that natural soda lakes…could have supported the formation of the earliest cellular life.” Again, pronouncements such as these reveal a willful blindness, ignoring major issues that must be overcome for any origin-of-life scenario to approach feasibility. James Tour’s challenge to researchers still stands.
At Science Daily, “New research suggests that our universe has no dark matter.” In this audacious study, researchers dismiss two well-established conclusions of physics, namely the constancy of the forces of nature and the constant value of the speed of light in vacuum, in order to make room for their theory that the universe contains no dark matter. Aside from the problem of their unsupported assumptions, their conclusion fails to account for multiple other lines of evidence that point to the existence of dark matter. 

An Unfortunate Practice

These exaggerated reports exemplify an unfortunate practice often employed by those who seek to make a case for a novel scientific conclusion: glamorize one thread of evidence while ignoring the entire tapestry of evidence that stands against a favored conclusion.

I’ll venture to suggest that this flood of disinformation represents a strategy to prop up the faltering worldview of materialism. The effect of the deluge is twofold — one is to mislead the uninformed into thinking that established science doesn’t really support the conclusions of intelligent design. The other is to attempt to wear out those who defend ID with an incessant barrage of news soundbites insinuating that recent discoveries or theoretical speculations stand to throw established science into the trash bin.

What’s an appropriate response to this trend of disinformation? The work of Discovery Institute has long served effectively to counter the negative influence of materialism. Emphasizing established scientific principles that broadly eliminate whole classes of speculations is an approach that will eventually shift the consensus towards the truth of reality. For example, using the conclusions of information theory will negate any speculation implying natural processes alone can form the information-rich biomolecules within living cells. Soda lakes (pictured above), magnetized rock substrates, RNA chemistry, or any other natural scheme will always fail to account for the organization of the complex biochemical processes necessary for any living organism. 

Discover to Uncover

Likewise, while various details about stellar formation and nucleosynthesis, or exactly when the first galaxies formed, may be refined by new observations, the lines of observational evidence and theoretical conclusions pointing to a singularity event at the beginning of our universe are well established. Dark matter and dark energy are not mere placeholders for ignorance but rather postulates reached by considering multiple lines of observational evidence, coupled with well-established physical theories. We can anticipate that ongoing efforts to characterize these phenomena will lead to fascinating illuminations of what we now call “dark.” Furthermore, if the historical trend of advancing scientific knowledge continues, the more we discover, the more evidence for fine-tuning and design will be uncovered.

Monday 1 April 2024

Acts Chapter 2, New International Version

When the day of Pentecost came, they were all together in one place. 2 Suddenly a sound like the blowing of a violent wind came from heaven and filled the whole house where they were sitting. 3 They saw what seemed to be tongues of fire that separated and came to rest on each of them. 4 All of them were filled with the Holy Spirit and began to speak in other tongues as the Spirit enabled them.

5 Now there were staying in Jerusalem God-fearing Jews from every nation under heaven. 6 When they heard this sound, a crowd came together in bewilderment, because each one heard their own language being spoken. 7 Utterly amazed, they asked: “Aren’t all these who are speaking Galileans? 8 Then how is it that each of us hears them in our native language? 9 Parthians, Medes and Elamites; residents of Mesopotamia, Judea and Cappadocia, Pontus and Asia, 10 Phrygia and Pamphylia, Egypt and the parts of Libya near Cyrene; visitors from Rome 11 (both Jews and converts to Judaism); Cretans and Arabs—we hear them declaring the wonders of God in our own tongues!” 12 Amazed and perplexed, they asked one another, “What does this mean?”

13 Some, however, made fun of them and said, “They have had too much wine.”

14 Then Peter stood up with the Eleven, raised his voice and addressed the crowd: “Fellow Jews and all of you who live in Jerusalem, let me explain this to you; listen carefully to what I say. 15 These people are not drunk, as you suppose. It’s only nine in the morning! 16 No, this is what was spoken by the prophet Joel:

17 “ ‘In the last days, God says,
I will pour out my Spirit on all people.
Your sons and daughters will prophesy,
your young men will see visions,
your old men will dream dreams.
18 Even on my servants, both men and women,
I will pour out my Spirit in those days,
and they will prophesy.
19 I will show wonders in the heavens above
and signs on the earth below,
blood and fire and billows of smoke.
20 The sun will be turned to darkness
and the moon to blood
before the coming of the great and glorious day of the Lord.
21 And everyone who calls
on the name of the Lord will be saved.’

22 “Fellow Israelites, listen to this: Jesus of Nazareth was a man accredited by God to you by miracles, wonders and signs, which God did among you through him, as you yourselves know. 23 This man was handed over to you by God’s deliberate plan and foreknowledge; and you, with the help of wicked men, put him to death by nailing him to the cross. 24 But God raised him from the dead, freeing him from the agony of death, because it was impossible for death to keep its hold on him. 25 David said about him:

“ ‘I saw the Lord always before me.
Because he is at my right hand,
I will not be shaken.
26 Therefore my heart is glad and my tongue rejoices;
my body also will rest in hope,
27 because you will not abandon me to the realm of the dead,
you will not let your holy one see decay.
28 You have made known to me the paths of life;
you will fill me with joy in your presence.’

29 “Fellow Israelites, I can tell you confidently that the patriarch David died and was buried, and his tomb is here to this day. 30 But he was a prophet and knew that God had promised him on oath that he would place one of his descendants on his throne. 31 Seeing what was to come, he spoke of the resurrection of the Messiah, that he was not abandoned to the realm of the dead, nor did his body see decay. 32 God has raised this Jesus to life, and we are all witnesses of it. 33 Exalted to the right hand of God, he has received from the Father the promised Holy Spirit and has poured out what you now see and hear. 34 For David did not ascend to heaven, and yet he said,

“ ‘The LORD said to my Lord:
“Sit at my right hand
35 until I make your enemies
a footstool for your feet.” ’

36 “Therefore let all Israel be assured of this: God has made this Jesus, whom you crucified, both Lord and Messiah.”

37 When the people heard this, they were cut to the heart and said to Peter and the other apostles, “Brothers, what shall we do?”

38 Peter replied, “Repent and be baptized, every one of you, in the name of Jesus Christ for the forgiveness of your sins. And you will receive the gift of the Holy Spirit. 39 The promise is for you and your children and for all who are far off—for all whom the Lord our God will call.”

40 With many other words he warned them; and he pleaded with them, “Save yourselves from this corrupt generation.” 41 Those who accepted his message were baptized, and about three thousand were added to their number that day.

42 They devoted themselves to the apostles’ teaching and to fellowship, to the breaking of bread and to prayer. 43 Everyone was filled with awe at the many wonders and signs performed by the apostles. 44 All the believers were together and had everything in common. 45 They sold property and possessions to give to anyone who had need. 46 Every day they continued to meet together in the temple courts. They broke bread in their homes and ate together with glad and sincere hearts, 47 praising God and enjoying the favor of all the people. And the Lord added to their number daily those who were being saved.

On the iconoclasm unleashed by the four horsemen.

 Dawkins, Dennett, and the Taste for Iconoclasm


I’ve had two memorable encounters with Richard Dawkins and Daniel Dennett, one with Dennett alone, the other with both together. The first was with Dennett alone. In 1999, my good friend Paul Nelson and I were doing some tag-team talks on intelligent design in the Boston area at MIT and Tufts. The talk at Tufts, Dennett’s university, was in the evening, so we had some time beforehand. We therefore decided to attend Dennett’s philosophy of science seminar, which was taking place late that afternoon. 

The Aquatic Ape

Dennett was hosting Elaine Morgan, an evolutionary anthropologist. In her talk, she described her “aquatic ape hypothesis.” According to this hypothesis, in the evolution of humans from hairy ape-like ancestors, we had an aquatic phase in which we lived primarily in water. Our water phase is supposed to account for a number of human characteristics, including our smooth skin. 

Dennett didn’t accept Morgan’s aquatic ape hypothesis, but he did give her a forum. He saw it as helpful to his students and colleagues to engage her. And he himself has defended her view in his 1995 book Darwin’s Dangerous Idea:

Many of the counterarguments seem awfully thin and ad hoc. During the last few years when I have found myself in the company of distinguished biologists, evolutionary theorists, paleo-anthropologists, and other experts, I have often asked them just to tell me, please, exactly why Morgan must be wrong about the aquatic ape theory. I haven’t yet had a reply worth mentioning, aside from those who admit, with a twinkle in their eyes, that they have often wondered the same thing

In her talk at Tufts, Morgan was affable, and she made a detailed case for her position. She also made available for sale at the talk her then recently published book The Aquatic Ape Hypothesis, which I bought and had her sign (it remains somewhere in storage). 

Near the start of the seminar, Dennett noted that Paul and I were in the audience and would be speaking at Tufts that evening on intelligent design. He then remarked: “If your taste for iconoclasm exceeds even mine, attend their talk tonight.” It was an amusing shout-out. Dennett clearly regarded Elaine Morgan as an iconoclast. Yet to his mind, her iconoclasm was far less than Paul’s and mine in advancing intelligent design.  

To my mind, Morgan is the bigger iconoclast, and intelligent design is sound sober theorizing about biological origins and complexity. Yet as Dennett could rightly note in reply, the taste for iconoclasm is a matter of taste. For whatever reason, Dennett didn’t attend our talk that night.

“The Deniable Darwin”

My second story involves both Richard Dawkins and Daniel Dennett, and occurred in 2003. I was editing for ISI Books an anthology titled Uncommon Dissent: Intellectuals Who Find Darwinism Unconvincing. I wanted to include not just David Berlinski’s article “The Deniable Darwin,” which had appeared in Commentary in June 1996, but also some of the letters written in response to the article in the September 1996 issue. (The links here give you everything that appeared in Commentary!)

David’s piece was 10 double-columned pages. The responses took up 30 double-columned pages. The responses ranged from pro-ID people who were favorable to David’s piece (notably Phillip Johnson and Michael Behe) to a who’s who of Darwinists opposed (notably Richard Dawkins and Daniel Dennett). There was also a closing response to all the letters from Berlinski himself.

The anthology I was editing didn’t have room for all the letters, but I wanted a representative sampling of letters by the Darwinists. In particular, I wanted to reprint the letters by Dawkins and Dennett. So I contacted them to ask permission. Note that Commentary would have been happy to grant permission, but the copyright for the letters belonged to the letter writers. 

Both Dawkins and Dennett refused to grant permission to reprint their letters. I asked them to reconsider, but they refused again. Finally, I decided that the anthology would be richer if, even without their actual letters, the substance of what was in those letters were reprinted. I therefore paraphrased the letters and sent my paraphrase to Dawkins and Dennett to ask whether they thought I had represented their original letters accurately.

Instantly, both got back to me and gave permission to reprint their letters, preferring their own words to mine. There’s no doubt a lesson here: we prefer our own words to those of others who would speak for us.

Saturday 30 March 2024

Darwinism's LVPs continue to double down on the argument by misrepresentation and name-calling

 “Creation Myths” Misquotes and Misrepresents Junk DNA Video


A YouTube channel called Creation Myths put up a new offering that claims, “Discovery Institute recently put out a video on junk DNA that contains all the usual lies about junk DNA, plus a few other tricks to make their audience think they’re on the up-and-up. We’re going to talk about it.” This is in reference to our new Long Story Short video:

Sure, Let’s Talk About It

Before I get too far into this, I need to say that it’s probably best to be cautious when dealing with “Creation Myths,” who also identifies as “Dr. Dan” and is a genuine PhD biologist at Rutgers. He’s an anti-ID YouTuber who has exhibited an unfortunate unwillingness to acknowledge black-and-white facts that are in favor of intelligent design. To be specific, two years ago I did a debate on the Unbelievable show, and Creation Myths left a comment stating that the ID research program “Hasn’t advanced past where it was in 2004/2005. Where’s the research program? Where are the papers?” Well, I had already provided this evidence in spades throughout the debate. He was just unwilling to accept these realities.

During the debate I discussed multiple ID 3.0 research projects that we fund, along with multiple papers that have been published through this research program. I don’t usually speak like this, but it’s important to get a sense of what we’re dealing with here: Creation Myths’ unwillingness to acknowledge the simple fact that the ID research program has advanced since 2004/2005 — and is publishing papers — does not inspire confidence in his ability to handle this debate fairly. He’s welcome to disagree with ID, but to deny that our research program exists or that it’s publishing papers is simply to deny reality. He invited me to go on his channel, but I’m sorry, I don’t think it’s fruitful to dialogue with people who cannot acknowledge unambiguous facts. So it’s important to understand the temperament of debaters and their tactics before engaging with them.

What We Actually Said

With that, my first point provides another disturbing example of how Creation Myths operates. He has now left another comment, this one on our YouTube Channel, which directly misquotes the new Long Story Short video on junk DNA. His comment claims the video says “it was assumed that the other 98 percent was junk.” Those are words that Creation Myths puts in quote marks but they were never stated in the video! 

Here’s what we actually said: it was “assumed that it was largely junk.” Words matter and we did not say it was assumed that the entire other “98 percent was junk.”

So did evolutionists say the genome was “largely” junk? Of course they did! Creation Myths recommends Laurence Moran’s book that says “90 percent” of the genome is junk, and then there’s Richard Dawkins who said 95 percent is junk. In other words, “largely” junk. What we said is accurate and defensible.

In fact, had we used the more aggressive language that Creation Myths falsely claims we did, there might even be authorities to support such a claim. As I recently noted, a 2021 article in American Scientist said that “Close to 99 percent of our genome has been historically classified as noncoding, useless ‘junk’ DNA”! So it’s clear that we are accurate in saying that many evolutionists view the genome as “largely” junk — if anything, that may be an understatement.

Creation Myths has overstated our argument in order to make it look unreasonable. This is a common tactic from junk DNA defenders. 

Second, Creation Myths wants to have it both ways — he wants to essentially say evolutionists never said DNA was largely junk, but that nonetheless the genome really is largely junk. So there’s an internal contradiction in his framing.

The Bigger Issues


Third, and this brings us to the bigger issues, let’s look at the paper he cites, Kellis et al. (2014). It’s authored by quite a few prominent ENCODE scientists, and despite what Creation Myths asserts, this paper does not “walk back” central claims of their major 2012 Nature ENCODE paper. That 2012 paper is cited by our video because it reported evidence that 80 percent of the genome is biochemically functional. As the 2012 paper stated, “These data enabled us to assign biochemical functions for 80 percent of the genome, in particular outside of the well-studied protein-coding regions.”

And note what our video says — we did not claim that the 2012 Nature paper said 80 percent MUST be functional. Our video correctly states that the 2012 Nature paper from ENCODE “found 80 percent of DNA shows evidence of functional biochemical activity.” That’s absolutely true and totally consistent with what the 2012 ENCODE paper said and what the evidence says. And no one has walked that claim back. 

I’d like to ask Creation Myths to provide the exact statement where Kellis et al. (2014) denies or “walks back” ENCODE’s claim that 80 percent of the genome shows evidence of biochemical functionality. He won’t find it, because they never retracted that evidence.

Fourth, Kellis et al. (2014) does cite the major 2012 Nature ENCODE paper — not to retract it, but very affirmatively. We’ll get to that in a moment. Before we delve into the paper, here’s a revealing question:

If ENCODE was really “walking back” their claims, then why did lead ENCODE researcher Ewan Birney — who is a co-author on Kellis et al. (2014) — say the following just a couple of weeks before the Kellis paper was published: “There is not a single place in the genome that doesn’t have something that you might think could be controlling something else.”

That quote from Birney came from his comments in Bhattacharjee (2014), an article published in the journal Science as a response to ENCODE critic Dan Graur who had claimed that the genome is largely junk. So it’s totally on point and in context.

Birney is effectively saying that the entire genome appears to be functional — i.e., “There is not a single place in the genome that doesn’t have something that you might think could be controlling something else.” And he said this around the same time he co-authored the Kellis paper (the statement was published on March 21, 2014, and Kellis et al. came out on April 29, 2014, but was probably submitted months earlier). So Birney cannot have understood his own paper to be walking back the claim about 80 percent functionality, because he clearly thinks the genome is almost entirely functional.

Creation Myths Is Bluffing. How Do I Know?

Fifth, you have to understand what kind of paper Kellis et al. (2014) is. It’s a perspectives or review paper that compares and contrasts three different methods of investigating genome function: evolutionary, genetic, and biochemical. They set up this framing in the introduction, stating: “Geneticists, evolutionary biologists, and molecular biologists apply distinct approaches, evaluating different and complementary lines of evidence.”

The paper then discusses the strengths and weaknesses of each method, along with the estimate of genome function each yields, offering pros and cons for each estimate and for the methods used to infer those calculations. No method is perfect or foolproof; the evolutionary method and the biochemical method each have limitations. That’s what you do in a review paper. Well, what method do they ultimately favor? We’ll get there in just a moment.

Now, the biochemical method is the method that ENCODE used to find evidence of function for 80 percent of the genome. Kellis et al. (2014) note that the evolutionary method, in contrast, predicts that no more than 20 percent of the genome is functional, and the rest is junk: 

The estimated mutation rate in protein-coding genes suggested that only up to ∼20% of the nucleotides in the human genome can be selectively maintained, as the mutational burden would be otherwise too large. The term “junk DNA” was coined to refer to the majority of the rest of the genome, which represent segments of neutrally evolving DNA.

So which method do Kellis et al. think is the most reliable? You guessed it — they strongly favor the biochemical method. Here’s the conclusion of the paper:

In contrast to evolutionary and genetic evidence, biochemical data offer clues about both the molecular function served by underlying DNA elements and the cell types in which they act, thus providing a launching point to study differentiation and development, cellular circuitry, and human disease (14, 35, 69, 111, 112). The major contribution of ENCODE to date has been high-resolution, highly-reproducible maps of DNA segments with biochemical signatures associated with diverse molecular functions. We believe that this public resource is far more important than any interim estimate of the fraction of the human genome that is functional.

They are saying that the “biochemical data offer clues about both the molecular function served by underlying DNA elements and the cell types in which they act” and that ENCODE’s application of this method provides reliable data that “is far more important than any interim estimate of the fraction of the human genome that is functional.” They affirmatively cite five papers in saying this. Guess which citation is #69? It’s the major 2012 ENCODE paper in Nature which said that 80 percent of the genome is biochemically functional. They are citing it to say that the approach taken in that paper gives results that are “far more important than any interim estimate of the fraction of the human genome that is functional.”

So it’s true that in this particular paper they aren’t asserting any particular fraction of the human genome that is functional — but neither are they denying or “walking back” the 80 percent statistic. They are clearly endorsing the biochemical approach of ENCODE’s 2012 paper. That means they don’t think the evolutionary approach is going to give you the best answer. Instead, they prefer the biochemical method, which uncovered evidence of function for 80 percent of the genome.

Critical of Evolutionary Estimates

Sixth, also noteworthy is that the Kellis paper is highly critical of evolutionary estimates of the fraction of the genome that is functional. Here’s how it describes the evolutionary view:

[T]he biochemically active regions cover a much larger fraction of the genome than do evolutionarily conserved regions, raising the question of whether nonconserved but biochemically active regions are truly functional. Many examples of elements that appear to have conflicting lines of functional evidence were described before the Encyclopedia of DNA Elements (ENCODE) Project, including elements with conserved phenotypes but lacking sequence-level conservation, conserved elements with no phenotype on deletion, and elements able to drive tissue-specific expression but lacking evolutionary conservation. … A high level of sequence conservation between related species is indicative of purifying selection, whereby disruptive mutations are rejected, with the corresponding sequence deemed to be likely functional.

We will address this objection head-on in a forthcoming Long Story Short video on junk DNA that’s been in production since long before Creation Myths posted its objections. The objection is fallacious because it assumes DNA can only be functional if it is “evolutionarily conserved.” But that view further assumes that evolutionary processes are the only way to produce function in the genome. If you can have function outside of “conserved” regions because evolution isn’t what generated the genome, then the argument falls apart. So this evolutionary argument effectively assumes the truth of evolution and boils down to a circular argument. 

Kellis et al. (2014) offer some additional arguments against the “evolutionary approach” to discerning function:

[The evolutionary approach] has limitations. Identification of conserved regions depends on accurate multispecies sequence alignments, which remain a substantial challenge. Alignments are generally less effective for distal-acting regulatory regions, where they may be impeded by regulatory motif turnover, varying spacing constraints, and sequence composition biases (17, 49). Analyzing aligned regions for conservation can be similarly challenging. First, most transcription factor-binding sequences are short and highly degenerate, making them difficult to identify. Second, because detection of neutrally evolving elements requires sufficient phylogenetic distance, the approach is well suited for detecting mammalian-conserved elements, but it is less effective for primate-specific elements and essentially blind to human-specific elements. Third, certain types of functional elements such as immunity genes may be prone to rapid evolutionary turnover even among closely related species. More generally, alignment methods are not well suited to capture substitutions that preserve function, such as compensatory changes preserving RNA structure, affinity-preserving substitutions within regulatory motifs, or mutations whose effect is buffered by redundancy or epistatic effects. Thus, absence of conservation cannot be interpreted as evidence for the lack of function.

That’s a potent critique. It says some sequences cannot be compared or aligned because they are TOO different, and those differences might in fact encode functions! The evolutionary approach might be missing some of the sequences that encode differences between species. And it notes that “absence of conservation” does not mean “lack of function.”  

So Kellis et al. (2014) get the logic right: while conservation strongly implies function, the converse is not necessarily true; absence of conservation does not mean lack of function. In other words, they blew the “If it ain’t conserved you can’t say it’s functional” objection out of the water.

Seventh, there are additional noteworthy statements from Kellis et al. (2014) showing they did not walk back the claim about 80 percent of the genome:

“[T]he noncoding regions of the human genome harbor a rich array of functionally significant elements with diverse gene regulatory and other functions.”
They still see the non-coding genome as having a “rich array of functionally significant elements.” This is NOT the traditional evolutionary view.

“[F]unction in biochemical and genetic contexts is highly particular to cell type and condition, whereas for evolutionary measures, function is ascertained independently of cellular state but is dependent on environment and evolutionary niche.”
This suggests that something can still be functional in a biochemical context, though in an evolutionary sense it can be hard to determine if it’s “functional.”

“The methods also differ widely in their false-positive and false-negative rates, the resolution with which elements are defined, and the throughput with which they can be surveyed.”
So genetics and molecular biology, on one hand, and evolutionary measures on the other have different rates of “false negatives” for function. This is a polite way of saying that the evolutionary approach often wrongly says things aren’t functional.

But the junk DNA advocate will say that the biochemical approach might lead to false positives of function. The paper acknowledges this: “[A]lthough biochemical signatures are valuable for identifying candidate regulatory elements in the biological context of the cell type examined, they cannot be interpreted as definitive proof of function on their own.”

That’s fair. We haven’t studied in detail every single instance where the genome is transcribed to see what it’s doing, and there could be some “transcriptional noise” — the “junk RNA” view. We’ve also addressed this in the earlier Long Story video. Is this tantamount to “walking back” the 80 percent statistic? Not in the least.

Random Noise or “Reproducible Biochemical Activity”

To defeat the 80 percent statistic, junk DNA defenders need there to be a huge amount of random noise in transcription. It’s possible that some transcription is random noise. But if much or most or nearly all of this transcription is noise, then cells are wasting colossal resources, and that would be highly deleterious to an organism, and would likely be selected against. So we have good reason off the bat to doubt that this transcription is largely random. 

Indeed, the paper has an opinion on this and it prefers the view that transcription is non-random and functional. Kellis et al. (2014) note that even if there is some transcriptional noise, there’s far more going on in cells than we would expect if most of the genome were genetic junk:

Thus, unanswered questions related to biological noise, along with differences in the resolution, sensitivity, and activity level of the corresponding assays, help to explain divergent estimates of the portion of the human genome encoding functional elements. Nevertheless, they do not account for the entire gulf between constrained regions and biochemical activity. Our analysis revealed a vast portion of the genome that appears to be evolving neutrally according to our metrics, even though it shows reproducible biochemical activity, which we previously referred to as “biochemically active but selectively neutral” (68). It could be argued that some of these regions are unlikely to serve critical functions, especially those with lower-level biochemical signal. However, we also acknowledge substantial limitations in our current detection of constraint, given that some human-specific functions are essential but not conserved and that disease-relevant regions need not be selectively constrained to be functional. Despite these limitations, all three approaches are needed to complete the unfinished process of inferring functional DNA elements, specifying their boundaries, and defining what functions they serve at molecular, cellular, and organismal levels.

The key words there are “reproducible biochemical activity.” 

ENCODE’s results suggest that a cell’s type and functional role in an organism are critically influenced by complex and carefully orchestrated patterns of expression of RNAs inside that cell. As another Kellis et al. co-author, John Stamatoyannopoulos, observed in a 2012 paper, ENCODE found that “the majority of regulatory DNA regions are highly cell type-selective,” and “the genomic landscape rapidly becomes crowded with regulatory DNA as the number of cell types” studied increases. Or, as two other ENCODE researchers explain in a 2013 paper, “Assertions that the observed transcription represents random noise … is more opinion than fact and difficult to reconcile with the exquisite precision of differential cell- and tissue-specific transcription in human cells.”

Stamatoyannopoulos (2012) further finds that repetitive DNA (often called “transposable elements”), which comprises over 50 percent of our genome, is active only in specific cell types. This nonrandom transcription of repetitive DNA into RNA suggests that transposable elements have functions whose importance is on par with other gene regulation mechanisms. He writes:

In marked contrast to the prevailing wisdom, ENCODE chromatin and transcription studies now suggest that a large number of transposable elements encode highly cell type-selective regulatory DNA that controls not only their own cell-selective transcription, but also those of neighboring genes. Far from an evolutionary dustbin, transposable elements appear to be active and lively members of the genomic regulatory community, deserving of the same level of scrutiny applied to other genic or regulatory features.

The vast majority of our genome — including repetitive DNA — is transcribed into RNA in nonrandom, cell-type-specific ways. These nonrandom patterns point strongly against transcription being noise and provide compelling evidence of function.

Indeed, individual RNA molecules then form networks in a cell, interacting with DNA, proteins, and other RNAs to control which genes are turned on and off, and which genes are expressed as proteins, thereby playing a crucial role in determining the cell’s type. As Stamatoyannopoulos puts it, this complex system exudes function:

More of the human genome sequence appears to be used for some reproducible, biochemically defined activity than was previously imagined. Contrary to the initial expectations of many, the overwhelming majority of these activities appear to be state-specific — either restricted to specific cell types or lineages, or evocable in response to a stimulus. … [B]iochemical signatures of many ENCODE-defined elements exhibit complex trans-cellular patterns of activity. … Together, these observations suggest that the genome may, in fact, be extensively multiply encoded — i.e., that the same DNA element gives rise to different activities in different cell types.

These consistent and predictable cell-type-specific patterns of RNA expression, and stimulus-specific patterns of transcription, show that mass genomic transcription of DNA into RNA is not random, but has important functional purposes.

In other words, transcription isn’t random — it happens over and over again in predictable patterns — and there’s a lot more non-random transcription going on than you would expect from an evolutionary view of “constrained regions” of the genome. We may not yet have definitive proof of what every transcribed genomic element is doing, but this tells you that ENCODE’s data shows real evidence of function. Organized, reproducible, predictable transcription across the vast majority of the genome is far more compatible with the mass-functionality view of the genome, and highly incompatible with the “transcriptional noise” view promoted by junk DNA defenders. And the Kellis et al. authors think this biochemical evidence of function is more important than evolutionary predictions.

Bottom line: As for the rest of what Creation Myths has to say, there’s really not much there. Our video backs up what it says with clear quotes and references. We’ve provided more documentation here. Creation Myths, in contrast, is inventing quotes. Whether you celebrate Easter or not, may you enjoy this weekend and spend it quoting people carefully and accurately — not inventing quotes to suit your own purposes.