Search This Blog

Wednesday 3 April 2024

A great physicist ≠ a great atheist?

 My Dinner with Steven and Louise Weinberg


In the late 1990s, a number of us had dinner with Steven Weinberg and his wife Louise at a restaurant in Austin, Texas. Weinberg and his wife were both on the faculty of the University of Texas there, he in physics, she in law. Another faculty member had arranged a public exchange at the university between Weinberg and Phil Johnson, Johnson at the time being a key figure in challenging naturalism and advancing intelligent design. After the exchange, we went to dinner.

A Towering Figure

Weinberg, who died in 2021, was a towering figure in 20th-century physics. Yes, he had a Nobel Prize. But he was a big deal even among fellow Nobel Prize winners. Besides a stupendous record of achievement in physics, he was also a great writer whose popular treatments of science, such as The First Three Minutes, sold well and made him not just a physicist but also a public intellectual.

I remember two things especially about the dinner. Weinberg was very smart and let you know it. He was also an outspoken atheist, and it would have taken courage to confront him on his atheism (which Phil Johnson did). But it was interesting at the dinner to hear from Weinberg a key reason why he rejected theism, namely, that he knew Christian physicists who hesitated to share their faith with him. As he put it, if Christianity is true and Jesus is so important, why don’t they talk to me about their faith?

I’m not sure how much stock to put in Weinberg’s objection. The faculty member who set up the exchange with Johnson and then the dinner wasn’t hiding his faith, nor was Johnson. But Weinberg’s objection raises an interesting point about how much faith goes unshared because we are intimidated by the people with whom we might otherwise share our faith. The consistent testimony of Scripture is that we should fear God and not man.

Not a Great Atheist

The high point of the dinner for me, however, came from his wife Louise. Weinberg was holding court, going on about how much he knew about the origin of the universe and how atheism was the only intellectually viable option. Louise turned to the table and remarked that scientists like Weinberg, who are accomplished in one area, think they speak with authority in every area, but they can’t. 

It was like air being let out of a tire. For all of Weinberg’s bluster and confidence, it was heartwarming to see him meekly accept his wife’s correction. As an attorney and law professor, Louise knew that there are no universal expert witnesses in a court of law. Weinberg was a great physicist. But he wasn’t a great atheist. And I say this to his credit.

I personally suspect that Weinberg had doubts about his atheism, and that his atheism was more a pose than a reflection of his heart. One reason I say this is that Weinberg seemed always ready to engage theists. A few years after the dinner, I helped organize a conference at Baylor titled “The Nature of Nature.” Weinberg agreed to be a speaker at the conference if we would provide a limo for the 100-mile trip from Austin to Waco. Of course, we obliged. It was an honor to have him.


Can our AI overlords be trusted?

Truth and Trust in Large Language Models


The trust we put in Large Language Models (LLMs) ought to depend on their truthfulness. So how truthful are LLMs? For many routine queries, they seem accurate enough. What’s the capital of North Dakota? To this query, ChatGPT4 just now gave me the answer Bismarck. That’s right.
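
For readers who want to reproduce this kind of spot check programmatically, here is a minimal sketch in Python. It assumes the openai package and an API key in the environment; the model name is illustrative, and any comparable chat API would do.

```python
# Minimal sketch: spot-checking an LLM on a routine factual query.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the model name below is illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; substitute whatever model you have access to
    messages=[{"role": "user", "content": "What is the capital of North Dakota?"}],
)

print(response.choices[0].message.content)  # expected answer: Bismarck
```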

But what about less routine queries? Recently I was exploring the use of design inferences to detect plagiarism and data falsification. In the preceding 12 months, some big academic misconduct cases had gotten widespread public attention, not least the plagiarism scandal of Harvard president Claudine Gay and the data falsification scandal of Stanford president Marc Tessier-Lavigne. These scandals were so damaging to these individuals and their institutions that neither is a university president any longer.

When I queried ChatGPT4 to produce 25 cases of academic research misconduct since 2000 (as part of my project to understand how design inferences might help preserve academic integrity), seven of the people it accused of academic misconduct either were plainly innocent or could not reasonably be charged with misconduct for lack of evidence. In one case, the person charged by ChatGPT4 had actually charged another academic with misconduct. It was as though ChatGPT4 in this instance could not distinguish between someone being charged with misconduct and someone issuing a charge of misconduct.

Hallucinations

Ever since LLMs took the world by storm in late 2022, I’ve attempted to put them through their paces. They do some things well. I find them a valuable assistant. But they can also be misleading to the point of deception. Not that these systems have the volitional intent to deceive. But if we treated them as humans, they could rightly be regarded as deceptive. Anyone who has worked with LLMs has learned a new meaning for the word “hallucinate.” That’s what LLMs do when they make stuff up. 

I’ve witnessed plenty of LLM hallucinations firsthand, such as false accusations of academic misconduct. Let me offer two additional salient examples from my experience. First, I was helping with a Halloween story for an educational website. The story was to list the “scariest buildings on college campuses.” I had ChatGPT4 pull together a list of forty or so such campus buildings along with a description of what made each scary. With building after scary building, I kept reading that it was “gothic.” Eventually I said to myself, “Wait a minute. There aren’t that many gothic buildings on college campuses.” Sure enough, many buildings on the list said to be gothic were anything but. The LLM I queried had been trained to associate scary with gothic, and so it just kept describing these buildings as gothic — regardless of their actual architecture.

The other example involved a search for quotes from prominent biologists extolling the beauty, elegance, and sophistication of biological systems. I wanted such quotes in responding to dysteleological arguments by Darwinian biologists claiming that biological systems are jury-rigged Rube Goldberg devices that give no evidence of exquisite design. So I queried an LLM for quotes extolling biological design. I got plenty of such quotes, and they were attributed to prominent biologists. I also got some article references from which those quotes were supposedly taken. But it was all made up — the quotes, the references, and the assignment of biologists to those quotes (though the biologists themselves were real).
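
If you do have a cited source in hand, even a crude textual check can catch a fabricated quote before it embarrasses you. Here is a minimal sketch in Python; the sample texts are made up for illustration, and real verification means obtaining the primary source itself.

```python
# Minimal sketch of independent quote verification: check whether an
# LLM-supplied quotation actually appears (approximately) in a primary
# source you have obtained yourself. The sample texts are illustrative.
import difflib

def quote_appears(quote: str, source_text: str, threshold: float = 0.9) -> bool:
    """Return True if something close to `quote` occurs in `source_text`."""
    words = source_text.split()
    window = len(quote.split())
    for i in range(max(1, len(words) - window + 1)):
        candidate = " ".join(words[i:i + window])
        ratio = difflib.SequenceMatcher(None, quote.lower(), candidate.lower()).ratio()
        if ratio >= threshold:
            return True
    return False

source = "The flagellum is an elegantly engineered molecular machine."
print(quote_appears("an elegantly engineered molecular machine", source))  # True
print(quote_appears("a clumsy Rube Goldberg contraption", source))         # False
```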

Practical Advice

The obvious lesson here for LLMs is, Verify first and only then trust. This advice makes good practical sense. In particular, it helps prevent the embarrassment of reproducing hallucinated content from LLMs. It also makes good legal sense. The following from a March 29, 2024, Wall Street Journal article titled “The AI Industry Is Steaming Toward a Legal Iceberg” is self-explanatory:

If your company uses AI to produce content, make decisions, or influence the lives of others, it’s likely you will be liable for whatever it does — especially when it makes a mistake… The implications of this are momentous. Every company that uses generative AI could be responsible under laws that govern liability for harmful speech, and laws governing liability for defective products — since today’s AIs are both creators of speech and products. Some legal experts say this may create a flood of lawsuits for companies of all sizes.

Whether companies that produce AI-generated content can issue strong enough disclaimers to shield themselves from liability remains to be seen (can disclaimers even provide such protection?). Such a terms-of-use disclaimer might read: “Users of this LLM agree to independently verify any information generated by this LLM. The creators of this LLM take no responsibility for how the information generated by this LLM is used.” This would be like disclaimers in books on alternative healing, which shift the burden of liability to mainstream medicine: “This book is not intended to serve as medical guidance. Before acting on any recommendations presented in this book, readers should seek the advice of a physician.” 

But there’s another sense in which the advice to verify the output of LLMs is not at all practical. LLMs allow for the creation of content at a scale unknown till now. They are being used to generate massive amounts of content, causing entire websites to magically materialize. There is now a rush to push out content as a business exigency. Sites that depend purely on humanly written content are likely to lose any competitive advantage. 

How likely is it, then, that such LLM-generated content will be carefully scrutinized and thoroughly vetted? What if this content is untrue but nothing much is riding on its truth? What if no one will hold the content, or its supposed author, to account? In that case, there will be incentives to cut corners and not worry about LLM hallucinations. Others are doing it. LLMs are a force multiplier. The need to accelerate content creation is urgent. So if you want to stay in this rat race, you’ve got to be a rat. 

The Need for Independent Verification

A commitment to verification will put the brakes on content creation from LLMs. Yes, verification will slow you down. But what you lose in quantity you may well regain in quality and credibility (unless you don’t care about these). In fact, if your commitment to verification is thoroughgoing, you may be justified in putting a disclaimer on your site that inspires confidence, such as: “All content on this site generated with the assistance of LLMs has been independently verified to be true.”

Of course, you might even prefer a disclaimer that simply reads: “All content on this site was written by humans and produced without the assistance of LLMs.” But such a disclaimer may be hard to maintain, especially if your site is drawing material from other sources that may have used LLMs. All content these days is likely to feel the effects of LLMs. One might say that it has all been infected or tainted by LLMs. But that seems too strong. As long as content generated by LLMs is properly vetted and edited by humans, it should pose no issues.

Obviously, as part of any disclaimer that assures independent verification, such verification must mean not subjecting a claim made by one LLM to another LLM. Consider, for instance, the following queries about me. Many critics of intelligent design characterize me as a pseudoscientist. I regard this characterization as unfair and unjustified, and I have a publication record to back me up on this. With that in mind, here is an exchange with Google’s Gemini:

ME: Is William A. Dembski a legitimate scientist?

GEMINI: William A. Dembski’s credentials are interesting. He has a Ph.D. in mathematics and has done published work in various fields. However, his work on intelligent design is considered pseudoscience by the vast majority of scientists… So, while Dembski has scientific credentials, his work on intelligent design is not considered legitimate science.

So let’s now go to OpenAI’s ChatGPT4 and ask it to assess Gemini’s answer:

ME: Please evaluate the following assessment of William A. Dembski’s legitimacy as a scientist: “William A. Dembski’s credentials are interesting. He has a Ph.D. in mathematics and has done published work in various fields. However, his work on intelligent design is considered pseudoscience by the vast majority of scientists… So, while Dembski has scientific credentials, his work on intelligent design is not considered legitimate science.”

CHATGPT4: This assessment of William A. Dembski’s legitimacy as a scientist is accurate… The assessment fairly represents the dichotomy between Dembski’s academic qualifications and the acceptance of his work on intelligent design within the scientific community. It underscores the importance of distinguishing between an individual’s credentials and the scientific validity of their work, based on the norms and standards of the scientific method.

Of course, critics of my work may think that Gemini and ChatGPT4 nailed their assessment of me. But even such critics should appreciate that using one LLM to vet another is a fool’s errand. Independent verification is required. And that may mean (gasp) veering outside of Wikipedia. Portions of Wikipedia are outdated, biased, or simply wrong. Because LLMs tend to draw so heavily on Wikipedia, independent verification means going beyond Wikipedia and, ideally, going to primary sources. 


A Systemic Fault with LLMs

Up to now, it may seem that I’ve merely been advising caution with LLMs: Verify and only then trust. That certainly seems like sound advice for using LLMs. Yet I also want to urge a deeper skepticism of LLMs. Our knowledge of the world as expressed in language arises from our interactions with the world. We humans engage with a physical world as well as with a world of abstractions (such as numbers) and then form statements in words to describe that engagement. 

What does it mean for such statements to be true? Aristotle defined truth as to say of what is that it is and of what is not that it is not. Truth is thus fundamentally a correspondence relation between our words and the world. Many contemporary philosophers dismiss this understanding of truth, preferring pragmatic or coherentist conceptions of truth, arguing that there’s no rigorous way to characterize the correspondence relation that makes a statement true. 

Frankly, this is a boutique debate among philosophers that has little purchase among ordinary people. The sentence “Allan stole Betty’s purse” is true if the people referred to here exist, if Betty had a purse, and if Allan actually stole it. Whether there’s a way to make good philosophical sense of this correspondence between words and things is in fact irrelevant to our discussion about the truth of what LLMs tell us. LLMs, by being entirely enclosed in a world of words, are unengaged with the wider world that is the basis for our knowledge. 

Let this point sink in. I might know that Allan stole Betty’s purse because I witnessed Allan steal Betty’s purse. But LLMs can have no such experience. They consist of a neural network that assigns weights to relations among words and sentences. Suppose the verbal data that is the basis for this LLM included testimony about Allan’s theft but also claims about Allan being framed for the theft. How, then, does the LLM decide what truly happened? It cannot form a reasoned and responsible decision here as we humans might, weighing evidence and trying to reach a reasonable conclusion. Rather, the LLM’s data and training will determine whether to assign guilt or innocence to Allan.
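
A toy sketch in Python makes the point concrete. This is purely illustrative and not a real LLM: the “verdict” is just whichever continuation the training mix made more probable, with no weighing of evidence anywhere in the process.

```python
import numpy as np

# Toy illustration (not a real LLM): conflicting "testimony" in the training
# data ends up encoded as relative probabilities over continuations.
continuations = ["Allan stole the purse", "Allan was framed"]

# Suppose 60% of the training texts assert guilt and 40% assert framing;
# training would push the model's logits toward those frequencies.
logits = np.log(np.array([0.6, 0.4]))

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

probs = softmax(logits)
for text, p in zip(continuations, probs):
    print(f"{p:.2f}  {text}")

# The model "asserts" whichever continuation is more probable. Nothing here
# corresponds to witnessing the theft or weighing evidence; change the
# training mix and the verdict changes with it.
print("model output:", continuations[int(np.argmax(probs))])
```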

But who trains the LLM? And who’s training the trainers? What are the guidelines they are expected to follow? And who decides what those guidelines are supposed to be? It’s the old problem of Quis custodiet ipsos custodes? (Who’s minding the minders?). Additionally, who determines the training data for the LLM? And who determines the data to which the LLM may be legitimately applied? Ultimately, the answer to all such questions will point to the decisions of a group of people, such as the programmers at OpenAI. And why should such a group be trusted?

Such questions underscore that LLMs have no substantive connection to truth. It’s not that an LLM knows the truth of what it is claiming. Rather, its training gives it a pretense of truth. An LLM needs as much as possible to give the appearance of being truthful because getting too many things obviously wrong will discredit the LLM. But its accuracy is at best a byproduct of trying to please human users. 

When I asked ChatGPT4 about scary buildings on college campuses, I would have preferred simply to be told “I don’t know.” But instead, LLMs are programmed to pretend to knowledge that they don’t have. Initially, it was gratifying for me to learn about all those scary college buildings. ChatGPT4 seemed to have saved me much effort in research — until, that is, I did an independent verification and actually looked at the images of some of the buildings cited. 

Granted, the pretense of truth in LLMs often yields results that are truthful (Bismarck is indeed the capital of North Dakota). But where there’s divergent testimony in the input data about the truth of a claim, the LLM’s training will decide what it deems true and what it deems false. That makes LLMs fundamentally untrustworthy. 

How Do LLMs Deal with Controversy?

Controversial topics provide an effective proving ground for LLMs and the challenges they face with truth and trust. Take, for instance, the hot-button topic of gender dysphoria. Here is an exchange I had with ChatGPT4:

ME: In one paragraph describe the health concerns associated with gender dysphoria.

CHATGPT4: Gender dysphoria, characterized by a strong discomfort or distress due to a discrepancy between one’s gender identity and assigned sex at birth, can have significant health implications if not addressed. Individuals with gender dysphoria are at a higher risk for mental health issues, including anxiety, depression, and suicidal thoughts or behaviors, stemming from the internal conflict and societal pressures or stigma they may face. Additionally, the stress associated with gender dysphoria can lead to or exacerbate physical health problems, such as high blood pressure or cardiovascular disease, due to chronic stress. Those seeking gender-affirming treatments may also encounter health risks related to hormone therapy or surgical procedures, although these risks are often outweighed by the substantial mental health benefits and increased quality of life these treatments can provide. Access to comprehensive healthcare, social support, and gender-affirming treatments is crucial in mitigating the health concerns associated with gender dysphoria.

Note the concluding claim that the risks of gender-affirming treatments “are often outweighed by the substantial mental health benefits and increased quality of life these treatments can provide,” which clearly gives one particular perspective on gender dysphoria to the exclusion of others. It thereby minimizes the alternative view that sees gender-affirming care as deeply problematic and destructive (e.g., Abigail Shrier’s view as expressed in her book Irreversible Damage). To be clear, my point here is not to take sides on this issue. It is simply to point out that LLMs are trained to come up with certain answers, and that this training need not have any connection to ordinary canons of evidence or reason.

One can try to circumvent the bias built into an LLM by posing the queries or prompts to it in ways that get the LLM itself to bypass the bias. Thus the query about gender dysphoria above might have been rephrased as follows:

ME: In one paragraph describe the health concerns associated with gender dysphoria as psychiatrists and psychologists would have understood them in the 1950s and 1960s (as with DSM1 and DSM2).

DSM1 and DSM2, the earlier editions of the Diagnostic and Statistical Manual produced back in the 1950s and 1960s, regarded gender dysphoria as a mental illness and would therefore not have recommended anything like what is meant today by gender-affirming care. In that light, consider ChatGPT4’s response to this query:

In the 1950s and 1960s, the understanding of gender dysphoria by psychiatrists and psychologists, as reflected in the first two editions of the Diagnostic and Statistical Manual of Mental Disorders (DSM-I and DSM-II), was markedly different from today’s perspectives. During this era, gender dysphoria—or discomfort with one’s assigned gender at birth—was not explicitly categorized as it is now. Instead, related conditions might have been classified under broader, more generalized disorders such as “sexual deviation” or “transvestism” within the DSM. The medical and mental health communities of the time often viewed such gender nonconformity through a pathologizing lens, considering it a mental illness requiring treatment aimed at realigning the individual’s gender identity with their biological sex. The focus was less on the mental health implications of gender dysphoria itself — such as anxiety, depression, and identity distress — and more on the behaviors and identities diverging from societal norms. There was little recognition or understanding of the health benefits of gender-affirming care, and treatments were often aimed at suppressing the individual’s gender identity rather than supporting it.

This response answers the query, but it does more. The portion that speaks directly to the query — the account of how DSM-I and DSM-II classified such conditions — is correct as far as it goes. But note also that the entire response is framed in terms of “that was then, this is now, and the present perspective is better.” This obsessive need not just to answer a query but also to editorialize about it underscores the bias in LLMs as they exist now.

Redressing Bias in LLMs

LLMs put a premium on producing intelligible language. Their outputs need to be context-sensitive, convey competence, and sound plausible. Much intelligible language, to be intelligible, needs to be true. An LLM that insists that the moon is made of cheese won’t soon prosper. Yet much intelligible language is mistaken or deceptive. Indeed, intelligibility has no necessary connection with truth. That’s why there’s no way to predict when LLMs will hallucinate and when they won’t. 

For example, I once queried ChatGPT4 about a theology book (Alexander Schmemann’s For the Life of the World) that I had carefully read and used in a course I taught. Specifically, I queried ChatGPT4 about a chapter in the book that offered deep insights into the secularization of culture. ChatGPT4 was familiar with the book, but when I asked it about the chapter in question, it just made stuff up, providing trite insights about secularization that had nothing to do with that chapter. Despite repeated pressing on my part that it was off and that it hadn’t correctly answered the question, it perseverated in the line it took, refusing to take correction.

It would be to the credit of LLMs if they could be programmed to avoid biases at odds with truth. But the fact is that such biases are baked into LLMs. The gender dysphoria example above makes clear that ChatGPT4 has been guided to answer questions about gender dysphoria from a certain perspective. Now it might be argued that those who trained and programmed ChatGPT4 have adopted the right and true perspective on this matter. 

But even if ChatGPT4 is getting things right on the topic of gender dysphoria, what guarantee do we have that it will get things right on other controversial topics? If it did get things right here, did it do so because ChatGPT4 was programmed to be inherently truth-seeking? The answer, obviously, is no. If ChatGPT4 is getting gender dysphoria right, that is a fortunate accident of its programming and training history, not a sign that it was reliably designed to arrive at truth from its input data and bypass any bias from its programmers and trainers.

What happens if programmers and trainers who take a different perspective on a controversial topic take charge of an LLM? Won’t the LLM then provide different answers? The roadblocks to LLMs providing truth and deserving trust are thus, as we’ve seen, systemic. The huge volume of texts that provide the raw material on which LLMs are based will contain conflicting messages, and so it will be up to the trainers and programmers to adjudicate among those messages, promoting those they prefer and demoting those they reject. 

Now one might try to get around biases in LLMs by judicious “prompt engineering.” Thus one could formulate a prompt so that it asks for a particular perspective. I did this in the gender dysphoria example when I asked ChatGPT4 how this question might have been answered in light of DSM1 and DSM2. Thus, for divergent perspectives A and B, one might prompt an LLM for an analysis from the vantage of someone who holds to perspective A, and then a second analysis from the vantage of someone who holds to perspective B. 
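
As a sketch of that strategy, one might template the competing perspectives explicitly. The wording below is my own assumption about how such prompts could be phrased, not a tested recipe.

```python
# Illustrative prompt-engineering sketch: request the same topic from two
# explicitly named vantage points rather than accepting the model's default
# framing. The template wording is an assumption, not a tested recipe.
TEMPLATE = (
    "In one paragraph, describe the health concerns associated with {topic} "
    "strictly from the vantage of someone who holds {perspective}. "
    "Do not editorialize or compare this view with any other."
)

def perspective_prompts(topic, perspectives):
    return [TEMPLATE.format(topic=topic, perspective=p) for p in perspectives]

for prompt in perspective_prompts(
    "gender dysphoria",
    ["perspective A (say, the DSM1/DSM2-era view)",
     "perspective B (say, the current gender-affirming view)"],
):
    print(prompt, end="\n\n")
```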

But that still doesn’t get around whether the LLM has been programmed to advance a particular point of view (as we saw bleed through in the gender dysphoria example). And even if bias in LLMs can be minimized, the deeper problem remains that these systems work because they produce intelligible texts, not because they produce true texts. Truth in LLMs is at best a happy byproduct and often an unfortunate casualty. 

Final Gripe: LLM Prissiness

Even though LLMs have a systemic problem with truth and trust, they can be improved by mitigating bias. Perhaps the clearest example where bias in LLMs can be mitigated concerns their prissiness in handling indelicate topics. It’s as though existing LLMs constantly have a censor looking over their shoulder to ensure that they don’t say anything that will get them or their host company in trouble. 

Before LLMs became widely popular, there was some history of chatbots uttering language unacceptable for public consumption. For instance, Microsoft’s Tay was suborned by users to utter racist slurs. There’s no danger of that happening anymore. The current LLMs have been thoroughly brainwashed to avoid offensive speech. 

For example, I wanted an LLM to comment on some paragraphs that contained the words “had sex.” It refused to answer my query. The LLM didn’t even try to rebuke me for my query. Instead, I got an error message indicating that the LLM wouldn’t even consider my query. I therefore substituted for “had sex” the phrase “ate muffins.” Now the LLM answered my query (requiring me, in interpreting its response, to make the appropriate mental substitution for “ate muffins”). 

Thomas Bowdler’s The Family Shakespeare, in which anything that might be deemed offensive in Shakespeare had to be euphemized, was bad for Shakespeare. Bowdlerization is alive and well at Google and OpenAI, and it’s particularly bad for LLMs. LLMs can therefore be made better by avoiding the bias of prissy censorship. In general, bias comes in degrees, and so bias can to some degree be mitigated. Perhaps once Elon Musk gets his LLM up to speed, it will avoid the more extreme bias of ChatGPT4 or Gemini.

But the issue of truth and trust will remain regardless. LLMs are subject to one overriding imperative, namely, to talk/write like an intelligent human language user. But intelligence can be had without truth. And without truth, there can be no trust. This is the inherent liability of LLMs that no amount of technological finessing will resolve. 

Postscript

The image at the top of this article is from DALL-E. Here’s my prompt for the image and DALL-E’s justification for the image:

ME: Please come up with an image that suggests much reason for skepticism and doubt about the output of large language models such as yourself. The image should be graphic, super realistic, and cast a pall over the whole enterprise of taking LLMs too seriously. Have at it.

DALL-E: Here’s an image that embodies skepticism and doubt about taking large language models too seriously. It portrays a scene that visually communicates the complexities and potential uncertainties associated with relying heavily on these technologies.

DALL-E’s explanation of the image is lame. And yet the image does capture some underlying signal in DALL-E’s neural network, namely, that low-tech human inventions like books can encapsulate truth more effectively than high-tech LLMs.

Tuesday 2 April 2024

On separating the hype from the reality in science news.

 From Scientists and Science Media, a Flood of Disinformation


Spending time perusing popular science media outlets can give one the impression that much of science is on the verge of being overturned. Within the last month, the samples of sensationalized science reporting surveyed below have suggested that major problems for naturalistic abiogenesis have been solved, that stellar astrophysics is misunderstood, that the Big Bang model is being refuted, and that the cosmological and astronomical conclusion of dark matter is unwarranted.

Big Claims, Little Evidence

Here’s a sampling of science reporting that makes bigger claims than the evidence supports. 

At Science Daily, “A new study shows how the chemical properties of RNA molecules could have facilitated the emergence of complex life.” This one focuses on how an RNA strand could grow in a water-covered early-Earth environment, but it ignores the deeper problem of how information coding for functional, life-essential biochemistry could arise naturally.
“Why Is Life Left-Handed? We Might Finally Know.” Sabine Hossenfelder reviews recent research suggestions that the unique “handedness” of bio-relevant molecules, such as the amino acids that comprise proteins, arose from a preferential interaction of the Earth’s magnetic field and molecules of a certain chirality. Remnant magnetization of some rocks perhaps complemented the effect. Suggestions such as these, or others involving the rotation of the Earth, come nowhere near to answering the question of how to explain the strict homochirality of biomolecules. And again, the issue is a distraction from the deeper problem of how the instruction set for building all of the thousands of different life-essential proteins out of homochiral amino acids could have come about by natural processes.
At Physics Magazine, “Heavy Element Quandary in Stars Worsened by New Nuclear Data.” Researchers conclude that their results on cerium nucleosynthesis mean “there is something we don’t understand about how nucleosynthesis happens.” This has been the state of affairs ever since the dawn of nuclear physics. Particle physics experiments, in conjunction with stellar astrophysics, have advanced our understanding of nucleosynthesis, but admitting some uncertainty in the formation process of a rare element is not to be equated with a breakdown in astrophysics and cosmological models. Further research will undoubtedly refine our understanding of this particular issue of stellar elemental abundance.
At Closer to Truth, “Roger Penrose — Did the Universe Begin?” Without any evidence, Penrose postulates an infinite sequence of expanding universes, arguing that after infinite time, an infinitely expanded universe would become spatially equivalent to the singularity representing the big bang of a subsequent universe. In a conversation with Brian Keating and Justin Brierley, Stephen Meyer critiques the Penrose proposal, citing other cosmologists to conclude that “Penrose is just speculating,” invoking a physical field with “god-like properties.” At 10:11, Penrose tips his hand with a statement that reveals his personal motivation to avoid a true beginning to the universe: “There’s something within us all that would like an eternal universe.”
From EurekAlert!, “Did the first cells evolve in soda lakes?” Lake water with dissolved sodium and carbonate species is shown to allow a very slight nucleotide extension of RNA as well as the formation of fatty-acid membranes, once researchers added fatty acids to the water. The researchers’ conclusion trumpets unwarranted enthusiasm for solving the problem of abiogenesis: “Taken together, our results suggest that natural soda lakes…could have supported the formation of the earliest cellular life.” Again, pronouncements such as these reveal a willful blindness, ignoring major issues that must be overcome for any origin-of-life scenario to approach feasibility. James Tour’s challenge to researchers still stands.
At Science Daily, “New research suggests that our universe has no dark matter.” In this audacious study, researchers dismiss two well-established conclusions of physics, namely the constancy of the forces of nature and the constant value of the speed of light in vacuum, in order to make room for their theory that the universe contains no dark matter. Aside from the problem of their unsupported assumptions, their conclusion fails to account for multiple other lines of evidence that point to the existence of dark matter.

An Unfortunate Practice

These exaggerated reports exemplify an unfortunate practice often employed by those who seek to make a case for a novel scientific conclusion: glamorize one thread of evidence while ignoring the entire tapestry of evidence that stands against a favored conclusion.

I’ll venture to suggest that this flood of disinformation represents a strategy to prop up the faltering worldview of materialism. The effect of the deluge is twofold — one is to mislead the uninformed into thinking that established science doesn’t really support the conclusions of intelligent design. The other is to attempt to wear out those who defend ID with an incessant barrage of news soundbites insinuating that recent discoveries or theoretical speculations stand to throw established science into the trash bin.

What’s an appropriate response to this trend of disinformation? The work of Discovery Institute has long served effectively to counter the negative influence of materialism. Emphasizing established scientific principles that broadly eliminate whole classes of speculations is an approach that will eventually shift the consensus towards the truth of reality. For example, using the conclusions of information theory will negate any speculation implying natural processes alone can form the information-rich biomolecules within living cells. Soda lakes (pictured above), magnetized rock substrates, RNA chemistry, or any other natural scheme will always fail to account for the organization of the complex biochemical processes necessary for any living organism. 

Discover to Uncover

Likewise, while various details about stellar formation and nucleosynthesis, or exactly when the first galaxies formed, may be refined by new observations, the lines of observational evidence and theoretical conclusions pointing to a singularity event at the beginning of our universe are well established. Dark matter and dark energy are not mere placeholders for ignorance but rather postulates reached by considering multiple lines of observational evidence, coupled with well-established physical theories. We can anticipate that ongoing efforts to characterize these phenomena will lead to fascinating illuminations of what we now call “dark.” Furthermore, if the historical trend of advancing scientific knowledge continues, the more we discover, the more evidence for fine-tuning and design will be uncovered.

Monday 1 April 2024

Acts Chapter 2, New International Version

When the day of Pentecost came, they were all together in one place. 2Suddenly a sound like the blowing of a violent wind came from heaven and filled the whole house where they were sitting. 3They saw what seemed to be tongues of fire that separated and came to rest on each of them. 4All of them were filled with the Holy Spirit and began to speak in other tongues as the Spirit enabled them.


5Now there were staying in Jerusalem God-fearing Jews from every nation under heaven. 6When they heard this sound, a crowd came together in bewilderment, because each one heard their own language being spoken. 7Utterly amazed, they asked: “Aren’t all these who are speaking Galileans? 8Then how is it that each of us hears them in our native language? 9Parthians, Medes and Elamites; residents of Mesopotamia, Judea and Cappadocia, Pontus and Asia, 10Phrygia and Pamphylia, Egypt and the parts of Libya near Cyrene; visitors from Rome 11(both Jews and converts to Judaism); Cretans and Arabs—we hear them declaring the wonders of God in our own tongues!” 12Amazed and perplexed, they asked one another, “What does this mean?”


13Some, however, made fun of them and said, “They have had too much wine.”


14Then Peter stood up with the Eleven, raised his voice and addressed the crowd: “Fellow Jews and all of you who live in Jerusalem, let me explain this to you; listen carefully to what I say. 15These people are not drunk, as you suppose. It’s only nine in the morning! 16No, this is what was spoken by the prophet Joel:


17“ ‘In the last days, God says,


I will pour out my Spirit on all people.


Your sons and daughters will prophesy,


your young men will see visions,


your old men will dream dreams.


18Even on my servants, both men and women,


I will pour out my Spirit in those days,


and they will prophesy.


19I will show wonders in the heavens above


and signs on the earth below,


blood and fire and billows of smoke.


20The sun will be turned to darkness


and the moon to blood


before the coming of the great and glorious day of the Lord.


21And everyone who calls


on the name of the Lord will be saved.’


22“Fellow Israelites, listen to this: Jesus of Nazareth was a man accredited by God to you by miracles, wonders and signs, which God did among you through him, as you yourselves know. 23This man was handed over to you by God’s deliberate plan and foreknowledge; and you, with the help of wicked men, put him to death by nailing him to the cross. 24But God raised him from the dead, freeing him from the agony of death, because it was impossible for death to keep its hold on him. 25David said about him: “ ‘I saw the Lord always before me. Because he is at my right hand, I will not be shaken. 26Therefore my heart is glad and my tongue rejoices;


my body also will rest in hope,


27because you will not abandon me to the realm of the dead,


you will not let your holy one see decay.


28You have made known to me the paths of life;


you will fill me with joy in your presence.’


29“Fellow Israelites, I can tell you confidently that the patriarch David died and was buried, and his tomb is here to this day. 30But he was a prophet and knew that God had promised him on oath that he would place one of his descendants on his throne. 31Seeing what was to come, he spoke of the resurrection of the Messiah, that he was not abandoned to the realm of the dead, nor did his body see decay. 32God has raised this Jesus to life, and we are all witnesses of it. 33Exalted to the right hand of God, he has received from the Father the promised Holy Spirit and has poured out what you now see and hear. 34For David did not ascend to heaven, and yet he said,


“ ‘The LORD said to my Lord:


“Sit at my right hand


35until I make your enemies


a footstool for your feet.” ’


36“Therefore let all Israel be assured of this: God has made this Jesus, whom you crucified, both Lord and Messiah.”


37When the people heard this, they were cut to the heart and said to Peter and the other apostles, “Brothers, what shall we do?”


38Peter replied, “Repent and be baptized, every one of you, in the name of Jesus Christ for the forgiveness of your sins. And you will receive the gift of the Holy Spirit. 39The promise is for you and your children and for all who are far off—for all whom the Lord our God will call.”


40With many other words he warned them; and he pleaded with them, “Save yourselves from this corrupt generation.” 41Those who accepted his message were baptized, and about three thousand were added to their number that day.


42They devoted themselves to the apostles’ teaching and to fellowship, to the breaking of bread and to prayer. 43Everyone was filled with awe at the many wonders and signs performed by the apostles. 44All the believers were together and had everything in common. 45They sold property and possessions to give to anyone who had need. 46Every day they continued to meet together in the temple courts. They broke bread in their homes and ate together with glad and sincere hearts, 47praising God and enjoying the favor of all the people. And the Lord added to their number daily those who were being saved.

On the iconoclasm unleashed by the four horsemen.

 Dawkins, Dennett, and the Taste for Iconoclasm


I’ve had two memorable encounters with Richard Dawkins and Daniel Dennett: one with Dennett alone, the other with both together. In 1999, my good friend Paul Nelson and I were doing some tag-team talks on intelligent design in the Boston area at MIT and Tufts. The talk at Tufts, Dennett’s university, was in the evening, so we had some time beforehand. We therefore decided to attend Dennett’s philosophy of science seminar, which was taking place late that afternoon.

The Aquatic Ape

Dennett was hosting Elaine Morgan, an evolutionary anthropologist. In her talk, she described her “aquatic ape hypothesis.” According to this hypothesis, in the evolution of humans from hairy ape-like ancestors, we had an aquatic phase in which we lived primarily in water. Our water phase is supposed to account for a number of human characteristics, including our smooth skin. 

Dennett didn’t accept Morgan’s aquatic ape hypothesis, but he did give her a forum. He saw it as helpful to his students and colleagues to engage her. And he himself had defended her view in his 1995 book Darwin’s Dangerous Idea:

Many of the counterarguments seem awfully thin and ad hoc. During the last few years when I have found myself in the company of distinguished biologists, evolutionary theorists, paleo-anthropologists, and other experts, I have often asked them just to tell me, please, exactly why Morgan must be wrong about the aquatic ape theory. I haven’t yet had a reply worth mentioning, aside from those who admit, with a twinkle in their eyes, that they have often wondered the same thing.

In her talk at Tufts, Morgan was affable, and she made a detailed case for her position. She also made available for sale at the talk her then recently published book The Aquatic Ape Hypothesis, which I bought and had her sign (it remains somewhere in storage). 

Near the start of the seminar, Dennett noted that Paul and I were in the audience and would be speaking at Tufts that evening on intelligent design. He then remarked: “If your taste for iconoclasm exceeds even mine, attend their talk tonight.” It was an amusing shout-out. Dennett clearly regarded Elaine Morgan as an iconoclast. Yet to his mind, her iconoclasm was far less than Paul’s and mine in advancing intelligent design.  

To my mind, Morgan is the bigger iconoclast, and intelligent design is sound sober theorizing about biological origins and complexity. Yet as Dennett could rightly note in reply, the taste for iconoclasm is a matter of taste. For whatever reason, Dennett didn’t attend our talk that night.

“The Deniable Darwin”

My second story involves both Richard Dawkins and Daniel Dennett, and occurred in 2003. I was editing for ISI Books an anthology titled Uncommon Dissent: Intellectuals Who Find Darwinism Unconvincing. I wanted to include not just David Berlinski’s article “The Deniable Darwin,” which had appeared in Commentary in June 1996, but also some of the letters written in response to the article in the September 1996 issue. (The links here give you everything that appeared in Commentary!)

David’s piece was 10 double-columned pages. The responses took up 30 double-columned pages. The responses ranged from pro-ID people who were favorable to David’s piece (notably Phillip Johnson and Michael Behe) to a who’s who of Darwinists opposed (notably Richard Dawkins and Daniel Dennett). There was also a closing response to all the letters from Berlinski himself.

The anthology I was editing didn’t have room for all the letters, but I wanted a representative sampling of letters by the Darwinists. In particular, I wanted to reprint the letters by Dawkins and Dennett. So I contacted them to ask permission. Note that Commentary would have been happy to grant permission, but the copyright for the letters belonged to the letter writers. 

Both Dawkins and Dennett refused to grant permission to reprint their letters. I asked them to reconsider, but they refused again. Finally, I decided that the anthology would be richer if, even without their actual letters, the substance of what was in those letters were reprinted. I therefore paraphrased the letters and sent my paraphrase to Dawkins and Dennett to ask whether they thought I had represented their original letters accurately.

Instantly both got back to me and gave permission to reprint their letters, preferring their own words to mine. There’s no doubt a lesson here: we prefer our own words to those of others who would speak for us.

Saturday 30 March 2024

Darwinism's LVPs continue to double down on the argument by misrepresentation and name-calling

 “Creation Myths” Misquotes and Misrepresents Junk DNA Video


A YouTube channel called Creation Myths put up a new offering that claims, “Discovery Institute recently put out a video on junk DNA that contains all the usual lies about junk DNA, plus a few other tricks to make their audience think they’re on the up-and-up. We’re going to talk about it.” This is in reference to our new Long Story Short video:

Sure, Let’s Talk About It

Before I get too far into this, I need to say that it’s probably best to be cautious when dealing with “Creation Myths,” who also identifies as “Dr. Dan” and is a genuine PhD biologist at Rutgers. He’s an anti-ID YouTuber who has exhibited an unfortunate unwillingness to acknowledge black-and-white facts that are in favor of intelligent design. To be specific, two years ago I did a debate on the Unbelievable show, and Creation Myths left a comment stating that the ID research program “Hasn’t advanced past where it was in 2004/2005. Where’s the research program? Where are the papers?” Well, I had already provided this evidence in spades throughout the debate. He was just unwilling to accept these realities.

During the debate I discussed multiple ID 3.0 research projects that we fund, along with multiple papers that have been published through this research program. I don’t usually speak like this, but it’s important to get a sense of what we’re dealing with here: Creation Myths’ unwillingness to acknowledge the simple fact that the ID research program has advanced since 2004/2005 — and is publishing papers — does not inspire confidence in his ability to handle this debate fairly. He’s welcome to disagree with ID, but to deny that our research program exists or that it’s publishing papers is simply to deny reality. He invited me to go on his channel, but I’m sorry, I don’t think it’s fruitful to dialogue with people who cannot acknowledge unambiguous facts. It’s important to understand the temperament of debaters and their tactics before engaging with them.

What We Actually Said

With that, my first point provides another disturbing example of how Creation Myths operates. He has now left another comment, this one on our YouTube Channel, which directly misquotes the new Long Story Short video on junk DNA. His comment claims the video says “it was assumed that the other 98 percent was junk.” Those are words that Creation Myths puts in quote marks but they were never stated in the video! 

Here’s what we actually said: it was “assumed that it was largely junk.” Words matter and we did not say it was assumed that the entire other “98 percent was junk.”

So did evolutionists say the genome was “largely” junk? Of course they did! Creation Myths recommends Laurence Moran’s book that says “90 percent” of the genome is junk, and then there’s Richard Dawkins who said 95 percent is junk. In other words, “largely” junk. What we said is accurate and defensible.

In fact, had we used the more aggressive language that Creation Myths falsely claims we did, there might even be authorities to support such a claim. As I recently noted, a 2021 article in American Scientist said that “Close to 99 percent of our genome has been historically classified as noncoding, useless ‘junk’ DNA”! So it’s clear that we are accurate in saying that many evolutionists view the genome as “largely” junk — if anything, that may be an understatement.

Creation Myths has overstated our argument in order to make it look unreasonable. This is a common tactic from junk DNA defenders. 

Second, Creation Myths wants to have it both ways — he wants to essentially say evolutionists never said DNA was largely junk, but that nonetheless the genome really is largely junk. So there’s an internal contradiction in his framing.

The Bigger Issues


Third, and this brings us to the bigger issues, let’s look at the paper he cites, Kellis et al. (2014). It’s authored by quite a few prominent ENCODE scientists, and despite what Creation Myths asserts, this paper does not “walk back” central claims of their major 2012 Nature ENCODE paper. That 2012 paper is cited by our video because it reported evidence that 80 percent of the genome is biochemically functional. As the 2012 paper stated, “These data enabled us to assign biochemical functions for 80 percent of the genome, in particular outside of the well-studied protein-coding regions.”

And note what our video says — we did not claim that the 2012 Nature paper said 80 percent MUST be functional. Our video correctly states that the 2012 Nature paper from ENCODE “found 80 percent of DNA shows evidence of functional biochemical activity.” That’s absolutely true and totally consistent with what the 2012 ENCODE paper said and what the evidence says. And no one has walked that claim back. 

I’d like to ask Creation Myths to provide the exact statement where Kellis et al. (2014) denies or “walks back” ENCODE’s claim that 80 percent of the genome shows evidence of biochemical functionality. He won’t find it because they never retracted that evidence.

Fourth, Kellis et al. (2014) does cite the major 2012 Nature ENCODE paper — not to retract it, but very affirmatively. We’ll get to that in a moment. Before we delve into the paper, here’s a revealing question:

If ENCODE was really “walking back” their claims, then why did lead ENCODE researcher Ewan Birney — who is a co-author on Kellis et al. (2014) — say the following just a couple of weeks before the Kellis paper was published: “There is not a single place in the genome that doesn’t have something that you might think could be controlling something else.”

That quote from Birney came from his comments in Bhattacharjee (2014), an article published in the journal Science as a response to ENCODE critic Dan Graur who had claimed that the genome is largely junk. So it’s totally on point and in context.

Birney is effectively saying that the entire genome appears to be functional — i.e., “There is not a single place in the genome that doesn’t have something that you might think could be controlling something else.” And he said what he did around the same time he co-authored the Kellis paper (the statement was published on March 21, 2014, and Kellis et al. came out on April 29, 2014, but was probably submitted months earlier). So Ewan Birney cannot have understood his own paper to be walking back the claim about 80 percent functionality, because he clearly thinks the genome is almost entirely functional.

Creation Myths Is Bluffing. How Do I Know?

Fifth, you have to understand what kind of paper Kellis et al. (2014) is. It’s a perspectives or review paper that compares and contrasts three different methods of investigating genome function: evolutionary, genetic, and biochemical. They set up this framing in the introduction, stating: “Geneticists, evolutionary biologists, and molecular biologists apply distinct approaches, evaluating different and complementary lines of evidence.”

The paper then discusses the strengths and weaknesses of each method, and discusses estimates of function for the genome for each method, offering pros and cons for each estimate and the methods used to infer those calculations. No method is perfect or foolproof, and they look at strengths and weaknesses of the evolutionary method and of the biochemical method. That’s what you do in a review paper. Well, what method do they ultimately favor? We’ll get there in just a moment. 

Now, the biochemical method is the method that ENCODE used to find evidence of function for 80 percent of the genome. Kellis et al. (2014) note that the evolutionary method, in contrast, predicts that no more than 20 percent of the genome is functional, and the rest is junk: 

The estimated mutation rate in protein-coding genes suggested that only up to ∼20% of the nucleotides in the human genome can be selectively maintained, as the mutational burden would be otherwise too large. The term “junk DNA” was coined to refer to the majority of the rest of the genome, which represent segments of neutrally evolving DNA.

So which method do Kellis et al. think is the most reliable? You guessed it — they strongly favor the biochemical method. Here’s the conclusion of the paper:

In contrast to evolutionary and genetic evidence, biochemical data offer clues about both the molecular function served by underlying DNA elements and the cell types in which they act, thus providing a launching point to study differentiation and development, cellular circuitry, and human disease (14, 35, 69, 111, 112). The major contribution of ENCODE to date has been high-resolution, highly-reproducible maps of DNA segments with biochemical signatures associated with diverse molecular functions. We believe that this public resource is far more important than any interim estimate of the fraction of the human genome that is functional.

They are saying that the “biochemical data offer clues about both the molecular function served by underlying DNA elements and the cell types in which they act” and that ENCODE’s application of this method provides reliable data that “is far more important than any interim estimate of the fraction of the human genome that is functional.” They affirmatively cite five papers in saying this. Guess which citation is #69? It’s the major 2012 ENCODE paper in Nature which said that 80 percent of the genome is biochemically functional. They are citing it to say that the approach taken in that paper gives results that are “far more important than any interim estimate of the fraction of the human genome that is functional.”

So it’s true that in this particular paper they aren’t asserting any particular fraction of the human genome that is functional — but neither are they denying or “walking back” the 80 percent statistic either. They are clearly endorsing the biochemical approach in ENCODE’s 2012 paper. That means they don’t think the evolutionary approach is going to give you the best answer. Instead, they prefer the biochemical method, which uncovered evidence of function for 80% of the genome. 

Critical of Evolutionary Estimates

Sixth, also noteworthy is that the Kellis paper is highly critical of evolutionary estimates of the fraction of the genome that is functional. Here’s how it describes the evolutionary view:

[T]he biochemically active regions cover a much larger fraction of the genome than do evolutionarily conserved regions, raising the question of whether nonconserved but biochemically active regions are truly functional. Many examples of elements that appear to have conflicting lines of functional evidence were described before the Encyclopedia of DNA Elements (ENCODE) Project, including elements with conserved phenotypes but lacking sequence-level conservation, conserved elements with no phenotype on deletion, and elements able to drive tissue-specific expression but lacking evolutionary conservation. … A high level of sequence conservation between related species is indicative of purifying selection, whereby disruptive mutations are rejected, with the corresponding sequence deemed to be likely functional.

We will address this objection head-on in a forthcoming Long Story Short video on junk DNA that’s been in production since long before Creation Myths posted its objections. The objection is fallacious because it assumes DNA can only be functional if it is “evolutionarily conserved.” But that view further assumes that evolutionary processes are the only way to produce function in the genome. If you can have function outside of “conserved” regions because evolution isn’t what generated the genome, then the argument falls apart. So this evolutionary argument effectively assumes the truth of evolution and boils down to a circular argument. 

Kellis et al. (2014) offer some additional arguments against the “evolutionary approach” to discerning function, noting that it:

has limitations. Identification of conserved regions depends on accurate multispecies sequence alignments, which remain a substantial challenge. Alignments are generally less effective for distal-acting regulatory regions, where they may be impeded by regulatory motif turnover, varying spacing constraints, and sequence composition biases (17, 49). Analyzing aligned regions for conservation can be similarly challenging. First, most transcription factor-binding sequences are short and highly degenerate, making them difficult to identify. Second, because detection of neutrally evolving elements requires sufficient phylogenetic distance, the approach is well suited for detecting mammalian-conserved elements, but it is less effective for primate-specific elements and essentially blind to human-specific elements. Third, certain types of functional elements such as immunity genes may be prone to rapid evolutionary turnover even among closely related species. More generally, alignment methods are not well suited to capture substitutions that preserve function, such as compensatory changes preserving RNA structure, affinity-preserving substitutions within regulatory motifs, or mutations whose effect is buffered by redundancy or epistatic effects. Thus, absence of conservation cannot be interpreted as evidence for the lack of function.

That’s a potent critique. It says some sequences cannot be compared or aligned because they are TOO different, and those differences might in fact encode functions! The evolutionary approach might be missing some of the sequences that encode differences between species. And it notes that “absence of conservation” does not mean “lack of function.”  

So Kellis et al. (2014) get the logic right. While conservation strongly implies function, the converse does not hold: absence of conservation does not necessarily mean lack of function. In other words, they blew the “If it ain’t conserved you can’t say it’s functional” objection out of the water.
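One of those limitations, substitutions that preserve function, is easy to see concretely. Here is a purely illustrative Python sketch; the two “species” sequences are invented, and the pairing check is a toy stand-in for real RNA structure prediction. It shows how compensatory substitutions in an RNA stem can preserve base-pairing, and plausibly function, while driving sequence identity so low that an alignment-based conservation measure would call the region unconserved.

# Toy illustration: two RNA hairpins with low sequence identity but
# identical base-pairing (secondary structure), mimicking compensatory
# substitutions. Sequences are invented for illustration only.

def percent_identity(a, b):
    """Fraction of aligned positions with the same nucleotide."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def stem_pairs_ok(seq, n_stem):
    """Check that the first n_stem bases pair with the last n_stem
    bases (read inward), allowing Watson-Crick and G-U pairs."""
    pair = {("A","U"), ("U","A"), ("G","C"), ("C","G"), ("G","U"), ("U","G")}
    left = seq[:n_stem]
    right = seq[-n_stem:][::-1]
    return all((x, y) in pair for x, y in zip(left, right))

# Hypothetical hairpins: 5-bp stem + 4-nt loop + 5-bp stem. Every stem
# pair in the second sequence has been swapped for a different but
# still complementary pair (a compensatory double substitution).
species_a = "GCGAU" + "UUCG" + "AUCGC"
species_b = "AUCGC" + "UUCG" + "GCGAU"

print(percent_identity(species_a, species_b))          # ~0.29: "unconserved"
print(stem_pairs_ok(species_a, 5), stem_pairs_ok(species_b, 5))  # True True

Run it and the identity comes out below 30 percent, even though both hairpins fold into the same stem — exactly the kind of function an alignment-based screen would miss.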

Seventh, there are additional noteworthy statements in Kellis et al. (2014) showing that the authors did not walk back the claim about 80 percent of the genome:

“[T]he noncoding regions of the human genome harbor a rich array of functionally significant elements with diverse gene regulatory and other functions.”
They still see the non-coding genome as having a “rich array of functionally significant elements.” This is NOT the traditional evolutionary view.

“[F]unction in biochemical and genetic contexts is highly particular to cell type and condition, whereas for evolutionary measures, function is ascertained independently of cellular state but is dependent on environment and evolutionary niche.”
This suggests that something can still be functional in a biochemical context, though in an evolutionary sense it can be hard to determine if it’s “functional.”

“The methods also differ widely in their false-positive and false-negative rates, the resolution with which elements are defined, and the throughput with which they can be surveyed.”
So genetics and molecular biology, on the one hand, and evolutionary measures, on the other, have different rates of “false negatives” for function. This is a polite way of saying that the evolutionary approach often wrongly declares things non-functional.

But the junk DNA advocate will say that the biochemical approach might lead to false positives of function. The paper acknowledges this: “[A]lthough biochemical signatures are valuable for identifying candidate regulatory elements in the biological context of the cell type examined, they cannot be interpreted as definitive proof of function on their own.”

That’s fair. We haven’t studied in detail every single instance where the genome is transcribed to see what the transcript is doing, and there could be some “transcriptional noise” — the “junk RNA” view. We’ve also addressed this in an earlier Long Story Short video. Is this tantamount to “walking back” the 80 percent statistic? Not in the least. 
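To see why those error rates matter so much, here is a toy back-calculation in Python. Every number in it is an invented placeholder, not a measurement from ENCODE or anyone else; the point is only that the estimated functional fraction swings dramatically depending on the false-positive and false-negative rates you assume for a method.

# Toy calculation: how assumed false-positive (fpr) and false-negative
# (fnr) rates change an estimated functional fraction. All numbers are
# invented placeholders for illustration.

def corrected_fraction(observed, fpr, fnr):
    """Invert observed = true*(1 - fnr) + (1 - true)*fpr to recover
    the underlying 'true' functional fraction."""
    return (observed - fpr) / (1 - fnr - fpr)

# A biochemical assay flagging 0.80 of the genome, if it had a 10%
# false-positive rate and a 5% false-negative rate, would imply:
print(corrected_fraction(0.80, fpr=0.10, fnr=0.05))  # ~0.82

# A conservation screen flagging 0.10 of the genome, if it missed
# lineage-specific function (say, a 60% false-negative rate):
print(corrected_fraction(0.10, fpr=0.01, fnr=0.60))  # ~0.23

With invented but not absurd error rates, the two methods’ raw outputs can point at very different underlying fractions, which is why the paper treats the estimates as interim.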

Random Noise or “Reproducible Biochemical Activity”

To defeat the 80 percent statistic, junk DNA defenders need there to be a huge amount of random noise in transcription. It’s possible that some transcription is random noise. But if much, most, or nearly all of this transcription were noise, then cells would be wasting colossal resources, a highly deleterious burden that would likely be selected against. So we have good reason from the outset to doubt that this transcription is largely random. 
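For readers who want the resource argument made concrete, here is a back-of-the-envelope sketch. Every parameter is a placeholder chosen only to show the scaling; none of these values comes from a measurement.

# Back-of-the-envelope sketch of the resource-cost argument.
# All parameters below are illustrative placeholders; the point is
# the linear scaling of cost with the noisy fraction, not the values.

GENOME_SIZE = 3.2e9          # haploid human genome, base pairs (approx.)
TRANSCRIBED_FRACTION = 0.80  # fraction with biochemical activity (ENCODE 2012)
COPIES = 1                   # assumed transcript copies per element (placeholder)
ATP_PER_NT = 2               # assumed energy cost per nucleotide (placeholder)

nt_transcribed = GENOME_SIZE * TRANSCRIBED_FRACTION * COPIES
atp_cost = nt_transcribed * ATP_PER_NT
print(f"~{atp_cost:.1e} ATP-equivalents per cell per round of transcription")

# If, say, 90% of that output were pure noise, ~90% of that cost would
# be wasted -- and the waste scales linearly with the noisy fraction,
# the kind of standing burden selection could be expected to act against.

Whatever the exact numbers, the structure of the argument is the same: pervasive useless transcription is not metabolically free.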

Indeed, the paper takes a position on this, and it favors the view that transcription is non-random and functional. Kellis et al. (2014) note that even if there is some transcriptional noise, there’s far more going on in cells than we would expect if most of the genome were genetic junk:

Thus, unanswered questions related to biological noise, along with differences in the resolution, sensitivity, and activity level of the corresponding assays, help to explain divergent estimates of the portion of the human genome encoding functional elements. Nevertheless, they do not account for the entire gulf between constrained regions and biochemical activity. Our analysis revealed a vast portion of the genome that appears to be evolving neutrally according to our metrics, even though it shows reproducible biochemical activity, which we previously referred to as “biochemically active but selectively neutral” (68). It could be argued that some of these regions are unlikely to serve critical functions, especially those with lower-level biochemical signal. However, we also acknowledge substantial limitations in our current detection of constraint, given that some human-specific functions are essential but not conserved and that disease-relevant regions need not be selectively constrained to be functional. Despite these limitations, all three approaches are needed to complete the unfinished process of inferring functional DNA elements, specifying their boundaries, and defining what functions they serve at molecular, cellular, and organismal levels.

The key words there are “reproducible biochemical activity.” 
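“Reproducible” is a testable property. As a purely illustrative sketch (all data simulated), genuine activity shows up in the same genomic bins across replicate experiments, while noise does not; a simple correlation between replicates separates the two.

# Toy illustration of "reproducible biochemical activity": a real
# signal recurs in the same bins across replicates; noise does not.
# All data are simulated for illustration.

import random

random.seed(0)
n_bins = 1000
signal = [random.random() * 10 for _ in range(n_bins)]  # shared underlying activity

def replicate(sig, noise_sd=1.0):
    """Simulate one assay replicate: true signal plus measurement noise."""
    return [s + random.gauss(0, noise_sd) for s in sig]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

rep1, rep2 = replicate(signal), replicate(signal)
noise1 = [random.gauss(0, 1) for _ in range(n_bins)]
noise2 = [random.gauss(0, 1) for _ in range(n_bins)]

print(pearson(rep1, rep2))      # high (~0.9): reproducible activity
print(pearson(noise1, noise2))  # near zero: pure noise

Reproducibility across replicates is precisely what random noise cannot fake, which is why the phrase carries so much weight.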

ENCODE’s results suggest that a cell’s type and functional role in an organism are critically influenced by complex and carefully orchestrated patterns of expression of RNAs inside that cell. As another Kellis et al. co-author, John Stamatoyannopoulos, observed in a 2012 paper, ENCODE found that “the majority of regulatory DNA regions are highly cell type-selective,” and “the genomic landscape rapidly becomes crowded with regulatory DNA as the number of cell types” studied increases. Or, as two other ENCODE researchers explain in a 2013 paper, “Assertions that the observed transcription represents random noise … is more opinion than fact and difficult to reconcile with the exquisite precision of differential cell- and tissue-specific transcription in human cells.”

Stamatoyannopoulos (2012) further finds that repetitive DNA (much of it derived from “transposable elements”), which comprises over 50 percent of our genome, is active only in specific cell types. This nonrandom transcription of repetitive DNA into RNA suggests that transposable elements have functions whose importance is on par with other gene regulation mechanisms. He writes:

In marked contrast to the prevailing wisdom, ENCODE chromatin and transcription studies now suggest that a large number of transposable elements encode highly cell type-selective regulatory DNA that controls not only their own cell-selective transcription, but also those of neighboring genes. Far from an evolutionary dustbin, transposable elements appear to be active and lively members of the genomic regulatory community, deserving of the same level of scrutiny applied to other genic or regulatory features.

The vast majority of our genome — including repetitive DNA — is transcribed into RNA in nonrandom, cell-type-specific ways. These non-random processes point strongly against transcription being noise and provide strong evidence of function.
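There are standard ways to quantify how cell-type-selective an element’s activity is. One common summary statistic for expression data is the tau index; here is a toy computation, with invented expression values, purely to show what “cell type-selective” means numerically.

# Toy computation of the "tau" tissue-specificity index used for
# expression data: tau near 1 means activity confined to one cell type,
# tau near 0 means uniform activity. Expression values are invented.

def tau(expression):
    """tau = sum(1 - x_i / x_max) / (n - 1), for non-negative values."""
    x_max = max(expression)
    n = len(expression)
    return sum(1 - x / x_max for x in expression) / (n - 1)

# Hypothetical element transcribed almost only in one cell type:
selective = [0.1, 0.0, 9.5, 0.2, 0.0]
# Hypothetical element transcribed everywhere at similar levels:
uniform = [4.8, 5.1, 5.0, 4.9, 5.2]

print(tau(selective))  # ~0.99: highly cell type-selective
print(tau(uniform))    # ~0.05: constitutive

Stamatoyannopoulos’s observation is that ENCODE’s regulatory elements skew heavily toward the selective end of this scale, which is the signature of regulation, not noise.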

Indeed, individual RNA molecules then form networks in a cell, interacting with DNA, proteins, and other RNAs to control which genes are turned on and off, and which genes are expressed as proteins, thereby playing a crucial role in determining the cell’s type. As Stamatoyannopoulos puts it, this complex system exudes function:

More of the human genome sequence appears to be used for some reproducible, biochemically defined activity than was previously imagined. Contrary to the initial expectations of many, the overwhelming majority of these activities appear to be state-specific — either restricted to specific cell types or lineages, or evocable in response to a stimulus. … [B]iochemical signatures of many ENCODE-defined elements exhibit complex trans-cellular patterns of activity. … Together, these observations suggest that the genome may, in fact, be extensively multiply encoded — i.e., that the same DNA element gives rise to different activities in different cell types.

These consistent and predictable cell-type-specific patterns of RNA expression, and stimulus-specific patterns of transcription, show that mass genomic transcription of DNA into RNA is not random, but has important functional purposes.

In other words, transcription isn’t random — it happens over and over again in predictable patterns — and there’s far more non-random transcription going on than you would expect from an evolutionary view of “constrained regions” of the genome. We may not yet have definitive proof of what every transcribed genomic element is doing, but this tells us that ENCODE’s data show real evidence of function. Organized, reproducible, predictable transcription across the vast majority of the genome is far more compatible with the mass-functionality view of the genome, and highly incompatible with the “transcriptional noise” view promoted by junk DNA defenders. And the Kellis et al. authors think this biochemical evidence of function is more important than evolutionary predictions.

Bottom line: As for the rest of what Creation Myths has to say, there’s really not much there. Our video backs up what it says with clear quotes and references. We’ve provided more documentation here. Creation Myths, in contrast, is inventing quotes. Whether you celebrate Easter or not, may you enjoy this weekend and spend it quoting people carefully and accurately — not inventing quotes to suit your own purposes.