
Friday 1 March 2024

We know it when we see it?

 Intuitive Specified Complexity: A User-Friendly Account


Even though this series is titled “Specified Complexity Made Simple,” there’s a limit to how much the concept of specified complexity may be simplified before it can no longer be adequately defined or explained. Accordingly, specified complexity, even when made simple, will still require the introduction of some basic mathematics, such as exponents and logarithms, as well as an informal discussion of information theory, especially Shannon and Kolmogorov information. I’ll get to that in the subsequent posts. 

At this early stage in the discussion, however, it seems wise to lay out specified complexity in a convenient non-technical way. That way, readers lacking mathematical and technical facility will still be able to grasp the gist of specified complexity. Here, I’ll present an intuitively accessible account of specified complexity. Just as all English speakers are familiar with the concept of prose even if they’ve never thought about how it differs from poetry, so too we are all familiar with specified complexity even if we haven’t carefully defined it or provided a precise formal mathematical account of it. 

In this post I’ll present a user-friendly account of specified complexity by means of intuitively compelling examples. Even though non-technical readers may be inclined to skip the rest of this series, I would nonetheless encourage all readers to dip into the subsequent posts, if only to persuade themselves that specified complexity has a sound rigorous basis to back up its underlying intuition. 

To Get the Ball Rolling…

Let’s consider an example by YouTube personality Dave Farina, known popularly as “Professor Dave.” In arguing against the use of small probability arguments to challenge Darwinian evolutionary theory, Farina offers the following example:

Let’s say 10 people are having a get-together, and they are curious as to what everyone’s birthday is. They go down the line. One person says June 13th, another says November 21st, and so forth. Each of them have a 1 in 365 chance of having that particular birthday. So, what is the probability that those 10 people in that room would have those 10 birthdays? Well, it’s 1 in 365 to the 10th power, or 1 in 4.2 times 10 to the 25, which is 42 trillion trillion. The odds are unthinkable, and yet there they are sitting in that room. So how can this be? Well, everyone has to have a birthday.

Farina’s use of the term “unthinkable” brings to mind Vizzini in The Princess Bride. Vizzini keeps uttering the word “inconceivable” in reaction to a man in black (Westley) steadily gaining ground on him and his henchmen. Finally, his fellow henchman Inigo Montoya remarks, “You keep using that word — I do not think it means what you think it means.”

Similarly, in contrast to Farina, an improbability of 1 in 42 trillion trillion is in fact quite thinkable. Right now you can do even better than this level of improbability. Get out a fair coin and toss it 100 times. That’ll take you a few minutes. You’ll witness an event unique in the history of coin tossing and one having a probability of 1 in 10 to the 30, or 1 in a million trillion trillion. 
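For readers who want to check the arithmetic, here is a minimal Python sketch of the two improbabilities just discussed (the numbers are taken straight from the examples above):

```python
# Quick check of the two improbabilities discussed above.

# Ten independent birthdays, each with a 1-in-365 chance (ignoring leap years):
birthday_prob = (1 / 365) ** 10
print(f"Ten specific birthdays: about 1 in {1 / birthday_prob:.2e}")    # roughly 4.2e+25

# A specific sequence of 100 fair coin tosses:
coin_prob = (1 / 2) ** 100
print(f"A specific 100-toss sequence: about 1 in {1 / coin_prob:.2e}")  # roughly 1.3e+30
```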

The reason Farina’s improbability is quite thinkable is that the event to which it is tied is unspecified. As he puts it, “One person says June 13th, another says November 21st, and so forth.” The “and so forth” here is a giveaway that the event is unspecified. 

But now consider a variant of Farina’s example: Imagine that each of his ten people confirmed that his or her birthday was January 1. The probability would in this case again be 1 in 42 trillion trillion. But what’s different now is that the event is specified. How is it specified? It is specified in virtue of having a very short description, namely, “Everyone here was born New Year’s Day.” 

Nothing Surprising Here

The complexity in specified complexity refers to probability: the greater the complexity, the smaller the probability. There is a precise information-theoretic basis for this connection between probability and complexity that we’ll examine in the next post. Accordingly, because the joint probability of any ten birthdays is quite low, their complexity will be quite high. 

For things to get interesting with birthdays, complexity needs to be combined with specification. A specification is a salient pattern that we should not expect a highly complex event to match simply by chance. Clearly, a large group of people that all share the same birthday did not come together by chance. But what exactly is it that makes a pattern salient so that, in the presence of complexity, it becomes an instance of specified complexity and thereby defeats chance? 

That’s the whole point of specified complexity. Sheer complexity, as Farina’s example shows, cannot defeat chance. So too, the absence of complexity cannot defeat chance. For instance, if we learn that a single individual has a birthday on January 1, we wouldn’t regard anything as amiss. That event is simple, not complex, in the sense of probability. Leaving aside leap years and seasonal effects on birth rates, 1 out of 365 people will, on average, have a birthday on January 1. With a worldwide population of 8 billion people, many people will have that birthday. 

Not by Chance

But a group of exactly 10 people all in the same room all having a birthday of January 1 is a different matter. We would not ascribe such a coincidence to chance. But why? Because the event is not just complex but also specified. And what makes a complex event also specified — or conforming to a specification — is that it has a short description. In fact, we define specifications as patterns with short descriptions.

Such a definition may seem counterintuitive, but it actually makes good sense of how we eliminate chance in practice. The fact is, any event (and by extension any object or structure produced by an event) is describable if we allow ourselves a long enough description. Any event, however improbable, can therefore be described. But most improbable events can’t be described simply. Improbable events with simple descriptions draw our attention and prod us to look for explanations other than chance.

Take Mount Rushmore. It could be described in detail as follows: for each cubic micrometer in a large cube that encloses the entire monument, register whether it contains rock or is empty of rock (treating partially filled cubic micrometers, let us stipulate, as empty). Mount Rushmore can be enclosed in a cube of under 50,000 cubic meters. Moreover, each cubic meter contains a million trillion cubic micrometers. Accordingly, 50 billion trillion filled-or-empty cells could describe Mount Rushmore in detail. Thinking of each filled-or-empty cell as a bit then yields 50 billion trillion bits of information. That’s more information than contained in the entire World Wide Web (there are currently 2 billion websites globally). 
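The cell count above is simple arithmetic; here is a short sketch of it (the 50,000-cubic-meter bounding volume is the stipulation made in the paragraph above):

```python
# Brute-force description of Mount Rushmore: one bit per cubic micrometer.
bounding_volume_m3 = 50_000        # bounding cube volume stipulated above, in cubic meters
cells_per_m3 = (10 ** 6) ** 3      # 1 m = 10^6 micrometers, so 10^18 cubic micrometers per cubic meter
total_bits = bounding_volume_m3 * cells_per_m3
print(f"{total_bits:.1e} filled-or-empty cells")   # 5.0e+22, i.e. 50 billion trillion bits
```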

But of course, nobody attempts to describe Mount Rushmore that way. Instead, we describe it succinctly as “a giant rock formation that depicts the U.S. Presidents George Washington, Thomas Jefferson, Abraham Lincoln, and Theodore Roosevelt.” That’s a short description. At the same time, any rock formation the size of Mount Rushmore will be highly improbable or complex. Mount Rushmore is therefore both complex and specified. That’s why, even if we knew nothing about the history of Mount Rushmore’s construction, we would refuse to attribute it to the forces of chance (such as wind and erosion) and instead attribute it to design.

Take the Game of Poker

Consider a few more examples in this vein. There are 2,598,960 distinct possible poker hands, and so the probability of any poker hand is 1/2,598,960. Consider now two short descriptions, namely, “royal flush” and “single pair.” These descriptions have roughly the same description length. Yet there are only 4 ways of getting a royal flush and 1,098,240 ways of getting a single pair. This means the probability of getting a royal flush is 4/2,598,960 = .00000154 but the probability of getting a single pair is 1,098,240/2,598,960 = .423. A royal flush is therefore much more improbable than a single pair.
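Those counts can be reproduced with elementary combinatorics; here is a brief sketch using Python’s math.comb:

```python
from math import comb

total_hands = comb(52, 5)      # 2,598,960 distinct five-card hands

royal_flushes = 4              # one royal flush per suit

# Single pair: pick the paired rank, two of its four suits, then three other
# ranks, each supplying one card in any of four suits.
single_pairs = 13 * comb(4, 2) * comb(12, 3) * 4 ** 3   # 1,098,240

print(royal_flushes / total_hands)   # about 0.00000154
print(single_pairs / total_hands)    # about 0.423
```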

Suppose now that you are playing a game of poker and you come across these two hands, namely, a royal flush and a single pair. Which are you more apt to attribute to chance? Which are you more apt to attribute to cheating, and therefore to design? Clearly, a single pair would, by itself, not cause you to question chance. It is specified in virtue of its short description. But because it is highly probable, and therefore not complex, it would not count as an instance of specified complexity. 

Witnessing a royal flush, however, would elicit suspicion, if not an outright accusation of cheating (and therefore of design). Of course, given the sheer amount of poker played throughout the world, royal flushes will now and then appear by chance. But what raises suspicion that a given instance of a royal flush may not be the result of chance is its short description (a property it shares with “single pair”) combined with its complexity/improbability (a property it does not share with “single pair”). 

Let’s consider one further example, which seems to have become a favorite among readers of the recently released second edition of The Design Inference. In the chapter on specification, my co-author Winston Ewert and I consider a famous scene in the film The Empire Strikes Back, which we then contrast with a similar scene from another film that parodies it. Quoting from the chapter:

Darth Vader tells Luke Skywalker, “No, I am your father,” revealing himself to be Luke’s father. This is a short description of their relationship, and the relationship is surprising, at least in part because the relationship can be so briefly described. In contrast, consider the following line uttered by Dark Helmet to Lone Starr in Spaceballs, the Mel Brooks parody of Star Wars: “I am your father’s brother’s nephew’s cousin’s former roommate.” The point of the joke is that the relationship is so complicated and contrived, and requires such a long description, that it evokes no suspicion and calls for no special explanation. With everybody on the planet connected by no more than “six degrees of separation,” some long description like this is bound to identify anyone.

In a universe of countless people, Darth Vader meeting Luke Skywalker is highly improbable or complex. Moreover, their relation of father to son, by being briefly described, is also specified. Their meeting therefore exhibits specified complexity and cannot be ascribed to chance. Dark Helmet meeting Lone Starr may likewise be highly improbable or complex. But given the convoluted description of their past relationship, their meeting represents an instance of unspecified complexity. If their meeting is due to design, it is for reasons other than their past relationship.

How Short Is Short Enough?

Before we move to a more formal treatment of specified complexity, we would do well to ask how short is short enough for a description to count as a specification. How short should a description be so that, combined with complexity, it produces specified complexity? As it is, in the formal treatment of specified complexity, complexity and description length are both converted to bits, and then specified complexity can be defined as the difference of bits (the bits denoting complexity minus the bits denoting specification).
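As a rough illustration of that difference-of-bits idea, here is a minimal sketch. The description costs used here are illustrative stand-ins, not the formal measure developed later in the series, so only the comparison between the two cases matters, not the exact values:

```python
from math import log2

def specified_complexity(prob, description_bits):
    """Difference-of-bits sketch: complexity (-log2 of probability) minus description length."""
    return -log2(prob) - description_bits

# Illustrative cost of 30 bits for the short description
# "Everyone here was born New Year's Day" (the exact figure is a placeholder).
ten_new_years = (1 / 365) ** 10
print(specified_complexity(ten_new_years, 30))   # about 55 bits: complex and specified

# One person born on January 1: specified, but far too probable to be complex.
print(specified_complexity(1 / 365, 30))         # about -21 bits: no specified complexity
```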

When specified complexity is applied informally, however, we may calculate a probability (or associated complexity) but we usually don’t calculate a description length. Rather, as with the Star Wars/Spaceballs example, we make an intuitive judgment that one description is short and natural, the other long and contrived. Such intuitive judgments have, as we will see, a formal underpinning, but in practice we let ourselves be guided by intuitive specified complexity, treating it as a convincing way to distinguish merely improbable events from those that require further scrutiny.  

The other prisoner of conscience? III

 

James Tour wants to see a manager re:OOL Research

 Apparently there is a problem with his prebiotic soup.

There is information and then there is Information?

 Shannon and Kolmogorov Information


The first edition of my book The Design Inference as well as its sequel, No Free Lunch, set the stage for defining a precise information-theoretic measure of specified complexity — which is the subject of this series. There was, however, still more work to be done to clarify the concept. In both these books, specified complexity was treated as a combination of improbability or complexity on the one hand and specification on the other. 

As presented back then, it was an oil-and-vinegar combination, with complexity and specification treated as two different types of things exhibiting no clear commonality. Neither book therefore formulated specified complexity as a unified information measure. Still, the key ideas for such a measure were in those earlier books. Here, I review those key information-theoretic ideas. In the next section, I’ll join them into a unified whole.

Let’s Start with Complexity

As noted earlier, there’s a deep connection between probability and complexity. This connection is made clear in Shannon’s theory of information. In this theory, probabilities are converted to bits. To see how this works, consider tossing a coin 100 times, which yields an event of probability 1 in 2^100 (the caret symbol here denotes exponentiation). But that number also corresponds to 100 bits of information since it takes 100 bits to characterize any sequence of 100 coin tosses (think of 1 standing for heads and 0 for tails). 

In general, any probability p corresponds to –log(p) bits of information, where the logarithm here and elsewhere in this article is to the base 2 (as needed to convert probabilities to bits). Think of a logarithm as an exponent: it’s the exponent to which you need to raise the base (here always 2) in order to get the number to which the logarithmic function is applied. Thus, for instance, a probability of p = 1/10 corresponds to an information measure of –log(1/10) ≈ 3.322 bits (or equivalently, 2^(–3.322) ≈ 1/10). Such fractional bits allow for a precise correspondence between probability and information measures.

The complexity in specified complexity is therefore Shannon information. Claude Shannon (1916–2001) introduced this idea of information in the 1940s to understand signal transmissions (mainly of bits, but also for other character sequences) across communication channels. The longer the sequence of bits transmitted, the greater the information and therefore its complexity. 

Because of noise along any communication channel, the greater the complexity of a signal, the greater the chance of its distortion and thus the greater the need for suitable coding and error correction in transmitting the signal. So the complexity of the bit string being transmitted became an important idea within Shannon’s theory. 

Shannon’s information measure is readily extended to any event E with a probability P(E). We then define the Shannon information of E as –log(P(E)) = I(E). Note that the minus sign is there to ensure that as the probability of E goes down, the information associated with E goes up. This is as it should be. Information is invariably associated with the narrowing of possibilities. The more those possibilities are narrowed, the more the probabilities associated with those possibilities decrease, but correspondingly the more the information associated with those narrowing possibilities increases. 

For instance, consider a sequence of ten tosses of a fair coin and consider two events, E and F. Let E denote the event where the first five of these ten tosses all land heads but where we don’t know the remaining tosses. Let F denote the event where all ten tosses land heads. Clearly, F narrows down the range of possibilities for these ten tosses more than E does. Because E is only based on the first five tosses, its probability is P(E) = 2^(–5) = 1/(2^5) = 1/32. On the other hand, because F is based on all ten tosses, its probability is P(F) = 2^(–10) = 1/(2^10) = 1/1,024. In this case, the Shannon information associated with E and F is respectively I(E) = 5 bits and I(F) = 10 bits. 
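The conversion from probabilities to bits is one line of code; here is a small sketch reproducing the numbers above:

```python
from math import log2

def shannon_info(p):
    """Shannon information of an event with probability p, in bits."""
    return -log2(p)

print(shannon_info(1 / 10))     # about 3.322 bits
print(shannon_info(1 / 32))     # 5.0 bits, event E (first five tosses heads)
print(shannon_info(1 / 1024))   # 10.0 bits, event F (all ten tosses heads)
```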

We Also Need Kolmogorov Complexity

Shannon information, however, is not enough to understand specified complexity. For that, we also need Kolmogorov information, or what is also called Kolmogorov complexity. Andrei Kolmogorov (1903–1987) was the greatest probabilist of the 20th century. In the 1960s he tried to make sense of what it means for a sequence of numbers to be random. To keep things simple, and without loss of generality, we’ll focus on sequences of bits (since any numbers or characters can be represented by combinations of bits). Note that we made the same simplifying assumption for Shannon information.

The problem Kolmogorov faced was that any sequence of bits treated as the result of tossing a fair coin was equally probable. For instance, any sequence of 100 coin tosses would have probability 1/(2^100), or 100 bits of Shannon information. And yet there seemed to Kolmogorov a vast difference between the following two sequences of 100 coin tosses (letting 0 denote tails and 1 denote heads):

0000000000000000000000000
0000000000000000000000000
0000000000000000000000000
0000000000000000000000000

and

1001101111101100100010011
0001010001010010101110001
0101100000101011000100110
1100110100011000000110001

The first just repeats the same coin toss 100 times. It appears anything but random. The second, on the other hand, exhibits no salient pattern and so appears random (I got it just now from an online random bit generator). But what do we mean by random here? Is it that the one sequence is the sort we should expect to see from coin tossing but the other isn’t? But in that case, probabilities tell us nothing about how to distinguish the two sequences because they both have the same small probability of occurring. 

Ideas in the Air

Kolmogorov’s brilliant stroke was to understand the randomness of these sequences not probabilistically but computationally. Interestingly, the ideas animating Kolmogorov were in the air in the mid 1960s. Both Ray Solomonoff and Gregory Chaitin (then only a teenager) independently came up with the same idea. Perhaps unfairly, Kolmogorov gets the lion’s share of the credit for characterizing randomness computationally. Most information-theory books (see, for instance, Cover and Thomas’s Elements of Information Theory), in discussing this approach to randomness, will therefore focus on Kolmogorov and put it under what is called Algorithmic Information Theory (AIT). 

Briefly, Kolmogorov’s approach to randomness is to say that a sequence of bits is random to the degree that it has no short computer program that generates it. Thus, the first sequence above is non-random since it has a very short program that generates it, such as a program that simply says “repeat ‘0’ 100 times.” On the other hand, there is no short program (so far as we can tell) that generates the second sequence. 

It is a combinatorial fact (i.e., a fact about the mathematics of counting or enumerating possibilities) that the vast majority of bit sequences cannot be characterized by any program shorter than the sequence itself. Obviously, any sequence can be characterized by a program that simply incorporates the entire sequence and then simply regurgitates it. But such a program fails to compress the sequence. The non-random sequences, by having programs shorter than the sequences themselves, are thus those that are compressible. The first of the sequences above is compressible. The second, for all we know, isn’t.
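The contrast can be made concrete with a toy sketch. The point is only that the first sequence has a generating expression far shorter than the data, whereas for the second (so far as we can tell) nothing beats quoting it outright:

```python
# A very short "program" suffices to generate the first sequence:
seq1 = "0" * 100                       # in effect: repeat '0' 100 times

# For the second, the best we can apparently do is write the sequence out in full:
seq2 = ("1001101111101100100010011"
        "0001010001010010101110001"
        "0101100000101011000100110"
        "1100110100011000000110001")

print(len(seq1), len(seq2))            # 100 100: same length, very different compressibility
```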

Kolmogorov’s information (also known as Kolmogorov complexity) is a computational theory because it focuses on identifying the shortest program that generates a given bit-string. Yet there is an irony here: it is rarely possible to say with certainty that a given bit string is truly random in the sense of having no program shorter than itself that generates it. From combinatorics, with its mathematical counting principles, we know that the vast majority of bit sequences must be random in Kolmogorov’s sense. That’s because the number of short programs is very limited and can only generate very few longer sequences. Most longer sequences will require longer programs. 

Our Common Experience

But if for an arbitrary bit sequence D we define K(D) as the length of the shortest program that generates D, it turns out that there is no computer program that calculates K(D). Simply put, the function K is non-computable. This fact from theoretical computer science matches up with our common experience that something may seem random for a time, and yet we can never be sure that it is random because we might discover a pattern clearly showing that the thing in fact isn’t random (think of an illusion that looks like a “random” inkblot only to reveal a human face on closer inspection). 

Yet even though K is non-computable, in practice it is a useful measure, especially for understanding non-randomness. Because of its non-computability, K doesn’t help us to identify particular non-compressible sequences, these being the random sequences. Even with K as a well-defined mathematical function, we can’t in most cases determine precise values for it. Nevertheless, K does help us with the compressible sequences, in which case we may be able to estimate it even if we can’t exactly calculate it. 

What typically happens in such cases is that we find a salient pattern in a sequence, which then enables us to show that it is compressible. To that end, we need a measure of the length of bit sequences as such. Thus, for any bit sequence D, we define |D| as its length (total number of bits). Because any sequence can be defined in terms of itself, |D| forms an upper bound on Kolmogorov complexity. Suppose now that through insight or ingenuity, we find a program that substantially compresses D. The length of that program, call it n, will then be considerably less than |D| — in other words, n < |D|. 

Although this program length n will be much shorter than D, it’s typically not possible to show that this program of length n is the very shortest program that generates D. But that’s okay. Given such a program of length n, we know that K(D) cannot be greater than n because K(D) measures the very shortest such program. Thus, by finding some short program of length n, we’ll know that K(D) ≤ n < |D|. In practice, it’s enough to come up with a short program of length n that’s substantially less than |D|. The number n will then form an upper bound for K(D). In practice, we use n as an estimate for K(D). Such an estimate, as we’ll see, ends up in applications being a conservative estimate of Kolmogorov complexity. 
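In practice, any off-the-shelf compressor can play the role of “a short program we happened to find.” The sketch below uses zlib purely as a stand-in: whatever compressed length it reports is a loose, conservative upper bound on K, never a proof of randomness:

```python
import random
import zlib

def compressed_bits(s: str) -> int:
    """Length in bits of a zlib compression of s: a rough, conservative upper bound
    on the description length the string actually needs."""
    return 8 * len(zlib.compress(s.encode(), 9))

repetitive = "0" * 100
patternless = "".join(random.choice("01") for _ in range(100))

# The repetitive sequence compresses far more than the patternless one;
# failing to compress the second proves nothing about its true randomness.
print(compressed_bits(repetitive), compressed_bits(patternless))
```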

1 Corinthians 4:7: The Watchtower Society's condensed commentary.

 



Wol.JW.org


Friday, March 1

Why do you boast?​—1 Cor. 4:7.


The apostle Peter urged his brothers to use whatever gifts and talents they had to build up their fellow believers. Peter wrote: “To the extent that each one has received a gift, use it in ministering to one another as fine stewards of God’s undeserved kindness.” (1 Pet. 4:10) We should not hold back from using our gifts to the fullest for fear that others may become jealous or get discouraged. But we must be careful that we do not boast about them. (1 Cor. 4:6) Let us remember that any natural abilities we may have are gifts from God. We should use those gifts to build up the congregation, not to promote ourselves. (Phil. 2:3) When we use our energy and abilities to do God’s will, we will have cause for rejoicing​—not because we are outdoing others or proving ourselves superior to them, but because we are using our gifts to bring praise to JEHOVAH. 

The Origin of Life remains darwinism's achilles heel?

 On the Origin of Life, a Measure of Intelligent Design’s Impact on Mainstream Science


Don’t let anyone tell you that intelligent design isn’t having an impact on the way mainstream scientists are thinking about problems like the origin of life (OOL). David Coppedge points out the “devastating assessment” of OOL that was just published in Nature, the world’s most prestigious science journal. The authors are Nick Lane and Joana Xavier. The latter is a chemist at Imperial College London. As Coppedge notes, she’s been frank in comments about intelligent design and specifically Stephen Meyer’s Signature in the Cell.

“One of the Best Books I’ve Read”

From a 2022 conversation with Perry Marshall:

But about intelligent design, let me tell you, Perry, I read Signature in the Cell by Stephen Meyer…And I must tell you, I found it one of the best books I’ve read, in terms of really putting the finger on the questions. What I didn’t like was the final answer, of course. But I actually tell everyone I can, “Listen, read that book. Let’s not put intelligent design on a spike and burn it. Let’s understand what they’re saying and engage.” It’s a really good book that really exposes a lot of the questions that people try to sweep under the carpet….I think we must have a more naturalistic answer to these processes. There must be. Otherwise, I’ll be out of a job.

That is a remarkable statement. Paul Nelson first noted it at Evolution News. 

Under the Carpet

Dr. Xavier rejects ID, which is fair enough, but recommends an ID book by Dr. Meyer to “everyone I can” because “it really exposes a lot of the questions that people try to sweep under the carpet.” In the book, Meyer finds that, in addressing the origin-of-life puzzle, all current materialist solutions fail. He has a politer way of saying what chemist James Tour does on the same subject.

So that’s September 2022. Now a year and a half later, Xavier is back in the pages of Nature exposing weaknesses in the OOL field as currently constituted. She still holds out for a “more naturalistic answer.” But do you think, in writing about those “questions that people try to sweep under the carpet,” she didn’t have Meyer’s book in the back of her mind? I’m no mind reader, but to me, the question seems self-answering.


Getting fraud down to a science? IV

 

Tuesday 27 February 2024

The odd couple? II

 Can Evolution and Intelligent Design Work Together in Harmony?


Or is that wishful thinking? On a new episode of ID the Future, host Casey Luskin concludes his conversation with philosopher Stephen Dilley about a recent proposal to marry mainstream evolutionary theory with a case for intelligent design. Dr. Dilley is lead author of a comprehensive critique of theologian Rope Kojonen’s model, co-authored with Luskin, Brian Miller, and Emily Reeves and published in the journal Religions.

In the second half of their discussion, Luskin and Dilley explain key scientific problems with Kojonen’s theistic evolutionary model. First up is Kojonen’s acceptance of both convergent evolution and common ancestry, two methods used by evolutionary biologists to explain common design features among different organisms. But if the design can be explained through natural processes, there is little need to invoke intelligent design. After all, the whole point of mainstream evolutionary theory is to render any need for design superfluous.

Dr. Dilley also explains why Kojonen’s model contradicts our natural intuition to detect design. If we look at a hummingbird under Kojonen’s proposal, we are still required to see unguided natural processes at work, the appearance of design without actual intelligent design. Yet we are also supposed to acknowledge that an intelligent designer front-loaded the evolutionary process with the creative power it needs to produce the hummingbird. So is it intelligently designed or isn’t it? The theist on the street is left scratching his or her head.

Download the podcast or listen to it here.

Monday 26 February 2024

On the Syriac Peshitta.

 The Syriac Peshitta—A Window on the World of Early Bible Translations


For nine days in 1892, the twin sisters Agnes Smith Lewis and Margaret Dunlop Gibson journeyed by camel through the desert to St. Catherine’s Monastery at the foot of Mount Sinai. Why would these two women in their late 40s undertake such a journey at a time when travel in what was called the Orient was so dangerous? The answer may help strengthen your belief in the accuracy of the Bible.

JUST before returning to heaven, Jesus commissioned his disciples to bear witness about him “in Jerusalem, in all Judea and Samaria, and to the most distant part of the earth.” (Acts 1:8) This the disciples did with zeal and courage. Their ministry in Jerusalem, however, soon stirred up strong opposition, resulting in the martyrdom of Stephen. Many of Jesus’ disciples found refuge in Antioch, Syria, one of the largest cities in the Roman Empire, some 350 miles (550 km) north of Jerusalem.—Acts 11:19.

In Antioch, the disciples continued to preach “the good news” about Jesus, and many non-Jews became believers. (Acts 11:20, 21) Though Greek was the common language within the walls of Antioch, outside its gates and in the province, the language of the people was Syriac.

THE GOOD NEWS TRANSLATED INTO SYRIAC

As the number of Syriac-speaking Christians increased in the second century, there arose a need for the good news to be translated into their tongue. Thus, it appears that Syriac, not Latin, was the first vernacular into which parts of the Christian Greek Scriptures were translated.

 By about 170 C.E., the Syrian writer Tatian (c. 120-173 C.E.) combined the four canonical Gospels and produced, in Greek or Syriac, the work commonly called the Diatessaron, a Greek word meaning “through [the] four [Gospels].” Later, Ephraem the Syrian (c. 310-373 C.E.) produced a commentary on the Diatessaron, thus confirming that it was in general use among Syrian Christians.

The Diatessaron is of great interest to us today. Why? In the 19th century, some scholars argued that the Gospels were written as late as the second century, between 130 C.E. and 170 C.E., and thus could not be authentic accounts of Jesus’ life. However, ancient manuscripts of the Diatessaron that have come to light since then have proved that the Gospels of Matthew, Mark, Luke, and John were already in wide circulation by the middle of the second century. They must therefore have been written earlier. In addition, since Tatian, when compiling the Diatessaron, did not make use of any of the so-called apocryphal gospels in the way he did the four accepted Gospels, it is evident that the apocryphal gospels were not viewed as reliable or canonical.

By the start of the fifth century, a translation of the Bible into Syriac came into general use in northern Mesopotamia. Likely made during the second or third century C.E., this translation included all the books of the Bible except 2 Peter, 2 and 3 John, Jude, and Revelation. It is known as the Peshitta, meaning “Simple” or “Clear.” The Peshitta is one of the oldest and most important witnesses to the early transmission of the Bible text.

Interestingly, one manuscript of the Peshitta has a written date corresponding to 459/460 C.E., making it the oldest Bible manuscript with a definite date. In about 508 C.E., a revision of the Peshitta was made that included the five missing books. It came to be known as the Philoxenian Version.


Syriac Peshitta of the Pentateuch, 464 C.E., the second-oldest dated manuscript of Bible text

Until the 19th century, almost all the known Greek copies of the Christian Greek Scriptures were from the fifth century or much later. For this reason, Bible scholars were especially interested in such early versions as the Latin Vulgate and the Syriac Peshitta. At the time, some believed that the Peshitta was the result of a revision of an older Syriac version. But no such text was known. Since the roots of the Syriac Bible go back to the second century, such a version would provide a window on the Bible text at an early stage, and it would surely be invaluable to Bible scholars! Was there really an old Syriac version? Would it be found?


The palimpsest called the Sinaitic Syriac. Visible in the margin is the underwriting of the Gospels

Yes, indeed! In fact, two such precious Syriac manuscripts were found. The first is a manuscript dating from the fifth century. It was among a large number of Syriac manuscripts acquired by the British Museum in 1842 from a monastery in the Nitrian Desert in Egypt. It was called the Curetonian Syriac because it was discovered and published by William Cureton, the museum’s assistant keeper of manuscripts. This precious document contains the four Gospels in the order of Matthew, Mark, John, and Luke.

The second manuscript that has survived to our day is the Sinaitic Syriac. Its discovery is linked with the adventurous twin sisters mentioned at the start of this article. Although Agnes did not have a university degree, she learned eight foreign languages, one of them Syriac. In 1892, Agnes made a remarkable discovery in the monastery of St. Catherine in Egypt.

There, in a dark closet, she found a Syriac manuscript. According to her own account, “it had a forbidding look, for it was very dirty, and its leaves were nearly all stuck together through their having remained unturned” for centuries. It was a palimpsest manuscript of which the original text had been erased and the pages rewritten with a Syriac text about female saints. However, Agnes spotted some of the writing underneath and the words “of Matthew,” “of Mark,” or “of Luke” at the top. What she had in her hands was an almost complete Syriac codex of the four Gospels! Scholars now believe that this codex was written in the late fourth century.

The Sinaitic Syriac is considered one of the most important Biblical manuscripts discovered, right along with such Greek manuscripts as the Codex Sinaiticus and the Codex Vaticanus. It is now generally believed that both the Curetonian and Sinaitic manuscripts are extant copies of the old Syriac Gospels dating from the late second or early third century.

“THE WORD OF OUR GOD ENDURES FOREVER”

Can these manuscripts be useful to Bible students today? Undoubtedly! Take as an example the so-called long conclusion of the Gospel of Mark, which in some Bibles follows Mark 16:8. It appears in the Greek Codex Alexandrinus of the fifth century, the Latin Vulgate, and elsewhere. However, the two authoritative fourth-century Greek manuscripts—Codex Sinaiticus and Codex Vaticanus—both end with Mark 16:8. The Sinaitic Syriac does not have this long conclusion either, adding further evidence that the long conclusion is a later addition and was not originally part of Mark’s Gospel.

Consider another example. In the 19th century, almost all Bible translations had a spurious Trinitarian addition at 1 John 5:7. However, this addition does not appear in the oldest Greek manuscripts. Neither does it appear in the Peshitta, thus proving that the addition at 1 John 5:7 is indeed a corruption of the Bible text.

Clearly, as promised, Jehovah God has preserved his Holy Word. In it we are given this assurance: “The green grass dries up, the blossom withers, but the word of our God endures forever.” (Isaiah 40:8; 1 Peter 1:25) The version known as the Peshitta plays a humble but important role in the accurate transmission of the Bible’s message to all of humanity.

The big questions remain as big as ever?

 A New Look at Three Deep Questions


Ron Coody’s new book, Almost? Persuaded! Why Three Great Questions Resist Certainty, delivers a wide-ranging discussion and analysis of questions, answers, and arguments keenly relevant to the intelligent design community. His background is far from one-dimensional and he has long been engaging people over issues of worldview, evidence, and belief.

With a bachelor’s degree in microbiology and a Master of Divinity followed by a PhD in missiology, Coody is well qualified to address the cutting edges of science, philosophy, and theology. Enhancing his perception of diverse ways of thinking about these questions is his decades-long experience of living and working cross-culturally.

Questions of Consequence

The primary questions addressed here are obviously of deep consequence: Does God exist? Where did life come from? and Is free will real? A refreshing aspect of Almost? Persuaded! is its objective coverage of the broad range of arguments surrounding these questions. 

Although I have been studying these questions for many years, I found as I read Almost? Persuaded! that Coody’s presentation easily held my attention. Moreover, the breadth of his analysis provided new insights and expanded my understanding of developments in history and philosophy.

A Helpful Compendium

On the first question, “Does God Exist?”, Coody’s analytical summary of key philosophers and intellectuals, from Plato to Aquinas to Dawkins, caught my attention. His highlighting of key ideas from over twenty influential thinkers makes for a helpful compendium.

A familiar-sounding argument for design is Coody’s summary of the fifth of Thomas Aquinas’s Five Ways from the 13th century:

Working backwards from human experience of designing and building, Aquinas reasoned that the ordered universe and the creatures inhabiting it exhibit properties of design. Design requires a designer….Aquinas thought that the universe needed an intelligent mind to bring it into order. He believed that physical laws lacked the power to organize complex, functioning systems. 

P. 34

Another unique and somewhat amusing contribution is the author’s contrasting of Richard Dawkins with the Apostle Paul on the evidential weight of nature.

As Coody reviews the standard evidence for the fine-tuning of the physical parameters of the universe to allow life to exist, his presentation is accurate and compelling. The Big Bang, Lawrence Krauss’s attempts to redefine the “nothingness” out of which the universe arose, Stephen Hawking’s blithe dismissal of the significance of the beginning with an invocation of gravity, and the counterpoint from Borde, Guth, and Vilenkin’s singularity theorem, are knit together in readable prose.

Encouragement for Curiosity

When it comes to the possibility of life forming itself naturally, again Coody gives an informative and insightful overview. Although, like the rest of us, he has his own convictions, he is willing to acknowledge the tension surrounding differing conclusions among those seeking to evaluate the evidence. He encourages the reader to persist in seeking answers: “Honest people of any faith or no faith should be interested in the truth.” (p. 164)

The final section provides an enlightening discussion of free will. Coody captures the major issues: “Is free will an illusion created by the brain? In reality do we have any more free will than our computer?….Is the mind the same as the brain or is the mind something spiritual?” (p. 180)

Delving into the implications of materialistic determinism, and even quantum uncertainty, Coody provides a fresh look at the subject. In an illustration that is beguilingly simple, he borrows from the classic fairy tale of Pinocchio. His summary cuts deeply into one of the major shortcomings of materialist thinking: “On their view of the world, there was never any difference between the wooden Pinocchio and the human Pinocchio. Both were simply animated, soulless, material objects.” (p. 191)

Readers of almost any background will find much here that informs, provokes deeper reflection, and provides refreshing and novel illustrations relevant to the discussion of some of life’s most enduring questions.

There is nothing simple about this beginning?

 Getting It Together: Tethers, Handshakes, and Multitaskers in the Cell


Running a cell requires coordination. How do molecules moving in the dark interior of a cell know how and when to connect? Protein tethers offer new clues, according to research at Philipps University in Marburg, Germany.

The ways that organelles and proteins connect at the right place and time are coming to light. One method is to encapsulate interacting molecules within compartments called condensates, droplets, and speckles. Like offices or cubicles where employees can talk without excess noise, these temporary spaces allow molecules to interact in peace (see “Caltech Finds Amazing Role for Noncoding DNA”). 

Another method for coordination of moving parts involves tethers. Certain molecular machines use “two hands” to bring other molecules or organelles together. Visualize a person taking a stranger’s hand and using her other hand to grasp a doorknob, leading the stranger to the place he needs to be. Many protein machines have a critical binding site for their targets, but these “dual affinity” tethering machines contain two different recognition sites on different domains that recognize separate targets needing to come together. Such multitasking machines are marvelously designed to promote fellowship for effective interactions in the cellular city.

A similar phenomenon has long been known in the translation of the genetic code. A set of molecules called aminoacyl-tRNA synthetases brings dissimilar molecules together. One synthetase feels the anticodon on its matching transfer RNA (tRNA) and then puts the corresponding amino acid on the opposite end. Like a language translator, each synthetase needs to know two languages — the DNA code and the protein code — to equip the tRNA with the correct amino acid. As the activated tRNA enters the ribosome, its anticodon base pairs with the complementary codon on the messenger RNA at one end, and its amino acid fits onto the growing polypeptide chain on the other end. This is a spectacular example of double duty, multitasking know-how. But is it the only one?

Another Example of Double Duty

A team of 15 researchers publishing in PLOS Biology under lead author Elena Bittner, also from Philipps University, and colleagues at Berkeley and Howard Hughes, has just reported a case of a multitasking machine that bridges dissimilar targets — in this case, peroxisomes with mitochondria or the endoplasmic reticulum (ER). It may not be the only case of “Proteins that carry dual targeting signals [that] can act as tethers between” organelles, they say:

Peroxisomes are organelles with crucial functions in oxidative metabolism. To correctly target to peroxisomes, proteins require specialized targeting signals. A mystery in the field is the sorting of proteins that carry a targeting signal for peroxisomes and as well as for other organelles, such as mitochondria or the endoplasmic reticulum (ER). Exploring several of these proteins in fungal model systems, we observed that they can act as tethers bridging organelles together to create contact sites. 

Take note that they found this in yeast, the simplest of eukaryotes.

We show that in Saccharomyces cerevisiae this mode of tethering involves the peroxisome import machinery, the ER–mitochondria encounter structure (ERMES) at mitochondria and the guided entry of tail-anchored proteins (GET) pathway at the ER. 

Why is this significant? 

Our findings introduce a previously unexplored concept of how dual affinity proteins can regulate organelle attachment and communication.

Previously unexplored: this sounds like a game changer. How does this “tethering” system work? After presenting the biochemistry demonstrating the dual-targeting capability, the team illustrates it with a simplified diagram in Figure 10 of their open-access paper. As usual, even in simplified form, the system involves numerous other factors. The upshot is described as follows:

We have found that distinct proteins with targeting signals for 2 organelles can affect proximity of these organelles. This conclusion is supported by the notion that different types of dual affinity proteins can act as contact-inducing proteins (Fig 10) … Although dual affinity proteins are a challenge for maintaining organelle identity, they are ideally suited to support organelle interactions by binding to targeting factors and membrane-bound translocation machinery of different organelles. Dually targeted proteins appear to concentrate in regions of organelle contact, which may coincide with regions of reduced identity.

Within the mitochondria, we already met TIM and TOM, the channel guards who check the credentials of proteins entering the organelle’s outer and inner membranes. (The authors note that these translocase proteins are “evolutionarily conserved.”) But outside the mitochondrion, proteins needing to enter or exit have to find their way to the guards. That’s where the “dual affinity proteins” operate. 

What Do the Tethers Look Like?

Ptc5 is one of these tethering proteins, one of many that “contain targeting signals for mitochondria and peroxisomes at opposite termini.” Its Peroxisome Targeting Signal (PTS) recognizes the peroxisome at one end, and its Mitochondrial Targeting Signal (MTS) recognizes TOM at the mitochondrial channel. Experimenting with mutant strains of this and associated proteins and chaperones, the researchers confirmed that Ptc5 does tether peroxisomes to mitochondria. Moreover, its activity is dependent on need. “In aggregate,” they write, “these data show that tethering via dual affinity proteins is a regulated process and depends on the metabolic state of the cell.” This implies the additional capability of sensing the fluctuating metabolic need.

The authors didn’t have much to say about evolution. As usual, what little they did say involved copious amounts of speculation.

While many peroxisomal membrane proteins can target peroxisomes without transitioning through the ER, several peroxisomal membrane proteins have evolved to be synthesized in vicinity to the ER and may translocate from it.

Other than TOM and TIM being “evolutionarily conserved,” that was all they had to offer Darwin.

A New Class of Activity Coordinators

What Bittner et al. have identified is probably the trigger for a paradigm shift concerning methods that cells use to get components together.

We conclude that dually targeted cargo includes a diverse and unexpected group of tethers, which are likely to maintain contact as long as they remain accessible for targeting factors at partner organelles. Coupling of protein and membrane trafficking is a common principle in the secretory pathway and it might also occur for peroxisomes at different contact sites.

And so, what lies ahead? Design proponents in biochemistry and molecular biology, play tetherball! Here is a potentially fruitful area for new discoveries.

How dually targeted proteins and their rerouting affect the flux of molecules other than proteins, e.g., membrane lipids remains a topic for future research. 




Sunday 25 February 2024

Getting fraud down to a science? III

 

Mind is absolutely over matter?

 

The odd couple?

 Can Evolution and Intelligent Design Be Happily Wedded?


On a new episode of ID the Future, host Casey Luskin kicks off a series of interviews responding to theologian Dr. Rope Kojonen’s proposal that front-loaded intelligent design and a full-blooded evolutionary process worked together in harmony to produce the diversity of life we find on Earth. Here, Dr. Luskin interviews Dr. Stephen Dilley, lead author of a comprehensive critique of Kojonen’s model, co-authored with Luskin, Brian Miller, and Emily Reeves and published in the journal Religions.

In the first half of the conversation, Luskin and Dilley describe Dr. Kojonen’s proposal in a nutshell, providing the philosophical framing needed to grasp Kojonen’s elegant but flawed argument. Kojonen’s idea is the ultimate front-loaded design model, allowing for evolutionary mechanisms to work themselves out, but within a careful and purposeful arrangement of finely tuned preconditions and laws of form. Seemingly, it’s the best of both worlds: empirically detectable design within a fully natural evolutionary process. 

But there’s a problem. The fine-tuning Kojonen claims is baked into evolutionary processes is actually not there. Functional protein sequences have been found to be exceedingly rare in sequence space, as well as isolated from one another. We don’t find evidence of fine-tuning within the mutation/selection mechanism. Instead, we find a process limited in its creative power that cannot have produced the complexity and information-rich innovation necessary to bring about life’s biological diversity. As Luskin puts it, “He [Kojonen] is arguing that God had to stack the deck in favor of evolution in order to get it to work.” It’s an interesting thesis, and Kojonen is serious and scholarly in his approach to the problem. But in the end, it fails on scientific grounds.

Download the podcast or listen to it here.

Yet more confirmation of the humanity of ancient humans.

 Burials Reveal Prehistoric Cultures Valued Children with Down Syndrome


We’ve all probably heard from one pundit or another that prehistoric humans discarded children with disabilities, just as animals might. Well, recently, researchers screened the DNA of 10,000 ancient humans (historic and prehistoric) for evidence of genetically detectable syndromes like Down syndrome. According to their report in Nature, “We find clear genetic evidence for six cases of trisomy 21 (Down syndrome) and one case of trisomy 18 (Edwards syndrome), and all cases are present in infant or perinatal burials.”

Clearly, people with significant genetic disorders could not expect a long life back then. But the researchers were surprised by the respect shown to the deceased children: “Notably, the care with which the burials were conducted, and the items found with these individuals indicate that ancient societies likely acknowledged these individuals with trisomy 18 and 21 as members of their communities, from the perspective of burial practice.”

The five prehistoric burials were all located within settlements and in some cases accompanied by special items such as colored bead necklaces, bronze rings or sea-shells. “These burials seem to show us that these individuals were cared for and appreciated as part of their ancient societies,” says [Adam] Rohrlach, the lead author of the study.

MAX PLANCK SOCIETY, “ANCIENT GENOMES REVEAL DOWN SYNDROME IN PAST SOCIETIES,” PHYS.ORG, FEBRUARY 20, 2024. THE PAPER IS OPEN ACCESS VIA A SHAREIT TOKEN.

Down syndrome (an extra whole or partial copy of the 21st chromosome, hence trisomy 21) is comparatively common (1/1,000 births). Edwards syndrome — three copies of chromosome 18 — occurs in 1/3,000 births.

Five of these burials of children with Down syndrome date to between 5,000 and 2,500 years before the present, in settled communities. An interesting feature is that the infants were buried inside houses:

“At the moment, we cannot say why we find so many cases at these sites,” says Roberto Risch, an archaeologist of the Universitat Autònoma de Barcelona working on intramural funerary rites, “but we know that they belonged to the few children who received the privilege to be buried inside the houses after death. This already is a hint that they were perceived as special babies.”

MAX PLANCK SOCIETY, “IN PAST SOCIETIES“

“A Surprise to Us”

In an article at The Conversation, researchers Adam “Ben” Rohrlach and Kay Prüfer comment,

The fact that three cases of Down syndrome and the one case of Edwards syndrome were found in just two contemporaneous and nearby settlements was a surprise to us.

“We don’t know why this happened,” says our co-author Roberto Risch, an archaeologist from The Autonomous University of Barcelona. “But it appears as if these people were purposefully choosing these infants for special burials.”

“ANCIENT DNA REVEALS CHILDREN WITH DOWN SYNDROME IN PAST SOCIETIES. WHAT CAN THEIR BURIALS TELL US ABOUT THEIR LIVES?,” THE CONVERSATION, FEBRUARY 20, 2024

Generally, when people are buried inside a home (floor burials), they are thought to be good, not bad, in some way. The sixth such burial was in a church graveyard in Finland, dated to the 17th–18th century.

Why Were the Researchers So Surprised?

The researchers may be startled that the children were treated as members of the community because today considerable effort is made to identify children with Down syndrome prenatally — and most of them are aborted.

But perhaps Wayne Gretzky (in hockey, the legendary Great One) would be less surprised. In 1981, he met and developed a friendship with teenager Joey Moss (1963–2020), who had Down syndrome. In 1984, Gretzky got him a job as a locker room attendant with the Edmonton Oilers. Moss took to league life very well. An ardent fan and great favorite, he was inducted into the Alberta Sports Hall of Fame in 2003. He also received the National Hockey League Alumni Association’s Seventh Man award that year, for those “whose behind-the-scenes efforts make a difference in the lives of others.”

A YouTube commenter writes, “I still tear up when I think of what we lost in Joey. He totally changed the way I deal with handicapped people. Clearly, his name must be in the rafters.”

Gretzky told People Magazine in 2016, “The people of Edmonton have accepted Joey as an everyday person without any sort of handicap and that’s what’s really special about his story.” Meanwhile, Gretzky himself raised money through golf tournaments to build more group homes for people who live with Down syndrome as adults — something that, of course, didn’t happen much in remote antiquity when almost all life expectancies were short.

If we don’t give people like Joey a chance, perhaps we haven’t advanced beyond our ancestors as much as we think, apart from our better living conditions.




Saturday 24 February 2024

The king of titans holds court.

 

Getting fraud down to a science? II

 Data Can Appear in Science Journals — Out of Thin Air


Recently, Retraction Watch, a site that helps keep science honest, noted some statistical peculiarities about a paper published last September in the Journal of Cleaner Production, “Green innovations and patents in OECD countries.” The site was tipped off by a PhD student in economics that “For several countries, observations for some of the variables the study tracked were completely absent.”

But That Wasn’t the Big Surprise

The big surprise was when the student wrote to one of the authors:

In email correspondence seen by Retraction Watch and a follow-up Zoom call, [Almas] Heshmati told the student he had used Excel’s autofill function to mend the data. He had marked anywhere from two to four observations before or after the missing values and dragged the selected cells down or up, depending on the case. The program then filled in the blanks. If the new numbers turned negative, Heshmati replaced them with the last positive value Excel had spit out. “No data? No problem!” …

But it got worse. Heshmati’s data, which the student convinced him to share, showed that in several instances where there were no observations to use for the autofill operation, the professor had taken the values from an adjacent country in the spreadsheet. New Zealand’s data had been copied from the Netherlands, for example, and the United States’ data from the United Kingdom.

“UNDISCLOSED TINKERING IN EXCEL BEHIND ECONOMICS PAPER,” RETRACTION WATCH, FEBRUARY 5, 2024

“It’s Pretty Egregious”

While many researchers decried the results, University of Copenhagen econometrician Søren Johansen said something worth pondering: “The reason it’s cheating isn’t that he’s done it, but that he hasn’t written it down,” adding, “It’s pretty egregious.”

Pomona College business prof Gary Smith weighed in at Retraction Watch, explaining how blanks can come to seem like information in statistical papers.

Imputation (the technique the authors were using), he says, is not always unfair: “If we are measuring the population of an area and are missing data for 2011, it is reasonable to fit a trend line and, unless there has been substantial immigration or emigration, use the predicted value for 2011. Using stock returns for 2010 and 2012 to impute a stock return for 2011 is not reasonable.” In other words, whether imputation is unfair depends on whether anything was likely to have happened in the period for which data is missing that would change the results. 
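To make the distinction concrete, here is a toy sketch with made-up numbers (hypothetical population figures, not data from the paper in question): interpolating a smooth, slow-moving series is at least defensible, provided it is disclosed, whereas doing the same with volatile data, or copying another country’s column, is not:

```python
import pandas as pd

# Hypothetical population figures (millions) with the 2011 value missing.
population = pd.Series({2009: 4.51, 2010: 4.55, 2011: None, 2012: 4.63, 2013: 4.67})

# Trend-style imputation: linear interpolation across the gap.
filled = population.interpolate(method="linear")
print(filled[2011])   # 4.59, plausible for a series that changes slowly and steadily
```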

Another Story

But, he says, the way the authors of the controversial paper were using the technique was another story:

The most extreme cases are where a country has no data for a given variable. The authors’ solution was to copy and paste data for another country. Iceland has no MKTcap data, so all 29 years of data for Japan were pasted into the Iceland cells. Similarly, the ENVpol (environmental policy stringency) data for Greece (with six years imputed) were pasted into Iceland’s cells and the ENVpol data for Netherlands (with 2013-2018 imputed) were pasted into New Zealand’s cells. The WASTE (municipal waste per capita) data for Belgium (with 1991-1994 and 2018 imputed) were pasted into Canada. The United Kingdom’s R&Dpers (R&D personnel) data were pasted into the United States (though the 10.417 entry for the United Kingdom in 1990 was inexplicably changed to 9.900 for the United States).

The copy-and-pasted countries were usually adjacent in the alphabetical list (Belgium and Canada, Greece and Iceland, Netherlands and New Zealand, United Kingdom and United States), but there is no reason an alphabetical sorting gives the most reasonable candidates for copying and pasting. Even more troubling is the pasting of Japan’s MKTcap data into Iceland and the simultaneous pasting of Greece’s ENVpol data into Iceland. Iceland and Japan are not adjacent alphabetically, suggesting this match was chosen to bolster the desired results. 

GARY SMITH, “HOW (NOT) TO DEAL WITH MISSING DATA: AN ECONOMIST’S TAKE ON A CONTROVERSIAL STUDY,” RETRACTION WATCH, FEBRUARY 21, 2024

He concludes, “There is no justification for a paper not stating that some data were imputed and describing how the imputation was done.”

What Counts as Science

Perhaps Elsevier, the journal’s publisher, agrees with his view. Retraction Watch announced that the paper would be retracted:

As we reported earlier this month, Almas Heshmati of Jönköping University mended a dataset full of gaps by liberally applying Excel’s autofill function and copying data between countries – operations other experts described as “horrendous” and “beyond concern.” …

Elsevier, in whose Journal of Cleaner Production the study appeared, moved quickly on the new information. A spokesperson for the publisher told us yesterday: “We have investigated the paper and can confirm that it will be retracted.” 

“EXCLUSIVE: ELSEVIER TO RETRACT PAPER BY ECONOMIST WHO FAILED TO DISCLOSE DATA TINKERING,” RETRACTION WATCH, FEBRUARY 22, 2024

If Elsevier were somehow not to follow through on the retraction, that would certainly say something about what counts as science today.

Note: As noted above, the first author of the paper, Almas Heshmati, was the one originally interviewed by the student. The second author, Mike Tsionas, died recently.

"Settled Science" vs. Actual science.

Stifling Opposition Is the Real “Anti-Science”


The advancement of science is one of mankind’s greatest triumphs. And who could be against it? Deploying the raw power of rational analysis, science exponentially increases our understanding of the natural world and leads to wondrous applications that improve the human condition.

But these days, science has become something of a divisive concept. It’s not that most people reject the scientific method or science’s many achievements. Rather, trust in the scientific enterprise is deflating because some in the scientific establishment co-opt the term “science” as a means of exerting control over policy or furthering favored ideological agendas.

You know the types. They can be seen regularly on cable TV claiming righteously that “the science is settled” about the rightness of their opinions — for example, the medical propriety of “affirming” gender confusion in children with puberty blockers. Then they deploy the pejorative “anti-science” against those who disagree, in order to stifle other perspectives.

The Antithesis of Science

But shutting critics up is the antithesis of science, properly understood. Indeed, stifling opposition is the real “anti-science” because it betrays the fundamental precepts of the scientific method, an approach to learning that requires continual argumentation, (sometimes bitter) disagreements, and the never-ending willingness to challenge accepted orthodoxies. In this sense, “the science” is never “settled” but always open to revised understandings. Otherwise, science mutates into dogma, which suppresses the pursuit of knowledge. Indeed, sometimes that is the point.

Examples of once-unquestioned “truths” overturned by subsequent discoveries are legion. Here’s a recent example. Biologists used to believe that the human appendix was a useless vestigial organ. But because science is dynamic, this once uncontroversial perspective was challenged. And what do you know? “Science” has now discovered at least two valuable purposes for the appendix: it supports the body’s immune system and serves as a “bank” of sorts for storing beneficial gut bacteria.

Now, imagine if the scientists who worked to attain a better understanding of the appendix had been prevented from exploring that subject because the “scientific consensus” had determined previously that the organ had no beneficial purpose. What if the self-appointed guardians of perceived medical wisdom had dissuaded researchers from pursuing their investigations for fear of losing university tenure, being scorned by colleagues, or having research funding blocked? Valuable knowledge would have been lost. New medical approaches for treating an infected appendix would never be developed. The mistaken scientific understanding would have remained, yes, “settled.”

The Costs of “Settled Science”

Alas, these days the science establishment too often engages in just such censorship when controversial scientific issues are involved. We saw that on full display during the COVID-19 pandemic. When three noted epidemiologists questioned the wisdom of societal shutdowns and school closures in the Great Barrington Declaration (GBD), the public health establishment, rather than engaging with its content (as the proper scientific approach would have required), attempted to destroy the messengers. For example, then-National Institutes of Health director Francis Collins disparaged the authors as “fringe,” and Anthony Fauci worked to undermine the GBD in the media. One of the authors, Stanford University professor Dr. Jay Bhattacharya, even found himself scorned by his own academic community for contesting the “settled science.”

Funny that. In the end, the GBD proved to have the better argument, illustrating the terrible harm that can be caused by stifling the scientific method and suppressing dissenting views.

Or consider the hot-button topic of evolution. For decades public spokespersons for the scientific establishment have insisted that the contemporary theory of evolution is unchallengeable. Oxford evolutionary biologist Richard Dawkins even went so far as to claim that “if you meet somebody who claims not to believe in evolution, that person is ignorant, stupid or insane (or wicked, but I’d rather not consider that).” Talk about chilling open scientific inquiry!

Yet, in 2016, a group of leading evolutionary and cell biologists convened a conference at the Royal Society in London. Many scientists who attended openly called for a new theory of evolution because of their increasing doubts about the supposed creative power of Darwin’s mechanism of natural selection. Are all these scientists “ignorant, stupid or insane”? Of course not. They are simply “doing science.”

The same vituperative, anti-science approach to stifling critics was pursued by the scientific establishment during the embryonic stem cell debate between 2001 and 2008. After President George W. Bush funded embryonic stem cell research but placed modest limits on federal funding for the experiments, he and the supporters of his policy were accused of letting religious belief stand in the way of “the gold standard” of regenerative medicine, which, we were assured, would soon allow disabled people to throw away their wheelchairs. Scientific arguments that adult stem cells offered the better hope of developing treatments for a wide array of medical conditions were similarly attacked.

The Proof Is in the Pudding

More than twenty years later, what do we see? Embryonic stem cell research was mostly hype. There is not one FDA-approved treatment using embryonic stem cells. Meanwhile, adult stem cells are used to treat a wide array of pathologies. In other words, despite all the name-calling and screeching about interference with the scientific consensus, the heterodox theorists were right.

That isn’t always true, of course. Established views frequently prove correct when challenged. But that isn’t the point. What matters is that for science to be “science,” perceived truths — no matter how seemingly settled — must always be subject to rethinking. The defense of generally accepted views should be based on evidence, not personal denigration of the challengers.

Alas, they never learn. Whether the scientific issue involves climate change, the safety of vaccines, how best to care for children with gender dysphoria, or the alleged scientific support for Darwinian evolution, the scientific establishment continues to brand those who contest its opinions “anti-science” (as a column in Scientific American put it recently) for rejecting “mainstream scientific views.”

That’s Baloney

Stifling the messy and contentious process required for scientific knowledge to advance undermines science itself. Yes, that means charlatans and frauds may at times beguile the ignorant. But just as the most efficacious answer to bad speech is good speech, the way to overcome bad science is for good science to demonstrate its veracity. Attempts to short-circuit that contentious process betray the very purposes science is supposed to serve.