
Friday, 1 March 2024

There is information and then there is Information?

 Shannon and Kolmogorov Information


The first edition of my book The Design Inference as well as its sequel, No Free Lunch, set the stage for defining a precise information-theoretic measure of specified complexity — which is the subject of this series. There was, however, still more work to be done to clarify the concept. In both these books, specified complexity was treated as a combination of improbability or complexity on the one hand and specification on the other. 

As presented back then, it was an oil-and-vinegar combination, with complexity and specification treated as two different types of things exhibiting no clear commonality. Neither book therefore formulated specified complexity as a unified information measure. Still, the key ideas for such a measure were in those earlier books. Here, I review those key information-theoretic ideas. In the next section, I’ll join them into a unified whole.

Let’s Start with Complexity

As noted earlier, there’s a deep connection between probability and complexity. This connection is made clear in Shannon’s theory of information. In this theory, probabilities are converted to bits. To see how this works, consider tossing a coin 100 times, which yields an event of probability 1 in 2^100 (the caret symbol here denotes exponentiation). But that number also corresponds to 100 bits of information since it takes 100 bits to characterize any sequence of 100 coin tosses (think of 1 standing for heads and 0 for tails). 

In general, any probability p corresponds to –log(p) bits of information, where the logarithm here and elsewhere in this article is to the base 2 (as needed to convert probabilities to bits). Think of a logarithm as an exponent: it’s the exponent to which you need to raise the base (here always 2) in order to get the number to which the logarithmic function is applied. Thus, for instance, a probability of p = 1/10 corresponds to an information measure of –log(1/10) ≈ 3.322 bits (or equivalently, 2^(–3.322) ≈ 1/10). Such fractional bits allow for a precise correspondence between probability and information measures.
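
As a quick numerical check of this correspondence, here is a minimal Python sketch (my own illustration, not part of the original article) converting probabilities to bits and back:

import math

def shannon_bits(p: float) -> float:
    """Information in bits corresponding to an event of probability p."""
    return -math.log2(p)

print(shannon_bits(0.5 ** 100))    # 100 fair coin tosses -> 100.0 bits
print(shannon_bits(1 / 10))        # ~3.322 bits
print(2 ** -shannon_bits(1 / 10))  # converting back: ~0.1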

The complexity in specified complexity is therefore Shannon information. Claude Shannon (1916–2001, pictured above) introduced this idea of information in the 1940s to understand signal transmissions (mainly of bits, but also for other character sequences) across communication channels. The longer the sequence of bits transmitted, the greater the information and therefore its complexity. 

Because of noise along any communication channel, the greater the complexity of a signal, the greater the chance of its distortion and thus the greater the need for suitable coding and error correction in transmitting the signal. So the complexity of the bit string being transmitted became an important idea within Shannon’s theory. 

Shannon’s information measure is readily extended to any event E with a probability P(E). We then define the Shannon information of E as –log(P(E)) = I(E). Note that the minus sign is there to ensure that as the probability of E goes down, the information associated with E goes up. This is as it should be. Information is invariably associated with the narrowing of possibilities. The more those possibilities are narrowed, the more the probabilities associated with those possibilities decrease, but correspondingly the more the information associated with those narrowing possibilities increases. 

For instance, consider a sequence of ten tosses of a fair coin and consider two events, E and F. Let E denote the event where the first five of these ten tosses all land heads but where we don’t know the remaining tosses. Let F denote the event where all ten tosses land heads. Clearly, F narrows down the range of possibilities for these ten tosses more than E does. Because E is only based on the first five tosses, its probability is P(E) = 2^(–5) = 1/(2^5) = 1/32. On the other hand, because F is based on all ten tosses, its probability is P(F) = 2^(–10) = 1/(2^10) = 1/1,024. In this case, the Shannon information associated with E and F is respectively I(E) = 5 bits and I(F) = 10 bits. 

We Also Need Kolmogorov Complexity

Shannon information, however, is not enough to understand specified complexity. For that, we also need Kolmogorov information, or what is also called Kolmogorov complexity. Andrei Kolmogorov (1903–1987) was the greatest probabilist of the 20th century. In the 1960s he tried to make sense of what it means for a sequence of numbers to be random. To keep things simple, and without loss of generality, we’ll focus on sequences of bits (since any numbers or characters can be represented by combinations of bits). Note that we made the same simplifying assumption for Shannon information.

The problem Kolmogorov faced was that any sequence of bits treated as the result of tossing a fair coin was equally probable. For instance, any sequence of 100 coin tosses would have probability 1/(2^100), or 100 bits of Shannon information. And yet there seemed to Kolmogorov a vast difference between the following two sequences of 100 coin tosses (letting 0 denote tails and 1 denote heads):

0000000000000000000000000
0000000000000000000000000
0000000000000000000000000
0000000000000000000000000

and

1001101111101100100010011
0001010001010010101110001
0101100000101011000100110
1100110100011000000110001

The first just repeats the same coin toss 100 times. It appears anything but random. The second, on the other hand, exhibits no salient pattern and so appears random (I got it just now from an online random bit generator). But what do we mean by random here? Is it that the one sequence is the sort we should expect to see from coin tossing but the other isn’t? But in that case, probabilities tell us nothing about how to distinguish the two sequences because they both have the same small probability of occurring. 

Ideas in the Air

Kolmogorov’s brilliant stroke was to understand the randomness of these sequences not probabilistically but computationally. Interestingly, the ideas animating Kolmogorov were in the air at that time in the mid-1960s. Thus, both Ray Solomonoff and Gregory Chaitin (then only a teenager) also came up with the same idea. Perhaps unfairly, Kolmogorov gets the lion’s share of the credit for characterizing randomness computationally. Most information-theory books (see, for instance, Cover and Thomas’s Elements of Information Theory), in discussing this approach to randomness, will therefore focus on Kolmogorov and put it under what is called Algorithmic Information Theory (AIT). 

Briefly, Kolmogorov’s approach to randomness is to say that a sequence of bits is random to the degree that it has no short computer program that generates it. Thus, with the first sequence above, it is non-random since it has a very short program that generates it, such as a program that simply says “repeat ‘0’ 100 times.” On the other hand, there is no short program (so far as we can tell) that generates the second sequence. 
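
In code, the contrast between the two sequences can be suggested like this (a minimal Python sketch of Kolmogorov’s intuition, my own illustration rather than anything from the original article):

# The first sequence has a tiny "program": the description is far shorter
# than the 100 characters it generates.
repetitive = "0" * 100
assert len(repetitive) == 100

# For the second sequence, the shortest recipe we know of is essentially
# "print the sequence itself," which is no shorter than the data it produces.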

It is a combinatorial fact (i.e., a fact about the mathematics of counting or enumerating possibilities) that the vast majority of bit sequences cannot be characterized by any program shorter than the sequence itself. Obviously, any sequence can be characterized by a program that simply incorporates the entire sequence and then simply regurgitates it. But such a program fails to compress the sequence. The non-random sequences, by having programs shorter than the sequences themselves, are thus those that are compressible. The first of the sequences above is compressible. The second, for all we know, isn’t.

Kolmogorov’s information (also known as Kolmogorov complexity) is a computational theory because it focuses on identifying the shortest program that generates a given bit string. Yet there is an irony here: it is rarely possible to say with certainty that a given bit string is truly random in the sense of having no generating program shorter than itself. From combinatorics, with its mathematical counting principles, we know that the vast majority of bit sequences must be random in Kolmogorov’s sense. That’s because the number of short programs is very limited and can only generate very few longer sequences. Most longer sequences will require longer programs. 
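
To make that counting fact concrete, here is a small back-of-the-envelope calculation (my own sketch, not from the original article):

# There are 2^n bit strings of length n, but only 2^0 + 2^1 + ... + 2^(n-1)
# = 2^n - 1 binary descriptions shorter than n bits, so at least one string
# of length n has no shorter description. More strongly, descriptions shorter
# than n - k bits number at most 2^(n-k) - 1, so fewer than 1 in 2^k strings
# of length n can be compressed by more than k bits.
n, k = 100, 10
strings_of_length_n = 2 ** n
descriptions_below_n_minus_k = 2 ** (n - k) - 1
print(descriptions_below_n_minus_k / strings_of_length_n)  # < 2^-10, under 0.1%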

Our Common Experience

But if for an arbitrary bit sequence D we define K(D) as the length of the shortest program that generates D, it turns out that there is no computer program that calculates K(D). Simply put, the function K is non-computable. This fact from theoretical computer science matches up with our common experience that something may seem random for a time, and yet we can never be sure that it is random because we might discover a pattern clearly showing that the thing in fact isn’t random (think of an illusion that looks like a “random” inkblot only to reveal a human face on closer inspection). 

Yet even though K is non-computable, in practice it is a useful measure, especially for understanding non-randomness. Because of its non-computability, K doesn’t help us to identify particular non-compressible sequences, these being the random sequences. Even with K as a well-defined mathematical function, we can’t in most cases determine precise values for it. Nevertheless, K does help us with the compressible sequences, in which case we may be able to estimate it even if we can’t exactly calculate it. 

What typically happens in such cases is that we find a salient pattern in a sequence, which then enables us to show that it is compressible. To that end, we need a measure of the length of bit sequences as such. Thus, for any bit sequence D, we define |D| as its length (total number of bits). Because any sequence can be defined in terms of itself, |D| forms an upper bound on Kolmogorov complexity. Suppose now that through insight or ingenuity, we find a program that substantially compresses D. The length of that program, call it n, will then be considerably less than |D| — in other words, n < |D|. 

Although this program of length n will be much shorter than |D|, it’s typically not possible to show that it is the very shortest program that generates D. But that’s okay. Given such a program of length n, we know that K(D) cannot be greater than n because K(D) measures the very shortest such program. Thus, by finding some short program of length n, we’ll know that K(D) ≤ n < |D|. In practice, it’s enough to come up with a short program of length n that’s substantially less than |D|. The number n will then form an upper bound for K(D). In practice, we use n as an estimate for K(D). Such an estimate, as we’ll see, ends up in applications being a conservative estimate of Kolmogorov complexity. 
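
To see how such a conservative upper bound might be estimated in practice, here is a rough Python sketch using a general-purpose compressor (zlib) as a stand-in for “the shortest program.” This is only my illustration: zlib is not Kolmogorov complexity, and its fixed overhead makes the bound loose for short strings, but any lossless compression of D does bound the length of some description that regenerates D.

import zlib

def description_bits_upper_bound(s: str) -> int:
    """Bits needed to store a zlib-compressed copy of s: a loose, conservative
    upper bound on the description length of the ASCII-encoded sequence."""
    return 8 * len(zlib.compress(s.encode("ascii"), 9))

repetitive = "0" * 100
# The second (random-looking) sequence from the article, joined into one line:
random_looking = ("1001101111101100100010011" "0001010001010010101110001"
                  "0101100000101011000100110" "1100110100011000000110001")

for name, seq in [("repetitive", repetitive), ("random-looking", random_looking)]:
    stored = 8 * len(seq)  # bits needed to store the raw ASCII characters
    print(f"{name}: compressed to {description_bits_upper_bound(seq)} bits "
          f"(raw ASCII: {stored} bits)")
# The repetitive sequence compresses far more than the random-looking one,
# mirroring the difference in their estimated Kolmogorov complexity.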

1 Corinthians 4:7: The Watchtower Society's condensed commentary.

 



Wol.JW.org


Friday, March 1

Why do you boast?​—1 Cor. 4:7.


The apostle Peter urged his brothers to use whatever gifts and talents they had to build up their fellow believers. Peter wrote: “To the extent that each one has received a gift, use it in ministering to one another as fine stewards of God’s undeserved kindness.” (1 Pet. 4:10) We should not hold back from using our gifts to the fullest for fear that others may become jealous or get discouraged. But we must be careful that we do not boast about them. (1 Cor. 4:6) Let us remember that any natural abilities we may have are gifts from God. We should use those gifts to build up the congregation, not to promote ourselves. (Phil. 2:3) When we use our energy and abilities to do God’s will, we will have cause for rejoicing​—not because we are outdoing others or proving ourselves superior to them, but because we are using our gifts to bring praise to JEHOVAH. 

The origin of life remains Darwinism's Achilles heel?

 On the Origin of Life, a Measure of Intelligent Design’s Impact on Mainstream Science


Don’t let anyone tell you that intelligent design isn’t having an impact on the way mainstream scientists are thinking about problems like the origin of life (OOL). David Coppedge points out the “devastating assessment” of OOL that was just published in Nature, the world’s most prestigious science journal. The authors are Nick Lane and Joana Xavier. The latter is a chemist at Imperial College London. As Coppedge notes, she’s been frank in comments about intelligent design and specifically Stephen Meyer’s Signature in the Cell.

“One of the Best Books I’ve Read”

From a 2022 conversation with Perry Marshall:

But about intelligent design, let me tell you, Perry, I read Signature in the Cell by Stephen Meyer…And I must tell you, I found it one of the best books I’ve read, in terms of really putting the finger on the questions. What I didn’t like was the final answer, of course. But I actually tell everyone I can, “Listen, read that book. Let’s not put intelligent design on a spike and burn it. Let’s understand what they’re saying and engage.” It’s a really good book that really exposes a lot of the questions that people try to sweep under the carpet….I think we must have a more naturalistic answer to these processes. There must be. Otherwise, I’ll be out of a job.

That is a remarkable statement. Paul Nelson first noted it at Evolution News. 

Under the Carpet

Dr. Xavier rejects ID, which is fair enough, but recommends an ID book by Dr. Meyer to “everyone I can” because “it really exposes a lot of the questions that people try to sweep under the carpet.” In the book, Meyer finds that, in addressing the origin-of-life puzzle, all current materialist solutions fail. He has a politer way of saying what chemist James Tour does on the same subject.

So that’s September 2022. Now a year and a half later, Xavier is back in the pages of Nature exposing weaknesses in the OOL field as currently constituted. She still holds out for a “more naturalistic answer.” But do you think, in writing about those “questions that people try to sweep under the carpet,” she didn’t have Meyer’s book in the back of her mind? I’m no mind reader, but to me, the question seems self-answering.


Getting fraud down to a science? IV

 

Tuesday, 27 February 2024

The odd couple? II

 Can Evolution and Intelligent Design Work Together in Harmony?


Or is that wishful thinking? On a new episode of ID the Future, host Casey Luskin concludes his conversation with philosopher Stephen Dilley about theologian Rope Kojonen’s recent proposal to marry mainstream evolutionary theory with a case for intelligent design. Dr. Dilley is lead author of a comprehensive critique of Kojonen’s model, co-authored with Luskin, Brian Miller, and Emily Reeves and published in the journal Religions.

In the second half of their discussion, Luskin and Dilley explain key scientific problems with Kojonen’s theistic evolutionary model. First up is Kojonen’s acceptance of both convergent evolution and common ancestry, two explanations used by evolutionary biologists to account for common design features among different organisms. But if the design can be explained through natural processes, there is little need to invoke intelligent design. After all, the whole point of mainstream evolutionary theory is to render any need for design superfluous.

Dr. Dilley also explains why Kojonen’s model contradicts our natural intuition to detect design. If we look at a hummingbird under Kojonen’s proposal, we are still required to see unguided natural processes at work, the appearance of design without actual intelligent design. Yet we are also supposed to acknowledge that an intelligent designer front-loaded the evolutionary process with the creative power it needs to produce the hummingbird. So is it intelligently designed or isn’t it? The theist on the street is left scratching his or her head.

Download the podcast or listen to it here.

Monday, 26 February 2024

On the Syriac Peshitta.

 The Syriac Peshitta—A Window on the World of Early Bible Translations


For nine days in 1892, the twin sisters Agnes Smith Lewis and Margaret Dunlop Gibson journeyed by camel through the desert to St. Catherine’s Monastery at the foot of Mount Sinai. Why would these two women in their late 40s undertake such a journey at a time when travel in what was called the Orient was so dangerous? The answer may help strengthen your belief in the accuracy of the Bible.

JUST before returning to heaven, Jesus commissioned his disciples to bear witness about him “in Jerusalem, in all Judea and Samaria, and to the most distant part of the earth.” (Acts 1:8) This the disciples did with zeal and courage. Their ministry in Jerusalem, however, soon stirred up strong opposition, resulting in the martyrdom of Stephen. Many of Jesus’ disciples found refuge in Antioch, Syria, one of the largest cities in the Roman Empire, some 350 miles (550 km) north of Jerusalem.—Acts 11:19.

In Antioch, the disciples continued to preach “the good news” about Jesus, and many non-Jews became believers. (Acts 11:20, 21) Though Greek was the common language within the walls of Antioch, outside its gates and in the province, the language of the people was Syriac.

THE GOOD NEWS TRANSLATED INTO SYRIAC

As the number of Syriac-speaking Christians increased in the second century, there arose a need for the good news to be translated into their tongue. Thus, it appears that Syriac, not Latin, was the first vernacular into which parts of the Christian Greek Scriptures were translated.

 By about 170 C.E., the Syrian writer Tatian (c. 120-173 C.E.) combined the four canonical Gospels and produced, in Greek or Syriac, the work commonly called the Diatessaron, a Greek word meaning “through [the] four [Gospels].” Later, Ephraem the Syrian (c. 310-373 C.E.) produced a commentary on the Diatessaron, thus confirming that it was in general use among Syrian Christians.

The Diatessaron is of great interest to us today. Why? In the 19th century, some scholars argued that the Gospels were written as late as the second century, between 130 C.E. and 170 C.E., and thus could not be authentic accounts of Jesus’ life. However, ancient manuscripts of the Diatessaron that have come to light since then have proved that the Gospels of Matthew, Mark, Luke, and John were already in wide circulation by the middle of the second century. They must therefore have been written earlier. In addition, since Tatian, when compiling the Diatessaron, did not make use of any of the so-called apocryphal gospels in the way he did the four accepted Gospels, it is evident that the apocryphal gospels were not viewed as reliable or canonical.

By the start of the fifth century, a translation of the Bible into Syriac came into general use in northern Mesopotamia. Likely made during the second or third century C.E., this translation included all the books of the Bible except 2 Peter, 2 and 3 John, Jude, and Revelation. It is known as the Peshitta, meaning “Simple” or “Clear.” The Peshitta is one of the oldest and most important witnesses to the early transmission of the Bible text.

Interestingly, one manuscript of the Peshitta has a written date corresponding to 459/460 C.E., making it the oldest Bible manuscript with a definite date. In about 508 C.E., a revision of the Peshitta was made that included the five missing books. It came to be known as the Philoxenian Version.


Syriac Peshitta of the Pentateuch, 464 C.E., the second-oldest dated manuscript of Bible text

Until the 19th century, almost all the known Greek copies of the Christian Greek Scriptures were from the fifth century or much later. For this reason, Bible scholars were especially interested in such early versions as the Latin Vulgate and the Syriac Peshitta. At the time, some believed that the Peshitta was the result of a revision of an older Syriac version. But no such text was known. Since the roots of the Syriac Bible go back to the second century, such a version would provide a window on the Bible text at an early stage, and it would surely be invaluable to Bible scholars! Was there really an old Syriac version? Would it be found?


The palimpsest called the Sinaitic Syriac. Visible in the margin is the underwriting of the Gospels

Yes, indeed! In fact, two such precious Syriac manuscripts were found. The first is a manuscript dating from the fifth century. It was among a large number of Syriac manuscripts acquired by the British Museum in 1842 from a monastery in the Nitrian Desert in Egypt. It was called the Curetonian Syriac because it was discovered and published by William Cureton, the museum’s assistant keeper of manuscripts. This precious document contains the four Gospels in the order of Matthew, Mark, John, and Luke.

The second manuscript that has survived to our day is the Sinaitic Syriac. Its discovery is linked with the adventurous twin sisters mentioned at the start of this article. Although Agnes did not have a university degree, she learned eight foreign languages, one of them Syriac. In 1892, Agnes made a remarkable discovery in the monastery of St. Catherine in Egypt.

There, in a dark closet, she found a Syriac manuscript. According to her own account, “it had a forbidding look, for it was very dirty, and its leaves were nearly all stuck together through their having remained unturned” for centuries. It was a palimpsest, a manuscript of which the original text had been erased and the pages rewritten with a Syriac text about female saints. However, Agnes spotted some of the writing underneath and the words “of Matthew,” “of Mark,” or “of Luke” at the top. What she had in her hands was an almost complete Syriac codex of the four Gospels! Scholars now believe that this codex was written in the late fourth century.

The Sinaitic Syriac is considered one of the most important Biblical manuscripts discovered, right along with such Greek manuscripts as the Codex Sinaiticus and the Codex Vaticanus. It is now generally believed that both the Curetonian and Sinaitic manuscripts are extant copies of the old Syriac Gospels dating from the late second or early third century.

“THE WORD OF OUR GOD ENDURES FOREVER”

Can these manuscripts be useful to Bible students today? Undoubtedly! Take as an example the so-called long conclusion of the Gospel of Mark, which in some Bibles follows Mark 16:8. It appears in the Greek Codex Alexandrinus of the fifth century, the Latin Vulgate, and elsewhere. However, the two authoritative fourth-century Greek manuscripts—Codex Sinaiticus and Codex Vaticanus—both end with Mark 16:8. The Sinaitic Syriac does not have this long conclusion either, adding further evidence that the long conclusion is a later addition and was not originally part of Mark’s Gospel.

Consider another example. In the 19th century, almost all Bible translations had a spurious Trinitarian addition at 1 John 5:7. However, this addition does not appear in the oldest Greek manuscripts. Neither does it appear in the Peshitta, thus proving that the addition at 1 John 5:7 is indeed a corruption of the Bible text.

Clearly, as promised, Jehovah God has preserved his Holy Word. In it we are given this assurance: “The green grass dries up, the blossom withers, but the word of our God endures forever.” (Isaiah 40:8; 1 Peter 1:25) The version known as the Peshitta plays a humble but important role in the accurate transmission of the Bible’s message to all of humanity.

The big questions remain as big as ever?

 A New Look at Three Deep Questions


Ron Coody’s new book, Almost? Persuaded! Why Three Great Questions Resist Certainty, delivers a wide-ranging discussion and analysis of questions, answers, and arguments keenly relevant to the intelligent design community. His background is far from one-dimensional and he has long been engaging people over issues of worldview, evidence, and belief.

With a bachelor’s degree in microbiology and a Master of Divinity followed by a PhD in missiology, Coody is well qualified to address the cutting edges of science, philosophy, and theology. Enhancing his perception of diverse ways of thinking about these questions is his decades-long experience of living and working cross-culturally.

Questions of Consequence

The primary questions addressed here are obviously of deep consequence: Does God exist? Where did life come from? and Is free will real? A refreshing aspect of Almost? Persuaded! is its objective coverage of the broad range of arguments surrounding these questions. 

As I read Almost? Persuaded!, although I have been studying these questions for many years, I found that Coody’s presentation easily held my attention. Moreover, the breadth of his analysis provided new insights and expanded my understanding of developments in history and philosophy.

A Helpful Compendium

On the first question, “Does God Exist?”, Coody’s analytical summary of key philosophers and intellectuals, from Plato to Aquinas to Dawkins, caught my attention. His highlighting of key ideas from over twenty influential thinkers makes for a helpful compendium.

A familiar-sounding argument for design is Coody’s summary of the fifth of Thomas Aquinas’s Five Ways, from the 13th century:

Working backwards from human experience of designing and building, Aquinas reasoned that the ordered universe and the creatures inhabiting it exhibit properties of design. Design requires a designer….Aquinas thought that the universe needed an intelligent mind to bring it into order. He believed that physical laws lacked the power to organize complex, functioning systems. 

P. 34

Another unique and somewhat amusing contribution is the author’s contrasting of Richard Dawkins with the Apostle Paul on the evidential weight of nature.

As Coody reviews the standard evidence for the fine-tuning of the physical parameters of the universe to allow life to exist, his presentation is accurate and compelling. The Big Bang, Lawrence Krauss’s attempts to redefine the “nothingness” out of which the universe arose, Stephen Hawking’s blithe dismissal of the significance of the beginning with an invocation of gravity, and the counterpoint from Borde, Guth, and Vilenkin’s singularity theorem, are knit together in readable prose.

Encouragement for Curiosity

When it comes to the possibility of life forming itself naturally, again Coody gives an informative and insightful overview. Although, like the rest of us, he has his own convictions, he is willing to acknowledge the tension surrounding differing conclusions among those seeking to evaluate the evidence. He encourages the reader to persist in seeking answers: “Honest people of any faith or no faith should be interested in the truth.” (p. 164)

The final section provides an enlightening discussion of free will. Coody captures the major issues: “Is free will an illusion created by the brain? In reality do we have any more free will than our computer?….Is the mind the same as the brain or is the mind something spiritual?” (p. 180)

Delving into the implications of materialistic determinism, and even quantum uncertainty, Coody provides a fresh look at the subject. In an illustration that is beguilingly simple, he borrows from the classic fairy tale of Pinocchio. His summary cuts deeply into one of the major shortcomings of materialist thinking: “On their view of the world, there was never any difference between the wooden Pinocchio and the human Pinocchio. Both were simply animated, soulless, material objects.” (p. 191)

Readers of almost any background will find much here that informs, provokes deeper reflection, and provides refreshing and novel illustrations relevant to the discussion of some of life’s most enduring questions.

There is nothing simple about this beginning?

 Getting It Together: Tethers, Handshakes, and Multitaskers in the Cell


Running a cell requires coordination. How do molecules moving in the dark interior of a cell know how and when to connect? Protein tethers offer new clues, according to research at Philipps University in Marburg, Germany.

The ways that organelles and proteins connect at the right place and time are coming to light. One method is to encapsulate interacting molecules within compartments called condensates, droplets, and speckles. Like offices or cubicles where employees can talk without excess noise, these temporary spaces allow molecules to interact in peace (see “Caltech Finds Amazing Role for Noncoding DNA”). 

Another method for coordination of moving parts involves tethers. Certain molecular machines use “two hands” to bring other molecules or organelles together. Visualize a person taking a stranger’s hand and using her other hand to grasp a doorknob, leading the stranger to the place he needs to be. Many protein machines have a critical binding site for their targets, but these “dual affinity” tethering machines contain two different recognition sites on different domains that recognize separate targets needing to come together. Such multitasking machines are marvelously designed to promote fellowship for effective interactions in the cellular city.

A similar phenomenon has long been known in the translation of genetic information. A set of molecules called aminoacyl-tRNA synthetases brings dissimilar molecules together. One synthetase feels the anticodon on its matching transfer RNA (tRNA) and then puts the corresponding amino acid on the opposite end. Like a language translator, each synthetase needs to know two languages — the DNA code and the protein code — to equip the tRNA with the correct amino acid. As the activated tRNA enters the ribosome, its anticodon base pairs with the complementary codon on the messenger RNA at one end, and its amino acid fits onto the growing polypeptide chain on the other end. This is a spectacular example of double duty, multitasking know-how. But is it the only one?

Another Example of Double Duty

A team of 15 researchers publishing in PLOS Biology under lead author Elena Bittner, also from Philipps University, and colleagues at Berkeley and Howard Hughes, has just reported a case of a multitasking machine that bridges dissimilar targets — in this case, peroxisomes with mitochondria or the endoplasmic reticulum (ER). It may not be the only case of “Proteins that carry dual targeting signals [that] can act as tethers between” organelles, they say:

Peroxisomes are organelles with crucial functions in oxidative metabolism. To correctly target to peroxisomes, proteins require specialized targeting signals. A mystery in the field is the sorting of proteins that carry a targeting signal for peroxisomes and as well as for other organelles, such as mitochondria or the endoplasmic reticulum (ER). Exploring several of these proteins in fungal model systems, we observed that they can act as tethers bridging organelles together to create contact sites. 

Take note that they found this in yeast, the simplest of eukaryotes.

We show that in Saccharomyces cerevisiae this mode of tethering involves the peroxisome import machinery, the ER–mitochondria encounter structure (ERMES) at mitochondria and the guided entry of tail-anchored proteins (GET) pathway at the ER. 

Why is this significant? 

Our findings introduce a previously unexplored concept of how dual affinity proteins can regulate organelle attachment and communication.

Previously unexplored: this sounds like a game changer. How does this “tethering” system work? After presenting the team’s biochemical work demonstrating the dual-targeting capability, they illustrate it with a simplified diagram (Figure 10) in their open-access paper. As usual, even in simplified form, the system involves numerous other factors. The upshot is described as follows:

We have found that distinct proteins with targeting signals for 2 organelles can affect proximity of these organelles. This conclusion is supported by the notion that different types of dual affinity proteins can act as contact-inducing proteins (Fig 10) … Although dual affinity proteins are a challenge for maintaining organelle identity, they are ideally suited to support organelle interactions by binding to targeting factors and membrane-bound translocation machinery of different organelles. Dually targeted proteins appear to concentrate in regions of organelle contact, which may coincide with regions of reduced identity.

Within the mitochondria, we already met TIM and TOM, the channel guards who check the credentials of proteins entering the organelle’s outer and inner membranes. (The authors note that these translocase proteins are “evolutionarily conserved.”) But outside the mitochondrion, proteins needing to enter or exit have to find their way to the guards. That’s where the “dual affinity proteins” operate. 

What Do the Tethers Look Like?

Ptc5 is one of these tethering proteins, one of many that “contain targeting signals for mitochondria and peroxisomes at opposite termini.” Its Peroxisome Targeting Signal (PTS) recognizes the peroxisome at one end, and its Mitochondrial Targeting Signal (MTS) recognizes TOM at the mitochondrial channel. Experimenting with mutant strains of this and associated proteins and chaperones, the researchers confirmed that Ptc5 does tether peroxisomes to mitochondria. Moreover, its activity is dependent on need. “In aggregate,” they write, “these data show that tethering via dual affinity proteins is a regulated process and depends on the metabolic state of the cell.” This implies the additional capability of sensing the fluctuating metabolic need.

The authors didn’t have much to say about evolution. What little they did say involved, as usual, copious amounts of speculation.

While many peroxisomal membrane proteins can target peroxisomes without transitioning through the ER, several peroxisomal membrane proteins have evolved to be synthesized in vicinity to the ER and may translocate from it.

Other than TOM and TIM being “evolutionarily conserved,” that was all they had to offer Darwin.

A New Class of Activity Coordinators

What Bittner et al. have identified is probably the trigger for a paradigm shift concerning methods that cells use to get components together.

We conclude that dually targeted cargo includes a diverse and unexpected group of tethers, which are likely to maintain contact as long as they remain accessible for targeting factors at partner organelles. Coupling of protein and membrane trafficking is a common principle in the secretory pathway and it might also occur for peroxisomes at different contact sites.

And so, what lies ahead? Design proponents in biochemistry and molecular biology, play tetherball! Here is a potentially fruitful area for new discoveries.

How dually targeted proteins and their rerouting affect the flux of molecules other than proteins, e.g., membrane lipids remains a topic for future research. 




Sunday, 25 February 2024

Getting fraud down to a science? III

 

Mind is absolutely over matter?

 

The odd couple?

 Can Evolution and Intelligent Design Be Happily Wedded?


On a new episode of ID the Future, host Casey Luskin kicks off a series of interviews responding to theologian Dr. Rope Kojonen’s proposal that front-loaded intelligent design and a full-blooded evolutionary process worked together in harmony to produce the diversity of life we find on Earth. Here, Dr. Luskin interviews Dr. Stephen Dilley, lead author of a comprehensive critique of Kojonen’s model, co-authored with Luskin, Brian Miller, and Emily Reeves and published in the journal Religions.

In the first half of their conversation, Luskin and Dilley describe Dr. Kojonen’s proposal in a nutshell, providing the philosophical framing needed to grasp Kojonen’s elegant but flawed argument. Kojonen’s idea is the ultimate front-loaded design model, allowing for evolutionary mechanisms to work themselves out, but within a careful and purposeful arrangement of finely tuned preconditions and laws of form. Seemingly, it’s the best of both worlds: empirically detectable design within a fully natural evolutionary process. 

But there’s a problem. The fine-tuning Kojonen claims is baked into evolutionary processes is actually not there. Functional protein sequences, formed when amino acids come together in the right order, have been found to be exceedingly rare and isolated within sequence space. We don’t find evidence of fine-tuning within the mutation/selection mechanism. Instead, we find a process limited in its creative power that cannot have produced the complexity and information-rich innovation necessary to bring about life’s biological diversity. As Luskin puts it, “He [Kojonen] is arguing that God had to stack the deck in favor of evolution in order to get it to work.” It’s an interesting thesis, and Kojonen is serious and scholarly in his approach to the problem. But in the end, it fails on scientific grounds.

Download the podcast or listen to it here.

Yet more confirming of the humanity of ancient humans.

 Burials Reveal Prehistoric Cultures Valued Children with Down Syndrome


We’ve all probably heard from one pundit or another that prehistoric humans discarded children with disabilities, just as animals might. Well, recently, researchers screened the DNA of 10,000 ancient humans (historic and prehistoric) for evidence of genetically detectable syndromes like Down syndrome. According to their report in Nature, “We find clear genetic evidence for six cases of trisomy 21 (Down syndrome) and one case of trisomy 18 (Edwards syndrome), and all cases are present in infant or perinatal burials.”

Clearly, people with significant genetic disorders could not expect a long life back then. But the researchers were surprised by the respect shown to the deceased children: “Notably, the care with which the burials were conducted, and the items found with these individuals indicate that ancient societies likely acknowledged these individuals with trisomy 18 and 21 as members of their communities, from the perspective of burial practice.”

The five prehistoric burials were all located within settlements and in some cases accompanied by special items such as colored bead necklaces, bronze rings or sea-shells. “These burials seem to show us that these individuals were cared for and appreciated as part of their ancient societies,” says [Adam] Rohrlach, the lead author of the study.

MAX PLANCK SOCIETY, “ANCIENT GENOMES REVEAL DOWN SYNDROME IN PAST SOCIETIES,” PHYS.ORG, FEBRUARY 20, 2024. THE PAPER IS OPEN ACCESS VIA A SHAREDIT TOKEN.

Down syndrome (an extra whole or partial copy of the 21st chromosome, hence trisomy 21) is comparatively common (1/1,000 births). Edwards syndrome — three copies of chromosome 18 — occurs in 1/3,000 births.

Five of these burials of children with Down syndrome date to between 5,000 and 2,500 years before the present, in settled communities. An interesting feature is that the infants were buried inside houses:

“At the moment, we cannot say why we find so many cases at these sites,” says Roberto Risch, an archaeologist of the Universitat Autònoma de Barcelona working on intramural funerary rites, “but we know that they belonged to the few children who received the privilege to be buried inside the houses after death. This already is a hint that they were perceived as special babies.”

MAX PLANCK SOCIETY, “IN PAST SOCIETIES“

“A Surprise to Us”

In an article at The Conversation, researchers Adam “Ben” Rohrlach and Kay Prüfer comment,

The fact that three cases of Down syndrome and the one case of Edwards syndrome were found in just two contemporaneous and nearby settlements was a surprise to us.

“We don’t know why this happened,” says our co-author Roberto Risch, an archaeologist from The Autonomous University of Barcelona. “But it appears as if these people were purposefully choosing these infants for special burials.”

“ANCIENT DNA REVEALS CHILDREN WITH DOWN SYNDROME IN PAST SOCIETIES. WHAT CAN THEIR BURIALS TELL US ABOUT THEIR LIVES?,” THE CONVERSATION, FEBRUARY 20, 2024

Generally, when people are buried inside a home (floor burials), they are thought to have been regarded favorably, not unfavorably, in some way. The sixth such burial was in a church graveyard in Finland, dated to the 17th–18th century.

Why Were the Researchers So Surprised?

The researchers may be startled that the children were treated as members of the community because today considerable effort is made to identify children with Down syndrome prenatally — and most of them are aborted.

But perhaps Wayne Gretzky (in hockey, the legendary Great One) would be less surprised. In 1981, he met and developed a friendship with teenage Joey Moss (1963–2020), who had Down syndrome. In 1984, he got him a job as a locker room attendant with the Edmonton Oilers. Moss took to league life very well. An ardent fan and great favorite, he was inducted into the Alberta Sports Hall of Fame in 2003. He also received the National Hockey League Alumni Association’s Seventh Man award that year, for those “whose behind-the-scenes efforts make a difference in the lives of others.”

A YouTube commenter writes, “I still tear up when I think of what we lost in Joey. He totally changed the way I deal with handicapped people. Clearly, his name must be in the rafters.”

Gretzky told People Magazine in 2016, “The people of Edmonton have accepted Joey as an everyday person without any sort of handicap and that’s what’s really special about his story.” Meanwhile, Gretzky himself raised money through golf tournaments to build more group homes for people who live with Down syndrome as adults — something that, of course, didn’t happen much in remote antiquity when almost all life expectancies were short.

If we don’t give people like Joey a chance, perhaps we haven’t advanced beyond our ancestors as much as we think, apart from our better living conditions.




Saturday, 24 February 2024

The king of titans holds court.

 

Getting fraud down to a science? II

 Data Can Appear in Science Journals — Out of Thin Air


Recently, Retraction Watch, a site that helps keep science honest, noted some statistical peculiarities in a paper published last September in the Journal of Cleaner Production, “Green innovations and patents in OECD countries.” The site was tipped off by a PhD student in economics that “For several countries, observations for some of the variables the study tracked were completely absent.”

But That Wasn’t the Big Surprise

The big surprise was when the student wrote to one of the authors:

In email correspondence seen by Retraction Watch and a follow-up Zoom call, [Almas] Heshmati told the student he had used Excel’s autofill function to mend the data. He had marked anywhere from two to four observations before or after the missing values and dragged the selected cells down or up, depending on the case. The program then filled in the blanks. If the new numbers turned negative, Heshmati replaced them with the last positive value Excel had spit out. “No data? No problem!” …

But it got worse. Heshmati’s data, which the student convinced him to share, showed that in several instances where there were no observations to use for the autofill operation, the professor had taken the values from an adjacent country in the spreadsheet. New Zealand’s data had been copied from the Netherlands, for example, and the United States’ data from the United Kingdom.

“UNDISCLOSED TINKERING IN EXCEL BEHIND ECONOMICS PAPER,” RETRACTION WATCH, FEBRUARY 5, 2024

“It’s Pretty Egregious”

While many researchers decried the results, University of Copenhagen econometrician Søren Johansen said something worth pondering: “The reason it’s cheating isn’t that he’s done it, but that he hasn’t written it down,” adding, “It’s pretty egregious.”

Pomona College business prof Gary Smith weighed in at Retraction Watch, explaining how blanks can come to seem like information in statistical papers.

Imputation (the technique the authors were using), he says, is not always unfair: “If we are measuring the population of an area and are missing data for 2011, it is reasonable to fit a trend line and, unless there has been substantial immigration or emigration, use the predicted value for 2011. Using stock returns for 2010 and 2012 to impute a stock return for 2011 is not reasonable.” In other words, whether imputation is unfair depends on whether anything was likely to have happened in the period for which data is missing that would change the results. 
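
For readers curious what legitimate, disclosed imputation looks like in practice, here is a minimal pandas sketch (my own illustration with made-up numbers, not the paper’s data): a trend-based interpolation of a single missing year, the kind of gap-filling Smith describes as potentially reasonable.

import pandas as pd

# Hypothetical population series (in millions) with 2011 missing.
population = pd.Series(
    [4.90, 4.95, None, 5.06, 5.12],
    index=[2009, 2010, 2011, 2012, 2013],
)

# Linear interpolation fits the local trend and fills the single gap;
# crucially, a paper using this should say so and describe the method.
filled = population.interpolate(method="linear")
print(filled[2011])  # 5.005, halfway between the 2010 and 2012 values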

Another Story

But, he says, the way the authors of the controversial paper were using the technique was another story:

The most extreme cases are where a country has no data for a given variable. The authors’ solution was to copy and paste data for another country. Iceland has no MKTcap data, so all 29 years of data for Japan were pasted into the Iceland cells. Similarly, the ENVpol (environmental policy stringency) data for Greece (with six years imputed) were pasted into Iceland’s cells and the ENVpol data for Netherlands (with 2013-2018 imputed) were pasted into New Zealand’s cells. The WASTE (municipal waste per capita) data for Belgium (with 1991-1994 and 2018 imputed) were pasted into Canada. The United Kingdom’s R&Dpers (R&D personnel) data were pasted into the United States (though the 10.417 entry for the United Kingdom in 1990 was inexplicably changed to 9.900 for the United States).

The copy-and-pasted countries were usually adjacent in the alphabetical list (Belgium and Canada, Greece and Iceland, Netherlands and New Zealand, United Kingdom and United States), but there is no reason an alphabetical sorting gives the most reasonable candidates for copying and pasting. Even more troubling is the pasting of Japan’s MKTcap data into Iceland and the simultaneous pasting of Greece’s ENVpol data into Iceland. Iceland and Japan are not adjacent alphabetically, suggesting this match was chosen to bolster the desired results. 

GARY SMITH, “HOW (NOT) TO DEAL WITH MISSING DATA: AN ECONOMIST’S TAKE ON A CONTROVERSIAL STUDY,” RETRACTION WATCH, FEBRUARY 21, 2024

He concludes, “There is no justification for a paper not stating that some data were imputed and describing how the imputation was done.”

What Counts as Science

Perhaps Elsevier, the journal’s publisher, agrees with his view. Retraction Watch announced that Elsevier would retract the paper:

As we reported earlier this month, Almas Heshmati of Jönköping University mended a dataset full of gaps by liberally applying Excel’s autofill function and copying data between countries – operations other experts described as “horrendous” and “beyond concern.” …

Elsevier, in whose Journal of Cleaner Production the study appeared, moved quickly on the new information. A spokesperson for the publisher told us yesterday: “We have investigated the paper and can confirm that it will be retracted.” 

“EXCLUSIVE: ELSEVIER TO RETRACT PAPER BY ECONOMIST WHO FAILED TO DISCLOSE DATA TINKERING,” RETRACTION WATCH, FEBRUARY 22, 2024

If Elsevier doesn’t end up retracting the paper, that will certainly say something about what counts as science today.

Note: As noted above, the first author of the paper, Almas Heshmati, was the one originally interviewed by the student. The second author, Mike Tsionas, died recently.

"Settled Science" vs. Actual science.

Stifling Opposition Is the Real “Anti-Science”


The advancement of science is one of mankind’s greatest triumphs. And who could be against it? Deploying the raw power of rational analysis, science exponentially increases our understanding of the natural world and leads to wondrous applications to improve the human condition.

But these days, science has become something of a divisive concept. It’s not that most people reject the scientific method or science’s many achievements. Rather, because some in the scientific establishment co-opt the term “science” as a means of exerting control over policy or to further favored ideological agendas, trust in the scientific sector is deflating.

You know the types. They can be seen regularly on cable TV claiming righteously that “the science is settled” about the rightness of their opinions — for example, the medical propriety of “affirming” gender confusion in children with puberty blockers. Then, they deploy the pejorative “anti-science” against those who disagree to stifle other perspectives.

The Antithesis of Science

But shutting critics up is the antithesis of science, properly understood. Indeed, stifling opposition is the real “anti-science” because it betrays the fundamental precepts of the scientific method, an approach to learning that requires continual argumentation, (sometimes bitter) disagreements, and the never-ending willingness to challenge accepted orthodoxies. In this sense, “the science” is never “settled” but always open to revised understandings. Otherwise, science mutates into dogma, which suppresses the pursuit of knowledge. Indeed, sometimes that is the point.

Examples of once-unquestioned “truths” overturned by subsequent discoveries are legion. Here’s a recent example. Biologists used to believe that the human appendix was a useless vestigial organ. But because science is dynamic, this once uncontroversial perspective was challenged. And what do you know? “Science” has now discovered at least two valuable purposes for the appendix: it supports the body’s immune system and serves as a “bank” of sorts for storing beneficial gut bacteria.

Now, imagine if the scientists who worked to attain a better understanding of the appendix had been prevented from exploring that subject because the “scientific consensus” had determined previously that the organ had no beneficial purpose. What if the self-appointed guardians of perceived medical wisdom had dissuaded researchers from pursuing their investigations for fear of losing university tenure, being scorned by colleagues, or having research funding blocked? Valuable knowledge would have been lost. New medical approaches for treating an infected appendix would never be developed. The mistaken scientific understanding would have remained, yes, “settled.”

The Costs of “Settled Science”

Alas, these days the science establishment too often engages in just such censorship when it involves controversial scientific issues. We saw that on full display during the COVID-19 pandemic. When three noted epidemiologists (pictured above) questioned the wisdom of societal shutdowns and keeping children out of school, in the Great Barrington Declaration (GBD), rather than engage its content — as would have been the proper scientific approach — the public health establishment instead attempted to destroy the messengers. For example, then-National Institutes of Health director Francis Collins slandered the authors as “fringe,” and Anthony Fauci worked to undermine the GBD in the media. One of the authors, Stanford University professor Dr. Jay Bhattacharya, even found himself scorned by his own academic community for contesting the “settled science.”

Funny that. In the end, the GBD proved to have the better argument, illustrating the terrible harm that can be caused by stifling the scientific method and suppressing dissenting views.

Or consider the hot-button topic of evolution. For decades public spokespersons for the scientific establishment have insisted that the contemporary theory of evolution is unchallengeable. Oxford evolutionary biologist Richard Dawkins even went so far as to claim that “if you meet somebody who claims not to believe in evolution, that person is ignorant, stupid or insane (or wicked, but I’d rather not consider that).” Talk about chilling open scientific inquiry!

Yet, in 2016, a group of leading evolutionary and cell biologists convened a conference at the Royal Society in London. Many scientists who attended openly called for a new theory of evolution because of their increasing doubts about the supposed creative power of Darwin’s mechanism of natural selection. Are all these scientists “ignorant, stupid or insane”? Of course not. They are simply “doing science.”

The same vituperative anti-science approach to stifling critics was pursued by the scientific establishment during the embryonic stem cell debate between 2001 and 2008. After President George W. Bush funded embryonic stem cell research but also placed modest federal funding limitations on the experiments, he and supporters of his policy were accused of imposing their religious beliefs against “the gold standard” of regenerative medicine that could soon allow disabled people to throw away their wheelchairs. Scientific arguments that adult stem cells offered the better hope of developing treatments for a wide array of medical conditions were similarly attacked.

The Proof Is in the Pudding

More than twenty years later, what do we see? Embryonic stem cell research was mostly hype. There is not one FDA-approved treatment using embryonic stem cells. Meanwhile, adult stem cells are used to treat a wide array of pathologies. In other words, despite all the name-calling and screeching about interference with the scientific consensus, the heterodox theorists were right.

That isn’t always true, of course. Established views frequently prove correct when challenged. But that isn’t the point. What matters is that for science to be “science,” perceived truths — no matter how seemingly settled — must always be subject to rethinking. The defense of generally accepted views should be based on evidence, not personal denigration of the challengers.

Alas, they never learn. Whether the scientific issue involves climate change, the safety of vaccines, how best to care for children with gender dysphoria, or the alleged scientific support in favor of Darwinian evolution, etc., the scientific establishment continues to brand those who contest their opinions (as a column in Scientific American put it recently) “anti-science” for rejecting “mainstream scientific views.”

That’s Baloney

Stifling the messy and contentious process required for scientific knowledge to advance undermines science. Yes, that means charlatans and frauds may, at times, successfully beguile the ignorant. But just like the most efficacious answer to bad speech is good speech, the way to overcome bad science is for good science to demonstrate its veracity. Attempts to short-circuit that contentious process betray the very purposes science is supposed to serve.


Yet another of the fossil record's explosions.

 Fossil Friday: The Big Bang of Tertiary Birds and a Phylogenetic Mess


This Fossil Friday we look into the abrupt origin of birds, which is just one of the many discontinuities in the fossil record of life on Earth. The image features a fossil bird of the genus Rhynchaeites from the famous Eocene Messel pit in Germany. It is similar to, and related to, modern ibises.

While feathered dinosaurs and primitive toothed birds were abundant during the Cretaceous period, only the chicken and duck clade (Galloanserae) appeared in the Late Cretaceous (Field et al. 2020), while all the other groups of modern birds (Neoaves) appeared suddenly and with great diversity in the Lower Tertiary (today called Paleogene). Indeed, modern crown group birds appear and diversify so abruptly that it has been called a “Big Bang of Tertiary birds” by some paleo-ornithologists (Feduccia 1995, 2003a, 2014, Ksepka et al. 2017). Some of their colleagues did not like such an explosive view for obvious reasons (e.g. Dyke 2003, van Tuinen et al. 2003), but Alan Feduccia addressed and rebutted all critics (Feduccia 2003b), and emphasized that “a rapid, explosive Tertiary radiation best explains why resolving phylogenetic relationships of modern orders remains intractable.” James (2005) reviewed the Paleogene fossil record of birds and found that

before the Paleogene, fossils of putative neornithine birds are sparse and fragmentary (Hope 2002), and their phylogenetic placement is all the more equivocal. … The weak molecular genetic signal found so far for relationships among many higher-level taxa of birds could be explained if there was an early, explosive radiation of birds into diverse ecological niches. … Perhaps the greatest unsolved problem in avian systematics is the evolutionary relationships among modern higher-level taxa.

Rocks vs Clocks

Molecular clock studies, which suggested that modern birds might have originated much earlier in the Cretaceous, were thoroughly rejected as incompatible with the fossil record (Benton 1999), which could rather suggest that the molecular clock runs faster during phases of rapid diversification in the major radiations. Nevertheless, van Tuinen (2009) estimated for the Timetree of Life that Neoaves initially diversified as early as 95 million years ago, followed by further diversification 87-75 million years ago and in the Tertiary (van Tuinen 2009). The author hoped that “more Cretaceous and Paleocene fossil material” might resolve the conflict but admitted that “phylogenetic resolution among the main divergences within Neoaves continues to remain a major hurdle, with most neoavian orders appearing to have diverged in close succession … indicating a rapid evolutionary radiation.” Six years later, new fossil discoveries had not yet come to the rescue: a fossil-calibrated time line of animal evolutionary history (Benton et al. 2015; also see the Fossil Calibration Database) suggested an age of 86.8-60.2 million years for crown group Neoaves, even though the authors explicitly acknowledged the Paleocene penguin Waimanu from New Zealand as the oldest unequivocal neoavian fossil record. Clearly, the molecular clock studies still do not agree with the empirical data of paleontological research.

A few scientists claimed that the problem can be resolved, such as the study by Ericson et al. (2006), which presented “the first well-resolved molecular phylogeny for Neoaves, together with divergence time estimates calibrated with a large number of stratigraphically and phylogenetically well-documented fossils.” According to these authors, their results “do not contradict palaeontological data and show that there is no solid molecular evidence for an extensive pre-Tertiary radiation of Neoaves.” However, their result was quickly critiqued and refuted by Brown et al. (2007), who found that “nuclear DNA does not reconcile ‘rocks’ and ‘clocks’ in Neoaves”. They noted that “the discrepancy between fossil- and molecular-based age estimates for the diversification of modern birds has persisted despite increasingly large datasets on both sides”, and their reanalysis of Ericson’s data documented “that there is no reliable molecular evidence against an extensive pre-Tertiary radiation of Neoaves.” In the same year, Zhang (2007) confirmed that “paleontological studies showed that modern avian groups probably first appeared in the Paleocene and experienced an explosive radiation in the early Cenozoic.”

Brown et al. (2008) called this problem the “rock-clock gap” and said that “determining an absolute timescale for avian evolutionary history has proven contentious. The two sources of information available, paleontological data and inference from extant molecular genetic sequences (colloquially, ‘rocks’ and ‘clocks’), have appeared irreconcilable; … These two sources of data therefore appear to support fundamentally different models of avian evolution.” Their own study of mitochondrial DNA did “fail to reconcile molecular genetic divergence time estimates with dates taken from the fossil record; instead, we find strong support for an ancient origin of modern bird lineages.” Thus, the problem turned out to be quite stubborn and refused to go away with more data. On the contrary, each new study reinforced the problem. For example, the attempt by Pratt et al. (2008) to resolve the deep phylogeny of Neoaves produced molecular datings from mitochondrial genomes that “support a major diversification of at least 12 neoavian lineages in the Late Cretaceous.” Another example is the study by Pacheco et al. (2011), who used several molecular dating approaches and conservative calibration points but still “found time estimates slightly younger than those reported by others, most of the major orders originated prior to the K/T boundary.” Even more interestingly, these authors revealed the secret reason why so many evolutionary biologists do not like the Big Bang model: “proponents of this hypothesis do not provide viable genetic mechanisms for those changes” (Pacheco et al. 2011). In other words, if there were such Big Bangs, then Darwinism cannot plausibly explain them. This is why these abrupt appearances in the history of life fascinate me and will be the subject of my book project, “The Big Bangs of Life.”

Phylogenomics vs Clocks

But it gets worse. Not only did rocks and clocks conflict, but phylogenomic studies increasingly supported the Big Bang of Tertiary birds, so that now molecular trees conflicted with molecular clocks as well. The Big Bang view was most strongly confirmed by the seminal study of Jarvis et al. (2014), a genome-scale phylogenetic analysis by more than 100 authors (!), who found that “even with whole genomes, some of the earliest branches in Neoaves proved challenging to resolve, which was best explained by massive protein-coding sequence convergence and high levels of incomplete lineage sorting that occurred during a rapid radiation after the Cretaceous-Paleogene mass extinction event about 66 million years ago.” This result was widely reported by the popular science media with sensational headlines about the mapping of the “‘Big Bang’ of Bird Evolution” (AMNH 2014, Duke University 2014, BGI Shenzhen 2014, Smithsonian Insider 2014), or as Time Magazine titled it, “There was a Big Bang for Birds” (Kluger 2014), or “Rapid bird evolution after the age of dinosaurs unprecedented, study confirms” (University of Sydney 2014). Casey Luskin (2014) then also reported for Evolution News how this “massive genetic study confirms birds arose in Big Bang-type of explosion.”

Another, more recent comprehensive phylogenetic study again confirmed such an extremely rapid “major radiation of crown birds in the wake of the Cretaceous–Palaeogene (K–Pg) mass extinction” (Prum et al. 2015; also see Wink et al. 2023 for a perfect visualization of Prum’s results). Claramunt & Cracraft (2015) “combined DNA sequences of clock-like genes for most avian families with 130 fossil birds to generate a new time tree for Neornithes” and concluded that “it was not until the Cretaceous-Paleogene transition (66 million years ago) that Neornithes began to diversify rapidly around the world.” Brusatte et al. (2015) concluded in their review article on the origin and diversification of birds that “after the mass extinction, modern birds (members of the avian crown group) explosively diversified, culminating in more than 10,000 species distributed worldwide today.” Braun et al. (2019) still acknowledged that Neoaves “appears to have undergone a rapid radiation near the end Cretaceous mass extinction (the K-Pg boundary).” This looks like a solid scientific consensus, but not so fast. After all, we are dealing with evolutionary biology, where almost anything can happen.

A New Study

Indeed, this month a new paper by Wu et al. (2024) came to a totally different result from the consensus of virtually all previous studies. The press release (Yirka 2024) says that this “new study suggests birds began diversifying long before dinosaurs went extinct” and that “the research team found evidence that the Neoaves divergence path began long before the asteroid struck.” The team of mainly Chinese authors analyzed the genomes of hundreds of species of birds and arrived at a new tree of Neoaves. The authors concluded that “the evolution of modern birds followed a slow process of gradualism rather than a rapid process of punctuated equilibrium, with limited interruption by the KPg catastrophe”. They dated the common ancestor of Neoaves to 130 million years ago, in the Early Cretaceous, and its diversification to the Late Cretaceous, even though not a single Cretaceous fossil of this group exists, which had already led the world’s foremost expert on the fossil record, Michael Benton (1999), to strongly reject such hypotheses as impossible.

Unsurprisingly, other experts are not convinced either. One noted that “if the new study was right, there should be fossils of all major groups of living birds from well before the asteroid impact. But almost none have been found. The signal from the fossil record is not ambiguous” (Berv, quoted in Zimmer 2024 for the New York Times). Likewise, a comment in the prestigious journal Science asked: “if major bird groups really did emerge before the asteroid impact, then why have almost no ancient bird fossils from that time period been found?” (Jacobs 2024). Spot on. Still, we are left with a conflict between molecular evidence and the fossil record, which should agree if Darwinism is correct.

Conflicting Trees

However, the conflicts are by no means restricted to the timing of bird evolution. Even though Darwinism would predict that all the different sources of data should point to one true tree of life, there is fundamental conflict among the various attempts to reconstruct the tree of birds in the 20th and 21st centuries. This conflict is visible in the results of three general methodological approaches (DNA-DNA hybridization, morphological cladistics, and phylogenomics), as well as between morphological and molecular data, and even between different sets of molecular genetic data.

DNA-DNA Hybridization

In the 1970s and 1980s the American ornithologists and molecular biologists Charles Sibley and Jon Edward Ahlquist conducted DNA-DNA hybridization studies on numerous species of modern birds (Sibley & Ahlquist 1990, Sibley 1994; also see Wikipedia). Their revolutionary tree of 1,100 species of living birds, called “the tapestry,” introduced a major revision of avian classification.

Sibley and Ahlquist used the melting temperatures of hybridized DNA strands from two species as a proxy for their overall genetic similarity. Their methods were strongly critiqued as flawed and phenetic (Houde 1987, Lanyon 1992, Harshman 1994, Marks 2011), but even John Harshman found that “the data in Sibley and Ahlquist (1990), properly analyzed, have a strong phylogenetic signal.” Nevertheless, only a few of the supraordinal groups from their tree survived later studies, mainly the basal split between Galloanserae and Neoaves.
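
For readers unfamiliar with the method, the following is a minimal sketch of the kind of distance-based (phenetic) analysis that such data invite: melting-temperature differences (ΔT50H) treated as pairwise distances and joined by average-linkage (UPGMA-style) clustering. The taxa and numbers are invented for illustration only; this is not Sibley and Ahlquist’s actual data or code.

```python
# Illustrative sketch: delta T50H values (melting-temperature differences)
# used as pairwise distances for average-linkage (UPGMA-style) clustering.
# All values below are invented.
from scipy.cluster.hierarchy import average
from scipy.spatial.distance import squareform

taxa = ["duck", "chicken", "ibis", "pigeon"]

# Hypothetical delta T50H matrix in degrees Celsius; larger = less similar DNA.
delta_t50h = [
    [0.0,  3.1,  9.8, 10.2],
    [3.1,  0.0,  9.5, 10.0],
    [9.8,  9.5,  0.0,  6.7],
    [10.2, 10.0,  6.7,  0.0],
]

# Convert the symmetric matrix to condensed form and cluster.
linkage = average(squareform(delta_t50h))

# Each row records which two clusters were merged, at what distance,
# and how many taxa the new cluster then contains.
print(linkage)
```

The sketch also shows why critics called the approach phenetic: the tree is built from overall similarity rather than from shared derived characters.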

It is of course a cheap point to say today that the method of DNA-DNA hybridization is obsolete and was just a short-lived and misguided fad from the early days of phylogenetics. But was it? Think about it. Instead of comparing arbitrarily selected and arbitrarily defined morphological characters, or looking only at a selection of sequenced genes, this method compared the overall similarity between complete genomes, the whole shebang of DNA. If anything, it is this very method that should have recovered the echo of evolutionary history and common descent. That its results failed to agree with the more modern cladistic and phylogenomic studies is basically evidence for the total bankruptcy of Darwinism.

Hennigian Phylogenetics (Cladistics)

Another school of phylogenetic methodology, which dominated the pre-phylogenomic era, was Hennigian phylogenetic systematics, also known as cladistics. It was mainly based on data from comparative morphology and used only shared derived similarities (called synapomorphies) to reconstruct the most parsimonious tree topology. In bird phylogenetics its most prominent representative was certainly the American paleo-ornithologist Joel Cracraft, curator of birds at the American Museum of Natural History in New York (Cracraft 1981, Cracraft & Clarke 2001, Cracraft et al. 2004). Even though Cracraft’s work was not without criticism, even from fellow cladists (e.g., Olson 1982), it arguably represents the culmination of traditional cladistic studies of avian phylogeny. Other important cladistic studies based on bird morphology were contributed by Livezey & Zusi (2001, 2006, 2007), along with many other works on particular neoavian subgroups. The results differed from each other, from Sibley & Ahlquist’s “tapestry,” and from more modern phylogenomic trees.
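
To illustrate the cladistic logic in its simplest form, here is a sketch of Fitch’s parsimony count for a single unordered character on a fixed tree; a real parsimony analysis repeats this over many characters and many candidate topologies to find the shortest tree. The tree, taxa, and character states are invented for illustration, and the helper name fitch_score is my own.

```python
# Illustrative sketch of Fitch parsimony: count the minimum number of
# character-state changes a fixed tree topology requires for one character.

def fitch_score(tree, states):
    """tree: nested tuples of taxon names; states: taxon -> character state."""
    changes = 0

    def post_order(node):
        nonlocal changes
        if isinstance(node, str):                # leaf: its observed state
            return {states[node]}
        left, right = (post_order(child) for child in node)
        common = left & right
        if common:                               # children agree: no change needed
            return common
        changes += 1                             # children disagree: one change
        return left | right

    post_order(tree)
    return changes

# Hypothetical topology ((ibis, pigeon), (duck, chicken)) and a binary character.
tree = (("ibis", "pigeon"), ("duck", "chicken"))
states = {"ibis": 1, "pigeon": 1, "duck": 0, "chicken": 0}
print(fitch_score(tree, states))  # this topology needs only 1 change
```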

By the way: Cracraft (2001) also looked into the rocks vs clocks problem. He acknowledged that “the fossil record has been used to support the origin and radiation of modern birds (Neornithes) in Laurasia after the Cretaceous-Tertiary mass extinction event, whereas molecular clocks have suggested a Cretaceous origin for most avian orders.” He turned to the vicariance biogeography of birds as a new source of data to resolve the problem and concluded “that neornithines arose in Gondwana prior to the Cretaceous-Tertiary extinction event.” However, this is fully consistent with the Big Bang hypothesis, which concerns the radiation of Neoaves, not of Neornithes; after all, we do have a Late Cretaceous fossil record of fowl (Galloanserae). Fourteen years later, Claramunt & Cracraft (2015) clarified, as already mentioned above, “that the most recent common ancestor of modern birds inhabited South America around 95 million years ago, but it was not until the Cretaceous-Paleogene transition (66 million years ago) that Neornithes began to diversify rapidly around the world.”

Phylogenomics

In the 21st century, phylogenomics came to dominate the field of bird phylogenetics; it mainly uses maximum likelihood and Bayesian methods to reconstruct trees from DNA sequence data. Within a few years several very extensive phylogenomic studies appeared (e.g., Ericson et al. 2006, Hackett et al. 2008, Pratt et al. 2008, Pacheco et al. 2011, McCormack et al. 2013, Jarvis et al. 2014, Zhang et al. 2014, Prum et al. 2015, Reddy et al. 2017, Houde et al. 2019, Kimball et al. 2019, Braun & Kimball 2021, Kuhl et al. 2021, Yu et al. 2021, Wu et al. 2024; also see the Bird Phylogeny website), which conflicted not just with the previous phylogenies but also with each other (Mayr 2011, Matzke et al. 2012, Braun et al. 2019).

This led some experts, such as Poe & Chubb (2004) and Suh (2016), to propose instead a hard polytomy (called the “neoavian comb” by Cracraft et al. 2004) based on an explosive evolution, which brings us right back to the Big Bang of birds. As Feduccia (2014) said: “our continued inability to produce a veracious phylogeny of higher avian taxa is likely related to a Paleogene explosive burst or ‘big bang’ evolution of bird and mammal evolution, resulting in short ordinal internodes.” The resolution of this polytomy has been called “the greatest current challenge of avian systematics” and a “last frontier” that “is still elusive” (Pratt et al. 2008). The numerous phylogenomic studies agree on only a few higher clades, dubbed the “magnificent seven” by Reddy et al. (2017), which already indicates how rare such agreement is; and even those few clades conflict with the older trees based on DNA-DNA hybridization and morphological cladistics (but see Mayr 2007, 2008 for a few exceptions).

Collapsing Trees

The phylogenetic conflicts and incongruent bird trees described above exactly confirm a point that I recently made in two other Evolution News articles for Fossil Friday, on the phylogeny of arachnids (Bechly 2023) and of insectivore mammals (Bechly 2024): when you look at the numerous published phylogenetic trees of a certain group of organisms and then calculate a strict consensus tree as a kind of common denominator, the result generally tends to be an unresolved polytomy, with basically only the pre-Darwinian Linnean classification of phyla, classes, orders, and families surviving this collapse of phylogenies. This is highly unexpected under Darwinian assumptions but very much resonates with the views of Darwin critics.
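
For readers who want to see what “calculating a strict consensus” amounts to, here is a minimal sketch under simplifying assumptions (trees written as nested tuples over the same taxa, with invented examples): only the clades shared by every input tree are kept, and everything else collapses into a polytomy.

```python
# Illustrative sketch of a strict consensus: keep only the clades on which
# all input trees agree. Trees and taxa are invented for illustration.

def clades(tree):
    """Collect every clade (the leaf set of each internal node) of a nested-tuple tree."""
    result = set()

    def walk(node):
        if isinstance(node, str):
            return frozenset([node])
        leaves = frozenset().union(*(walk(child) for child in node))
        result.add(leaves)
        return leaves

    walk(tree)
    return result

def strict_consensus(trees):
    """Clades present in every tree; all other groupings are left unresolved."""
    shared = clades(trees[0])
    for tree in trees[1:]:
        shared &= clades(tree)
    return shared

# Two hypothetical, partly conflicting trees over the same four taxa.
tree_a = ((("ibis", "pigeon"), "duck"), "chicken")
tree_b = (("ibis", "duck"), ("pigeon", "chicken"))
print(strict_consensus([tree_a, tree_b]))  # only the full taxon set survives
```

With conflicting input trees, only the trivial grouping of all taxa survives, which is exactly the kind of collapse into an unresolved polytomy described above.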

This is even implicitly, and a bit cryptically, acknowledged in mainstream findings such as those of Gordon et al. (2021), who said:

Phylogenomic analyses have revolutionized the study of biodiversity, but they have revealed that estimated tree topologies can depend, at least in part, on the subset of the genome that is analyzed. For example, estimates of trees for avian orders differ if protein-coding or non-coding data are analyzed. The bird tree is a good study system because the historical signal for relationships among orders is very weak, which should permit subtle non-historical signals to be identified, while monophyly of orders is strongly corroborated, allowing identification of strong non-historical signals.

Maybe non-history (in the sense of uncommon descent) is the simple reason for a non-historical signal.

Braun et al. (2019) concluded in their review of the phylogenomic era in avian phylogenetics:

Reconstructing relationships among extant birds (Neornithes) has been one of the most difficult problems in phylogenetics, and, despite intensive effort, the avian tree of life remains (at least partially) unresolved. Thus far, the most difficult problem is the relationship among the orders of Neoaves, the major clade that includes the most (~95%) named bird species.

Explaining Away Conflicting Evidence

Of course, all of this reflects the substantial body of conflicting data that does not align with an unambiguous nested hierarchy, contrary to the predictions of neo-Darwinism and the bold (and false) claims of its modern popularizers like Richard Dawkins. Something is way off, and mainstream evolutionary biologists simply ignore it, happily producing one conflicting tree after another without ever questioning the underlying assumptions or even the general Darwinian paradigm. Conflicting evidence is explained away with cheap ad hoc hypotheses like convergence, ghost lineages, or incomplete lineage sorting. Torres & Van Tuinen (2013) said that “rampant phylogenetic conflict at the ordinal level in modern birds can be explained by ordinal diversification taking place over a short time interval.” However, this is not an explanation of the problem but a description of it!

We can conclude that fossil and molecular data conflict over when and how quickly modern birds originated, and that molecular and morphological data conflict over the reconstruction of the assumed bird tree of life. Why is there such a stark conflict, when Darwinism would naturally predict that different lines of evidence should converge on one true evolutionary history of birds? Again, a quite obvious explanation could be that there simply was no such history, or at least that totally different causal mechanisms were at work.

Abrupt Origins

The most important take-home message from this article is this: in spite of the new study by Wu et al. (2024), there is overwhelming evidence, recognized by the vast majority of mainstream experts, that there was an explosive diversification of modern birds (Neoaves) in the Lower Tertiary (Paleogene). There was an abrupt origin, a burst of biological creativity, a genuine Big Bang of modern birds, which is best explained by an infusion of new information from an intelligent agent outside the system. What do evolutionary biologists suggest instead? They say that the global collapse of forest ecosystems after the end-Cretaceous impact killed off all arboreal bird lineages, and that the remaining ground-dwelling ancestors of modern birds rapidly diversified afterwards (Field et al. 2018). That is yet another description of the problem rather than an explanation, which seems to be a recurring theme in evolutionary biology.