
Friday, 1 March 2024

A theory of everything re: design detection? II

 Specified Complexity and a Tale of Ten Malibus


Yesterday in my series on specified complexity, I promised to show how all this works with an example of cars driving along a road. The example, illustrating what a given value of specified complexity means, is adapted from section 3.6 of the second edition of The Design Inference, from which I quote extensively. Suppose you witness ten brand new Chevy Malibus drive past you on a public road in immediate, uninterrupted succession. The question that crosses your mind is this: Did this succession of ten brand new Chevy Malibus happen by chance?

Your first reaction might be to think that this event is a publicity stunt by a local Chevy dealership. In that case, the succession would be due to design rather than to chance. But you don’t want to jump to that conclusion too quickly. Perhaps it is just a lucky coincidence. But if so, how would you know? Perhaps the coincidence is so improbable that no one should expect to observe it as happening by chance. In that case, it’s not just unlikely that you would observe this coincidence by chance; it’s unlikely that anyone would. How, then, do you determine whether this succession of identical cars could reasonably have resulted by chance?

Obviously, you will need to know how many opportunities exist to observe this event. It’s estimated that in 2019 there were 1.4 billion motor vehicles on the road worldwide. That would include trucks, but to keep things simple let’s assume all of them are cars. Although these cars will appear on many different types of roads, some with traffic so sparse that ten cars in immediate succession would almost never happen, to say nothing of ten cars having the same late make and model, let’s give chance every opportunity to succeed by assuming that all these cars are arranged in one giant succession of 1.4 billion cars arranged bumper to bumper.

But it’s not enough to look at one static arrangement of all these 1.4 billion cars. Cars are in motion and continually rearranging themselves. Let’s therefore assume that the cars completely reshuffle themselves every minute, and that we might have the opportunity to see the succession of ten Malibus at any time across a hundred years. In that case, there would be no more than 74 quadrillion opportunities for ten brand new Chevy Malibus to line up in immediate, uninterrupted succession.
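That 74 quadrillion figure is easy to check. Here is a minimal back-of-the-envelope sketch, assuming one full reshuffle per minute for a century and counting each bumper-to-bumper position where a run of ten cars could begin:

```python
# Back-of-the-envelope check of the "74 quadrillion opportunities" figure.
cars = 1_400_000_000                            # estimated vehicles worldwide (2019)
run_length = 10                                 # ten Malibus in a row
starts_per_shuffle = cars - (run_length - 1)    # positions where a run of 10 can begin

minutes_per_year = 365.25 * 24 * 60
shuffles = 100 * minutes_per_year               # one reshuffle per minute for 100 years

opportunities = starts_per_shuffle * shuffles
print(f"{opportunities:.2e}")                   # 7.36e+16, about 74 quadrillion
```

The product comes out just under 7.4 × 10^16, matching the "no more than 74 quadrillion opportunities" stated above.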

So, how improbable is this event given these 1.4 billion cars and their repeated reshuffling? To answer this question requires knowing how many makes and models of cars are on the road and their relative proportions (let’s leave aside how different makes are distributed geographically, which is also relevant, but introduces needless complications for the purpose of this illustration). If, per impossibile, all cars in the world were brand new Chevy Malibus, there would be no coincidence to explain. In that case, all 1.4 billion cars would be identical, and getting ten of them in a row would be an event of probability 1 regardless of reshuffling.

But Clearly, Nothing Like That Is the Case

Go to Cars.com, and using its car-locater widget you’ll find 30 popular makes and over 60 “other” makes of vehicles. Under the make of Chevrolet, there are over 80 models (not counting variations of models — there are five such variations under the model Malibu). Such numbers help to assess whether the event in question happened by chance. Clearly, the event is specified in that it answers to the short description “ten new Chevy Malibus in a row.” For the sake of argument, let’s assume that achieving that event by chance is going to be highly improbable given all the other cars on the road and given any reasonable assumptions about their chance distribution.

But there’s more work to do in this example to eliminate chance. No doubt, it would be remarkable to see ten new Chevy Malibus drive past you in immediate, uninterrupted succession. But what if you saw ten new red Chevy Malibus in a row drive past you? That would be even more striking, since the cars would then all share the same color as well. Or what about simply ten new Chevies in a row? That would be less striking. But note how description lengths covary with probabilities: “ten new red Chevy Malibus in a row” has a longer description length than “ten new Chevy Malibus in a row,” but it corresponds to an event of smaller probability. Conversely, “ten new Chevies in a row” has a shorter description length than “ten new Chevy Malibus in a row,” but it corresponds to an event of larger probability.

What we find in examples like this is a tradeoff between description length and probability of the event described (a tradeoff that specified complexity models). In a chance elimination argument, we want to see short description length combined with small probability (implying a larger value of specified complexity). But typically these play off against each other. “Ten new red Chevy Malibus in a row” corresponds to an event of smaller probability than “ten new Chevy Malibus in a row,” but its description length is slightly longer. Which event seems less readily ascribable to chance (or, we might say, worthier of a design inference)? A quick intuitive assessment suggests that the probability decrease outweighs the increase in description length, and so we’d be more inclined to eliminate chance if we saw ten new red Chevy Malibus in a row as opposed to ten of any color.

The lesson here is that probability and description length are in tension, so that as one goes up the other tends to go down, and that to eliminate chance both must be suitably low. We see this tension by contrasting “ten new Chevy Malibus in a row” with “ten new Chevies in a row,” and even more clearly with simply “ten Chevies in a row.” The latter has a shorter description length (lower description length) but also much higher probability. Intuitively, it is less worthy of a design inference because the increase in probability so outweighs the decrease in description length. Indeed, ten Chevies of any make and model in a row by chance doesn’t seem farfetched given the sheer number of Chevies on the road, certainly in the United States.

But There’s More

Why focus simply on Chevy Malibus? What if the make and model varied, so that the cars in succession were Honda Accords or Porsche Carreras or whatever? And what if the number of cars in succession varied, so it wasn’t just 10 but also 9 or 20 or whatever? Such questions underscore the different ways of specifying a succession of identical cars. Any such succession would have been salient if you witnessed it. Any such succession would constitute a specification if the description length were short enough. And any such succession could figure into a chance elimination argument if both the description length and the probability were low enough. A full-fledged chance-elimination argument in such circumstances would then factor in all relevant low-probability, low-description-length events, balancing them so that where one is more, the other is less.  

All of this can, as we by now realize, be recast in information-theoretic terms. Thus, a probability decrease corresponds to a Shannon information increase, and a description length increase corresponds to a Kolmogorov information increase. Specified complexity, as their difference, now has the following property (we assume, as turns out to be reasonable, that some fine points from theoretical computer science, such as the Kraft inequality, are approximately applicable): if the specified complexity of an event is greater than or equal to n bits, then the grand event consisting of all events with at least that level of specified complexity has probability less than or equal to 2^(–n). This is a powerful result and it provides a conceptually clean way to use specified complexity to eliminate chance and infer design. 

Essentially, what specified complexity does is consider an archer with a number of arrows in his quiver and a number of targets of varying size on a wall, and ask for the probability that any one of these arrows will by chance land on one of these targets. The arrows in the quiver correspond to complexity, the targets to specifications. Raising 2 to the negative of the specified complexity then gives the grand probability that any of these arrows will hit any of these targets by chance.

Conclusion

Formally, the specified complexity of an event is the difference between its Shannon information and its Kolmogorov information. Informally, the specified complexity of an event is a combination of two properties, namely, that the event has small probability and that it has a description of short length. In the formal approach to specified complexity, we speak of algorithmic specified complexity. In the informal approach, we speak of intuitive specified complexity. But typically it will be clear from context which sense of the term “specified complexity” is intended.

In this series, we’ve defined and motivated algorithmic specified complexity. But we have not provided actual calculations of it. For calculations of algorithmic specified complexity as applied to real-world examples, I refer readers to sections 6.8 and 7.6 in the second edition of The Design Inference. Section 6.8 looks at general examples whereas section 7.6 looks at biological examples. In each of these sections, my co-author Winston Ewert and I examine examples where specified complexity is low, not leading to a design inference, and also where it is high, leading to a design inference.

For instance, in section 6.8 we take the so-called “Mars face,” a naturally occurring structure on Mars that looks like a face, and contrast it with the faces on Mount Rushmore. We argue that the specified complexity of the Mars face is too small to justify a design inference but that the specified complexity of the faces on Mount Rushmore is indeed large enough to justify a design inference.

Similarly, in section 7.6, we take the binding of proteins to ATP, as in the work of Anthony Keefe and Jack Szostak, and contrast it with the formation of protein folds in beta-lactamase, as in the work of Douglas Axe. We argue that the specified complexity of random ATP binding is close to 0. In fact, we calculate a negative value of the specified complexity, namely, –4. On the other hand, for the evolvability of a beta-lactamase fold, we calculate a specified complexity of 215, which corresponds to a probability of 2^(–215), or roughly a probability of 1 in 10^65. 
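The conversion between those last two numbers is simple arithmetic. A quick check, translating 215 bits of specified complexity into a base-10 probability:

```python
import math

# A specified complexity of n bits corresponds to a probability of 2^(-n).
# In base 10: 2^(-215) = 10^(-215 * log10(2)).
sc_bits = 215
digits = sc_bits * math.log10(2)
print(round(digits, 1))   # 64.7, so 2^(-215) is roughly 1 in 10^65
```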

With all these numbers, we estimate a Shannon information and a Kolmogorov information and then calculate a difference. The validity of these estimates and the degree to which they can be refined can be disputed. But the underlying formalism of specified complexity is rock solid. The details of that formalism and its applications go beyond a series titled “Specified Complexity Made Simple.” Those details can all be found in the second edition of The Design Inference.


Fake it till you make it?

 Fossil Friday: Piltdown Lizard Was Too Good to Check


This Fossil Friday features Tridentinosaurus antiquus, which was discovered in 1931 and described by Leonardi (1959) from the Early Permian (ca. 280 million years old) sandstone of the Italian Alps. The 10-inch-long fossil animal looks like a dark imprint of an Anolis lizard. It was attributed by Dalla Vecchia (1997) to the extinct Protorosauria (= Prolacertiformes) and considered to be “one of the oldest fossil reptiles and one of the very few skeletal specimens with evidence of soft tissue preservation” (Rossi et al. 2024), interpreted as carbonized skin showing the whole body outline like a photograph. Only the bones of the hind limbs were clearly visible.

The 90-year-old fossil find remained unique, as nothing similar was ever discovered again in the Permian of the Italian Alps (Starr 2024). This should have raised some red flags. However, why question a fossil that was “thought to be an important specimen for understanding early reptile evolution” (University College Cork 2024)? As journalists would say, it was too good to check. Instead the find was “celebrated in articles and books but never studied in detail” (University College Cork 2024).

Bombshell and Headlines

Now a new study (Rossi et al. 2024) of the famous fossil has turned out to be a bombshell, making global media headlines (University College Cork 2024). The scientists used sophisticated methods including ultraviolet light photography, 3D surface modeling, scanning electron microscopy, and Fourier transform infrared spectroscopy to analyze the apparent soft tissue of the fossil reptile. To their great surprise they discovered that “the material forming the body outline is not fossilized soft tissues but a manufactured pigment indicating that the body outline is a forgery,” which of course also throws into doubt the “validity of this enigmatic taxon.”

The study concludes that “The putative soft tissues of T. antiquus, one of the oldest known reptiles from the Alps, are fake and thus this specimen is not an exceptionally preserved fossil. Despite this, the poorly preserved long bones of the hindlimbs seem to be genuine.” But in the absence of novel information about the preserved skeleton, the authors “suggest caution in using T. antiquus in phylogenetic studies.”


Who Did It, and Why?

It is not known who perpetrated the forgery or why, but it was probably just an attempt to embellish the poor remains of the leg bones with some fancy painting (Starr 2024), with a coat of varnish added as a protective layer to hide the forgery from easy discovery (University College Cork 2024).

Italian paleontologist Valentina Rossi, the lead scientist of the study that uncovered the forgery, said in an article at The Conversation (Rossi 2024a) that “fake fossils are among us, passing almost undetected under the eye of experts all over the world. This is a serious problem — counterfeited specimens can mislead palaeontologists into studying an ancient past that never existed.” The reprinted article in Scientific American (Rossi 2024b) even admits in the subtitle, “Paleontology is rife with fake fossils that are made to cash in on illegal trade but end up interfering with science.” Let that sink in, and remember it when Darwinists try to ridicule Darwin critics who bring up forgeries such as Piltdown Man or Archaeoraptor. Don’t let them get away with claiming, despite knowing better, that such forgeries are not a real problem in evolutionary biology.

Therefore, in loving memory of the Piltdown Man forgery, and the Piltdown Fly (Bechly 2022), we may in the future call this specimen the Piltdown Lizard.

A theory of everything re: design detection?

 Specified Complexity as a Unified Information Measure


With the publication of the first edition of my book The Design Inference and its sequel No Free Lunch, elucidating the connection between design inferences and information theory became increasingly urgent. That there was a connection was clear. The first edition of The Design Inference sketched, in the epilogue, how the relation between specifications and small probability (complex) events mirrored the transmission of messages along a communication channel from sender to receiver. Moreover, in No Free Lunch, both Shannon and Kolmogorov information were explicitly cited in connection with specified complexity — which is the subject of this series.

But even though specified complexity as characterized back then employed informational ideas, it did not constitute a clearly defined information measure. Specified complexity seemed like a kludge of ideas from logic, statistics, and information. Jay Richards, guest-editing a special issue of Philosophia Christi, asked me to clarify the connection between specified complexity and information theory. In response, I wrote an article titled “Specification: The Pattern That Signifies Intelligence,” which appeared in that journal in 2005.

A Single Measure

In that article, I defined specified complexity as a single measure that combined under one roof all the key elements of the design inference, notably, small probability, specification, probabilistic resources, and universal probability bounds. Essentially, in the measure I articulated there, I attempted to encapsulate the entire design inferential methodology within a single mathematical expression. 

In retrospect, all the key pieces for what is now the fully developed informational account of specified complexity were there in that article. But my treatment of specified complexity there left substantial room for improvement. I used a counting measure to enumerate all the descriptions of a given length or shorter. I then placed this measure under a negative logarithm. This gave the equivalent of Kolmogorov information, suitably generalized to minimal description length. But because my approach was so focused on encapsulating the design-inferential methodology, the roles of Shannon and Kolmogorov information in its definition of specified complexity were muddied. 

My 2005 specified complexity paper fell stillborn from the press, and justly so given its lack of clarity. Eight years later, Winston Ewert, working with Robert Marks and me at the Evolutionary Informatics Lab, independently formulated specified complexity as a unified measure. It was essentially the same measure as in my 2005 article, but Ewert clearly articulated the place of both Shannon and Kolmogorov information in the definition of specified complexity. Ewert, along with Marks and me as co-authors, published this work under the title “Algorithmic Specified Complexity,” and then published subsequent applications of this work (see the Evolutionary Informatics Lab publications page). 

With Ewert’s lead, specified complexity, as an information measure, became the difference between Shannon information and Kolmogorov information. In symbols, the specified complexity SC for an event E was thus defined as SC(E) = I(E) – K(E). The term I(E) in this equation is just, as we saw in my last article, Shannon information, namely, I(E) = –log(P(E)), where P(E) is the probability of E with respect to some underlying relevant chance hypothesis. The term K(E) in this equation, in line with the last article, is a slight generalization of Kolmogorov information, in which for an event E, K(E) assigns the length, in bits, of the shortest description that precisely identifies E. Underlying this generalization of Kolmogorov information is a binary, prefix-free, Turing complete language that maps descriptions from the language to the events they identify. 

Not Merely a Kludge

There’s a lot packed into this last paragraph, so explicating it all is not going to be helpful in an article titled “Specified Complexity Made Simple.” For the details, see Chapter 6 of the second edition of The Design Inference. Still, it’s worth highlighting a few key points to show that SC, so defined, makes good sense as a unified information measure and is not merely a kludge of Shannon and Kolmogorov information. 

What brings Shannon and Kolmogorov information together as a coherent whole in this definition of specified complexity is event-description duality. Events (and the objects and structures they produce) occur in the world. Descriptions of events occur in language. Thus, corresponding to an event E are descriptions D that identify E. For instance, the event of getting a royal flush in the suit of hearts corresponds to the description “royal flush in the suit of hearts.” Such descriptions are, of course, never unique. The same event can always be described in multiple ways. Thus, this event could also be described as “a five-card poker hand with an ace of hearts, a king of hearts, a queen of hearts, a jack of hearts, and a ten of hearts.” Yet this description is quite a bit longer than the other. 

Given event-description duality, it follows that: (1) an event E with a probability P(E) has Shannon information I(E), measured in bits; moreover, (2) given a binary language (one expressed in bits — and all languages can be expressed in bits), for any description D that identifies E, the number of bits making up D, which in the last section we defined as |D|, will be no less than the Kolmogorov information of E (which measures in bits the shortest description that identifies E). Thus, because K(E) ≤ |D|, it follows that SC(E) = I(E) – K(E) ≥ I(E) – |D|. 

The most important takeaway here is that specified complexity makes Shannon information and Kolmogorov information commensurable. In particular, specified complexity takes the bits associated with an event’s probability and subtracts from them the bits associated with its minimum description length. Moreover, in estimating K(E), we then use I(E) – |D| to form a lower bound for specified complexity. It follows that specified complexity comes in degrees and can take on negative values. In practice, however, we’ll say an event exhibits specified complexity if its specified complexity is positive and large (what counts as large depends on the relevant probabilistic resources).
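The lower-bound estimate can be made concrete with a small numerical sketch. The true K(E) is not computable, so any actual description length |D| stands in for it; the 30-bit encoding assumed below for “royal flush in the suit of hearts” is purely hypothetical:

```python
import math

def shannon_info(p):
    """I(E) = -log2 P(E), in bits."""
    return -math.log2(p)

def sc_lower_bound(p, desc_len_bits):
    """Since K(E) <= |D| for any description D identifying E,
    SC(E) = I(E) - K(E) >= I(E) - |D|."""
    return shannon_info(p) - desc_len_bits

# A royal flush in hearts is one specific hand out of 2,598,960.
p = 1 / 2_598_960
print(round(shannon_info(p), 2))         # 21.31 bits of Shannon information
# Hypothetical 30-bit prefix-free encoding of "royal flush in hearts":
print(round(sc_lower_bound(p, 30), 2))   # -8.69: the bound can indeed be negative
```

The negative result illustrates the point above: when the available description is long relative to the event’s improbability, the specified complexity estimate drops below zero, and no design inference is warranted.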

The Kraft Inequality

There’s a final fact that makes specified complexity a natural information measure and not just an arbitrary combination of Shannon and Kolmogorov information, and that’s the Kraft inequality. Applying the Kraft inequality to specified complexity depends on the language that maps descriptions to events being prefix-free. Prefix-free languages help to ensure disambiguation, so that no description is the start of another description. This is not an onerous condition; even though it does not hold for natural languages, transforming natural languages into prefix-free languages leads to negligible increases in description length (again, see Chapter 6 of the second edition of The Design Inference).
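The prefix-free condition and the Kraft inequality can both be checked on a toy code. A sketch, using a hypothetical four-word binary language (the codewords here are illustrative, not drawn from any real description language):

```python
# For a prefix-free binary code, the Kraft inequality says the sum of
# 2^(-length) over all codewords is at most 1.
codewords = ["0", "10", "110", "111"]

# Prefix-free: no codeword is the start of another codeword.
prefix_free = all(
    not b.startswith(a)
    for a in codewords for b in codewords if a != b
)
kraft_sum = sum(2 ** -len(c) for c in codewords)
print(prefix_free, kraft_sum)   # True 1.0
```

Because the Kraft sum stays at or below 1, the quantities 2^(-|D|) behave like probabilities over descriptions, which is what licenses the bound stated in the next paragraph.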

What the Kraft inequality does for the specified complexity of an event E is guarantee that all events having the same or greater specified complexity, when considered jointly as one grand union, nonetheless have probability less than or equal to 2 raised to the negative power of the specified complexity. In other words, the probability of the union of all events F with specified complexity no less than that of E (i.e., SC(F) ≥ SC(E)), will have probability less than or equal to 2^(–SC(E)). This result, so stated, may not seem to belong in a series of articles attempting to make specified complexity simple. But it is a big mathematical result, and it connects specified complexity to a probability bound that’s crucial for drawing design inferences. To illustrate how this all works, let’s turn next to an example of cars driving along a road.

There is nothing simple about this beginning? II

 Brian Miller: Rarity and Isolation of Proteins in Sequence Space


Was the universe designed to evolve through natural processes? In a recent book, theologian Rope Kojonen has argued that evolutionary mechanisms work in harmony with intelligent design to produce the diversity of life we see on Earth. But can these fundamentally different processes really work together? On a new episode of ID the Future, host Casey Luskin speaks with physicist Dr. Brian Miller to explore why Kojonen’s theory fails on scientific grounds.

In this episode, Dr. Miller delves into the rarity and isolation of proteins in sequence space. Kojonen takes mainstream evolutionary mechanisms for granted, positing that the laws of nature are specially designed to allow every protein in nature to evolve through standard natural processes. But Miller shows that the limits of protein evolution are very real and very problematic for Kojonen’s model. He explains in detail multiple lines of evidence that show how unlikely it is that protein sequences occur naturally or by chance in sequence space. Miller reports on research showing that the probability of a protein continuing to work after each mutation drops precipitously. He also explains that even the most similar proteins are about 80 percent different from each other. It all adds up to a headache for evolutionary theory, and the headache doesn’t go away when you marry mainstream evolutionary theory with intelligent design.

Download the podcast or listen to it here

We know it when we see it?

 Intuitive Specified Complexity: A User-Friendly Account


Even though this series is titled “Specified Complexity Made Simple,” there’s a limit to how much the concept of specified complexity may be simplified before it can no longer be adequately defined or explained. Accordingly, specified complexity, even when made simple, will still require the introduction of some basic mathematics, such as exponents and logarithms, as well as an informal discussion of information theory, especially Shannon and Kolmogorov information. I’ll get to that in the subsequent posts. 

At this early stage in the discussion, however, it seems wise to lay out specified complexity in a convenient non-technical way. That way, readers lacking mathematical and technical facility will still be able to grasp the gist of specified complexity. Here, I’ll present an intuitively accessible account of specified complexity. Just as all English speakers are familiar with the concept of prose even if they’ve never thought about how it differs from poetry, so too we are all familiar with specified complexity even if we haven’t carefully defined it or provided a precise formal mathematical account of it. 

In this post I’ll present a user-friendly account of specified complexity by means of intuitively compelling examples. Even though non-technical readers may be inclined to skip the rest of this series, I would nonetheless encourage all readers to dip into the subsequent posts, if only to persuade themselves that specified complexity has a sound rigorous basis to back up its underlying intuition. 

To Get the Ball Rolling…

Let’s consider an example by YouTube personality Dave Farina, known popularly as “Professor Dave.” In arguing against the use of small probability arguments to challenge Darwinian evolutionary theory, Farina offers the following example:

Let’s say 10 people are having a get-together, and they are curious as to what everyone’s birthday is. They go down the line. One person says June 13th, another says November 21st, and so forth. Each of them have a 1 in 365 chance of having that particular birthday. So, what is the probability that those 10 people in that room would have those 10 birthdays? Well, it’s 1 in 365 to the 10th power, or 1 in 4.2 times 10 to the 25, which is 42 trillion trillion. The odds are unthinkable, and yet there they are sitting in that room. So how can this be? Well, everyone has to have a birthday.

Farina’s use of the term “unthinkable” brings to mind Vizzini in The Princess Bride. Vizzini keeps uttering the word “inconceivable” in reaction to a man in black (Westley) steadily gaining ground on him and his henchmen. Finally, his fellow henchman Inigo Montoya remarks, “You keep using that word — I do not think it means what you think it means.”

Similarly, in contrast to Farina, an improbability of 1 in 42 trillion trillion is in fact quite thinkable. Right now you can do even better than this level of improbability. Get out a fair coin and toss it 100 times. That’ll take you a few minutes. You’ll witness an event unique in the history of coin tossing and one having a probability of 1 in 10 to the 30, or 1 in a million trillion trillion. 

The reason Farina’s improbability is quite thinkable is that the event to which it is tied is unspecified. As he puts it, “One person says June 13th, another says November 21st, and so forth.” The “and so forth” here is a giveaway that the event is unspecified. 

But now consider a variant of Farina’s example: Imagine that each of his ten people confirmed that his or her birthday was January 1. The probability would in this case again be 1 in 42 trillion trillion. But what’s different now is that the event is specified. How is it specified? It is specified in virtue of having a very short description, namely, “Everyone here was born New Year’s Day.” 
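Either way, specified or not, the raw probability is the same, and it takes one line to reproduce Farina’s number:

```python
# Probability that ten independent people have any one pre-specified set of
# birthdays (ignoring leap years and seasonal effects): (1/365)^10.
p = (1 / 365) ** 10
print(f"{1 / p:.2e}")   # 4.20e+25, Farina's "42 trillion trillion"
```

What changes between Farina’s example and the New Year’s Day variant is not this number but the description length of the pattern the event matches.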

Nothing Surprising Here

The complexity in specified complexity refers to probability: the greater the complexity, the smaller the probability. There is a precise information-theoretic basis for this connection between probability and complexity that we’ll examine in the next post. Accordingly, because the joint probability of any ten birthdays is quite low, their complexity will be quite high. 

For things to get interesting with birthdays, complexity needs to be combined with specification. A specification is a salient pattern that we should not expect a highly complex event to match simply by chance. Clearly, a large group of people that all share the same birthday did not come together by chance. But what exactly is it that makes a pattern salient so that, in the presence of complexity, it becomes an instance of specified complexity and thereby defeats chance? 

That’s the whole point of specified complexity. Sheer complexity, as Farina’s example shows, cannot defeat chance. So too, the absence of complexity cannot defeat chance. For instance, if we learn that a single individual has a birthday on January 1, we wouldn’t regard anything as amiss or afoul. That event is simple, not complex, in the sense of probability. Leaving aside leap years and seasonal effects on birth rates, 1 out of 365 people will on average have a birthday on January 1. With a worldwide population of 8 billion people, many people will have that birthday.

Not by Chance

But a group of exactly 10 people all in the same room all having a birthday of January 1 is a different matter. We would not ascribe such a coincidence to chance. But why? Because the event is not just complex but also specified. And what makes a complex event also specified — or conforming to a specification — is that it has a short description. In fact, we define specifications as patterns with short descriptions.

Such a definition may seem counterintuitive, but it actually makes good sense of how we eliminate chance in practice. The fact is, any event (and by extension any object or structure produced by an event) is describable if we allow ourselves a long enough description. Any event, however improbable, can therefore be described. But most improbable events can’t be described simply. Improbable events with simple descriptions draw our attention and prod us to look for explanations other than chance.

Take Mount Rushmore. It could be described in detail as follows: for each cubic micrometer in a large cube that encloses the entire monument, register whether it contains rock or is empty of rock (treating partially filled cubic micrometers, let us stipulate, as empty). Mount Rushmore can be enclosed in a cube of under 50,000 cubic meters. Moreover, each cubic meter contains a million trillion cubic micrometers. Accordingly, 50 billion trillion filled-or-empty cells could describe Mount Rushmore in detail. Thinking of each filled-or-empty cell as a bit then yields 50 billion trillion bits of information. That’s more information than contained in the entire World Wide Web (there are currently 2 billion websites globally).
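The brute-force figure above is just a unit conversion, which a couple of lines make explicit:

```python
# Brute-force description size of Mount Rushmore: one bit (rock / no rock)
# per cubic micrometer in an enclosing cube of 50,000 cubic meters.
cubic_meters = 50_000
cubic_micrometers_per_m3 = (10 ** 6) ** 3   # 1e18: a million trillion per cubic meter
bits = cubic_meters * cubic_micrometers_per_m3
print(f"{bits:.0e}")                        # 5e+22 bits, i.e. 50 billion trillion
```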

But of course, nobody attempts to describe Mount Rushmore that way. Instead, we describe it succinctly as “a giant rock formation that depicts the U.S. Presidents George Washington, Thomas Jefferson, Abraham Lincoln, and Theodore Roosevelt.” That’s a short description. At the same time, any rock formation the size of Mount Rushmore will be highly improbable or complex. Mount Rushmore is therefore both complex and specified. That’s why, even if we knew nothing about the history of Mount Rushmore’s construction, we would refuse to attribute it to the forces of chance (such as wind and erosion) and instead attribute it to design.

Take the Game of Poker

Consider a few more examples in this vein. There are 2,598,960 distinct possible poker hands, and so the probability of any poker hand is 1/2,598,960. Consider now two short descriptions, namely, “royal flush” and “single pair.” These descriptions have roughly the same description length. Yet there are only 4 ways of getting a royal flush and 1,098,240 ways of getting a single pair. This means the probability of getting a royal flush is 4/2,598,960 = .00000154 but the probability of getting a single pair is 1,098,240/2,598,960 = .423. A royal flush is therefore much more improbable than a single pair.
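These counts and probabilities follow from elementary card-counting combinatorics; a short check (the counting formulas are standard identities, not from the text):

```python
from math import comb

total = comb(52, 5)  # 2,598,960 distinct five-card hands

royal_flush = 4      # one royal flush per suit
# Single pair: a rank for the pair, 2 of its 4 suits, then
# 3 other ranks, each in any of 4 suits
single_pair = comb(13, 1) * comb(4, 2) * comb(12, 3) * 4 ** 3

print(total, single_pair)             # 2598960 1098240
print(royal_flush / total)            # about 1.54e-06
print(round(single_pair / total, 3))  # 0.423
```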

Suppose now that you are playing a game of poker and you come across these two hands, namely, a royal flush and a single pair. Which are you more apt to attribute to chance? Which are you more apt to attribute to cheating, and therefore to design? Clearly, a single pair would, by itself, not cause you to question chance. It is specified in virtue of its short description. But because it is highly probable, and therefore not complex, it would not count as an instance of specified complexity. 

Witnessing a royal flush, however, would elicit suspicion, if not an outright accusation of cheating (and therefore of design). Of course, given the sheer amount of poker played throughout the world, royal flushes will now and then appear by chance. But what raises suspicion that a given instance of a royal flush may not be the result of chance is its short description (a property it shares with “single pair”) combined with its complexity/improbability (a property it does not share with “single pair”). 

Let’s consider one further example, which seems to have become a favorite among readers of the recently released second edition of The Design Inference. In the chapter on specification, my co-author Winston Ewert and I consider a famous scene in the film The Empire Strikes Back, which we then contrast with a similar scene from another film that parodies it. Quoting from the chapter:

Darth Vader tells Luke Skywalker, “No, I am your father,” revealing himself to be Luke’s father. This is a short description of their relationship, and the relationship is surprising, at least in part because the relationship can be so briefly described. In contrast, consider the following line uttered by Dark Helmet to Lone Starr in Spaceballs, the Mel Brooks parody of Star Wars: “I am your father’s brother’s nephew’s cousin’s former room­mate.” The point of the joke is that the relationship is so compli­cated and contrived, and requires such a long description, that it evokes no suspicion and calls for no special explanation. With everybody on the planet connected by no more than “six degrees of separation,” some long description like this is bound to identify anyone.

In a universe of countless people, Darth Vader meeting Luke Skywalker is highly improbable or complex. Moreover, their relation of father to son, by being briefly described, is also specified. Their meeting therefore exhibits specified complexity and cannot be ascribed to chance. Dark Helmet meeting Lone Starr may likewise be highly improbable or complex. But given the convoluted description of their past relationship, their meeting represents an instance of unspecified complexity. If their meeting is due to design, it is for reasons other than their past relationship.

How Short Is Short Enough?

Before we move to a more formal treatment of specified complexity, we would do well to ask how short is short enough for a description to count as a specification. How short must a description be so that, combined with complexity, it produces specified complexity? In the formal treatment of specified complexity, complexity and description length are both converted to bits, and specified complexity is then defined as the difference of bits (the bits denoting complexity minus the bits denoting the specification).

When specified complexity is applied informally, however, we may calculate a probability (or associated complexity) but we usually don’t calculate a description length. Rather, as with the Star Wars/Spaceballs example, we make an intuitive judgment that one description is short and natural, the other long and contrived. Such intuitive judgments have, as we will see, a formal underpinning, but in practice we let ourselves be guided by intuitive specified complexity, treating it as a convincing way to distinguish merely improbable events from those that require further scrutiny.  
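The difference-of-bits definition can be written down in a few lines. A minimal sketch, in which the probability and the 20-bit description length are hypothetical numbers chosen for illustration, not values from the text:

```python
from math import log2

def specified_complexity(p_event, description_bits):
    """Complexity in bits (-log2 of the probability) minus the
    description length in bits, per the difference-of-bits definition."""
    return -log2(p_event) - description_bits

# Hypothetical: an event of probability 1 in 2^100 whose
# specifying pattern has a 20-bit description
print(specified_complexity(2 ** -100, 20))  # 80.0 bits
```

On this definition, high specified complexity requires both a small probability (many bits of complexity) and a short description (few bits of specification).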

The other prisoner of conscience? III

 

James Tour wants to see a manager re:OOL Research

 Apparently there is a problem with his prebiotic soup.

There is information and then there is Information?

 Shannon and Kolmogorov Information


The first edition of my book The Design Inference as well as its sequel, No Free Lunch, set the stage for defining a precise information-theoretic measure of specified complexity — which is the subject of this series. There was, however, still more work to be done to clarify the concept. In both these books, specified complexity was treated as a combination of improbability or complexity on the one hand and specification on the other. 

As presented back then, it was an oil-and-vinegar combination, with complexity and specification treated as two different types of things exhibiting no clear commonality. Neither book therefore formulated specified complexity as a unified information measure. Still, the key ideas for such a measure were in those earlier books. Here, I review those key information-theoretic ideas. In the next section, I’ll join them into a unified whole.

Let’s Start with Complexity

As noted earlier, there’s a deep connection between probability and complexity. This connection is made clear in Shannon’s theory of information. In this theory, probabilities are converted to bits. To see how this works, consider tossing a coin 100 times, which yields an event of probability 1 in 2^100 (the caret symbol here denotes exponentiation). But that number also corresponds to 100 bits of information since it takes 100 bits to characterize any sequence of 100 coin tosses (think of 1 standing for heads and 0 for tails). 

In general, any probability p corresponds to –log(p) bits of information, where the logarithm here and elsewhere in this article is to the base 2 (as needed to convert probabilities to bits). Think of a logarithm as an exponent: it’s the exponent to which you need to raise the base (here always 2) in order to get the number to which the logarithmic function is applied. Thus, for instance, a probability of p = 1/10 corresponds to an information measure of –log(1/10) ≈ 3.322 bits (or equivalently, 2^(–3.322) ≈ 1/10). Such fractional bits allow for a precise correspondence between probability and information measures.
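The conversion from probabilities to bits is a one-liner; here it is applied to the two examples just given:

```python
from math import log2

def info_bits(p):
    """Shannon information of an event of probability p, in bits (-log2 p)."""
    return -log2(p)

print(info_bits(1 / 2 ** 100))      # 100.0 bits for 100 coin tosses
print(round(info_bits(1 / 10), 3))  # 3.322 bits for p = 1/10
```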

The complexity in specified complexity is therefore Shannon information. Claude Shannon (1916–2001) introduced this idea of information in the 1940s to understand signal transmissions (mainly of bits, but also of other character sequences) across communication channels. The longer the sequence of bits transmitted, the greater the information and therefore its complexity.

Because of noise along any communication channel, the greater the complexity of a signal, the greater the chance of its distortion and thus the greater the need for suitable coding and error correction in transmitting the signal. So the complexity of the bit string being transmitted became an important idea within Shannon’s theory. 

Shannon’s information measure is readily extended to any event E with a probability P(E). We then define the Shannon information of E as I(E) = –log(P(E)). Note that the minus sign is there to ensure that as the probability of E goes down, the information associated with E goes up. This is as it should be. Information is invariably associated with the narrowing of possibilities. The more those possibilities are narrowed, the more the probabilities associated with those possibilities decrease, and correspondingly the more the information associated with those narrowed possibilities increases.

For instance, consider a sequence of ten tosses of a fair coin and consider two events, E and F. Let E denote the event where the first five of these ten tosses all land heads but where we don’t know the remaining tosses. Let F denote the event where all ten tosses land heads. Clearly, F narrows down the range of possibilities for these ten tosses more than E does. Because E is only based on the first five tosses, its probability is P(E) = 2^(–5) = 1/(2^5) = 1/32. On the other hand, because F is based on all ten tosses, its probability is P(F) = 2^(–10) = 1/(2^10) = 1/1,024. In this case, the Shannon information associated with E and F is respectively I(E) = 5 bits and I(F) = 10 bits. 
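The two events translate into bits as follows:

```python
from math import log2

p_E = 2 ** -5     # first five of the ten tosses all land heads
p_F = 2 ** -10    # all ten tosses land heads

I_E = -log2(p_E)  # Shannon information of E
I_F = -log2(p_F)  # Shannon information of F
print(I_E, I_F)   # 5.0 10.0
```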

We Also Need Kolmogorov Complexity

Shannon information, however, is not enough to understand specified complexity. For that, we also need Kolmogorov information, or what is also called Kolmogorov complexity. Andrei Kolmogorov (1903–1987) was the greatest probabilist of the 20th century. In the 1960s he tried to make sense of what it means for a sequence of numbers to be random. To keep things simple, and without loss of generality, we’ll focus on sequences of bits (since any numbers or characters can be represented by combinations of bits). Note that we made the same simplifying assumption for Shannon information.

The problem Kolmogorov faced was that any sequence of bits treated as the result of tossing a fair coin was equally probable. For instance, any sequence of 100 coin tosses would have probability 1/(2^100), or 100 bits of Shannon information. And yet there seemed to Kolmogorov a vast difference between the following two sequences of 100 coin tosses (letting 0 denote tails and 1 denote heads):

0000000000000000000000000
0000000000000000000000000
0000000000000000000000000
0000000000000000000000000

and

1001101111101100100010011
0001010001010010101110001
0101100000101011000100110
1100110100011000000110001

The first just repeats the same coin toss 100 times. It appears anything but random. The second, on the other hand, exhibits no salient pattern and so appears random (I got it just now from an online random bit generator). But what do we mean by random here? Is it that the one sequence is the sort we should expect to see from coin tossing but the other isn’t? But in that case, probabilities tell us nothing about how to distinguish the two sequences because they both have the same small probability of occurring. 

Ideas in the Air

Kolmogorov’s brilliant stroke was to understand the randomness of these sequences not probabilistically but computationally. Interestingly, the ideas animating Kolmogorov were in the air in the mid 1960s. Both Ray Solomonoff and Gregory Chaitin (then only a teenager) also came up with the same idea. Perhaps unfairly, Kolmogorov gets the lion’s share of the credit for characterizing randomness computationally. Most information-theory books (see, for instance, Cover and Thomas’s Elements of Information Theory), in discussing this approach to randomness, will therefore focus on Kolmogorov and put it under what is called Algorithmic Information Theory (AIT).

Briefly, Kolmogorov’s approach to randomness is to say that a sequence of bits is random to the degree that it has no short computer program that generates it. Thus, the first sequence above is non-random, since a very short program generates it, such as a program that simply says “repeat ‘0’ 100 times.” On the other hand, there is no short program (so far as we can tell) that generates the second sequence.

It is a combinatorial fact (i.e., a fact about the mathematics of counting or enumerating possibilities) that the vast majority of bit sequences cannot be characterized by any program shorter than the sequence itself. Obviously, any sequence can be characterized by a program that incorporates the entire sequence and simply regurgitates it. But such a program fails to compress the sequence. The non-random sequences, by having programs shorter than the sequences themselves, are thus those that are compressible. The first of the sequences above is compressible. The second, for all we know, isn’t.
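True Kolmogorov complexity is uncomputable, but an off-the-shelf compressor gives a rough, computable feel for the distinction. Here zlib is a stand-in of my choosing, not part of Kolmogorov’s theory; it only upper-bounds how compressible a string is:

```python
import random
import zlib

structured = b"0" * 100  # the "repeat '0' 100 times" sequence

random.seed(0)           # a random-looking 0/1 sequence
random_like = bytes(random.choice(b"01") for _ in range(100))

len_s = len(zlib.compress(structured))
len_r = len(zlib.compress(random_like))

# The patterned sequence compresses to a handful of bytes;
# the random-looking one compresses far less well
print(len_s, len_r)
assert len_s < len_r
```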

Kolmogorov’s information (also known as Kolmogorov complexity) is a computational theory because it focuses on identifying the shortest program that generates a given bit string. Yet there is an irony here: it is rarely possible to say with certainty that a given bit string is truly random in the sense of having no program shorter than itself. From combinatorics, with its mathematical counting principles, we know that the vast majority of bit sequences must be random in Kolmogorov’s sense. That’s because the number of short programs is very limited, and they can generate only a few of the longer sequences. Most longer sequences will require longer programs.

Our Common Experience

But if for an arbitrary bit sequence D we define K(D) as the length of the shortest program that generates D, it turns out that there is no computer program that calculates K(D). Simply put, the function K is non-computable. This fact from theoretical computer science matches up with our common experience that something may seem random for a time, and yet we can never be sure that it is random because we might discover a pattern clearly showing that the thing in fact isn’t random (think of an illusion that looks like a “random” inkblot only to reveal a human face on closer inspection). 

Yet even though K is non-computable, in practice it is a useful measure, especially for understanding non-randomness. Because of its non-computability, K doesn’t help us to identify particular non-compressible sequences, these being the random sequences. Even with K as a well-defined mathematical function, we can’t in most cases determine precise values for it. Nevertheless, K does help us with the compressible sequences, in which case we may be able to estimate it even if we can’t exactly calculate it. 

What typically happens in such cases is that we find a salient pattern in a sequence, which then enables us to show that it is compressible. To that end, we need a measure of the length of bit sequences as such. Thus, for any bit sequence D, we define |D| as its length (total number of bits). Because any sequence can be defined in terms of itself, |D| forms an upper bound on Kolmogorov complexity. Suppose now that through insight or ingenuity, we find a program that substantially compresses D. The length of that program, call it n, will then be considerably less than |D| — in other words, n < |D|. 

Although this program length n will be much shorter than |D|, it’s typically not possible to show that this program of length n is the very shortest program that generates D. But that’s okay. Given such a program of length n, we know that K(D) cannot be greater than n because K(D) measures the very shortest such program. Thus, by finding some short program of length n, we’ll know that K(D) ≤ n < |D|. In practice, it’s enough to come up with a short program whose length n is substantially less than |D|. The number n then forms an upper bound for K(D), and we use it as an estimate for K(D). Such an estimate, as we’ll see, ends up in applications being a conservative estimate of Kolmogorov complexity.
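The estimation strategy just described can be mimicked computationally, again using a general-purpose compressor as the short “program” we happened to find (my illustration, not the text’s):

```python
import zlib

D = b"0110" * 250       # 1,000 bytes with an obvious repeating pattern
trivial_bound = len(D)  # |D|: a program can always just quote D itself
found_bound = len(zlib.compress(D))  # length of one short description we found

# found_bound upper-bounds (a proxy for) K(D); since it is far
# below |D|, we conservatively conclude that D is compressible
print(found_bound, trivial_bound)
assert found_bound < trivial_bound
```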

1 Corinthians 4:7: The Watchtower Society's condensed commentary.

 



Wol.JW.org


Friday, March 1

Why do you boast?​—1 Cor. 4:7.


The apostle Peter urged his brothers to use whatever gifts and talents they had to build up their fellow believers. Peter wrote: “To the extent that each one has received a gift, use it in ministering to one another as fine stewards of God’s undeserved kindness.” (1 Pet. 4:10) We should not hold back from using our gifts to the fullest for fear that others may become jealous or get discouraged. But we must be careful that we do not boast about them. (1 Cor. 4:6) Let us remember that any natural abilities we may have are gifts from God. We should use those gifts to build up the congregation, not to promote ourselves. (Phil. 2:3) When we use our energy and abilities to do God’s will, we will have cause for rejoicing​—not because we are outdoing others or proving ourselves superior to them, but because we are using our gifts to bring praise to JEHOVAH. 

The Origin of Life remains darwinism's achilles heel?

 On the Origin of Life, a Measure of Intelligent Design’s Impact on Mainstream Science


Don’t let anyone tell you that intelligent design isn’t having an impact on the way mainstream scientists are thinking about problems like the origin of life (OOL). David Coppedge points out the “devastating assessment” of OOL that was just published in Nature, the world’s most prestigious science journal. The authors are Nick Lane and Joana Xavier. The latter is a chemist at Imperial College London. As Coppedge notes, she’s been frank in comments about intelligent design and specifically Stephen Meyer’s Signature in the Cell.

“One of the Best Books I’ve Read”

From a 2022 conversation with Perry Marshall:

But about intelligent design, let me tell you, Perry, I read Signature in the Cell by Stephen Meyer…And I must tell you, I found it one of the best books I’ve read, in terms of really putting the finger on the questions. What I didn’t like was the final answer, of course. But I actually tell everyone I can, “Listen, read that book. Let’s not put intelligent design on a spike and burn it. Let’s understand what they’re saying and engage.” It’s a really good book that really exposes a lot of the questions that people try to sweep under the carpet….I think we must have a more naturalistic answer to these processes. There must be. Otherwise, I’ll be out of a job.

That is a remarkable statement. Paul Nelson first noted it at Evolution News.

Under the Carpet

Dr. Xavier rejects ID, which is fair enough, but recommends an ID book by Dr. Meyer to “everyone I can” because “it really exposes a lot of the questions that people try to sweep under the carpet.” In the book, Meyer finds that, in addressing the origin-of-life puzzle, all current materialist solutions fail. He has a politer way of saying what chemist James Tour does on the same subject.

So that’s September 2022. Now a year and a half later, Xavier is back in the pages of Nature exposing weaknesses in the OOL field as currently constituted. She still holds out for a “more naturalistic answer.” But do you think, in writing about those “questions that people try to sweep under the carpet,” she didn’t have Meyer’s book in the back of her mind? I’m no mind reader, but to me, the question seems self-answering.


Getting fraud down to a science?IV

 

Tuesday, 27 February 2024

The odd couple? II

 Can Evolution and Intelligent Design Work Together in Harmony?


Or is that wishful thinking? On a new episode of ID the Future, host Casey Luskin concludes his conversation with philosopher Stephen Dilley about a recent proposal by the theologian Rope Kojonen to marry mainstream evolutionary theory with a case for intelligent design. Dr. Dilley is lead author of a comprehensive critique of Kojonen’s model, co-authored with Luskin, Brian Miller, and Emily Reeves and published in the journal Religions.

In the second half of their discussion, Luskin and Dilley explain key scientific problems with Kojonen’s theistic evolutionary model. First up is Kojonen’s acceptance of both convergent evolution and common ancestry, two explanations evolutionary biologists use to account for similar design-like features among different organisms. But if the design can be explained through natural processes, there is little need to invoke intelligent design. After all, the whole point of mainstream evolutionary theory is to render any need for design superfluous.

Dr. Dilley also explains why Kojonen’s model contradicts our natural intuition to detect design. If we look at a hummingbird under Kojonen’s proposal, we are still required to see unguided natural processes at work, the appearance of design without actual intelligent design. Yet we are also supposed to acknowledge that an intelligent designer front-loaded the evolutionary process with the creative power it needs to produce the hummingbird. So is it intelligently designed or isn’t it? The theist on the street is left scratching his or her head.

Download the podcast or listen to it here

Monday, 26 February 2024

On the Syriac Peshitta.

 The Syriac Peshitta—A Window on the World of Early Bible Translations


For nine days in 1892, the twin sisters Agnes Smith Lewis and Margaret Dunlop Gibson journeyed by camel through the desert to St. Catherine’s Monastery at the foot of Mount Sinai. Why would these two women in their late 40s undertake such a journey at a time when travel in what was then called the Orient was so dangerous? The answer may help strengthen your belief in the accuracy of the Bible.

JUST before returning to heaven, Jesus commissioned his disciples to bear witness about him “in Jerusalem, in all Judea and Samaria, and to the most distant part of the earth.” (Acts 1:8) This the disciples did with zeal and courage. Their ministry in Jerusalem, however, soon stirred up strong opposition, resulting in the martyrdom of Stephen. Many of Jesus’ disciples found refuge in Antioch, Syria, one of the largest cities in the Roman Empire, some 350 miles (550 km) north of Jerusalem.—Acts 11:19.

In Antioch, the disciples continued to preach “the good news” about Jesus, and many non-Jews became believers. (Acts 11:20, 21) Though Greek was the common language within the walls of Antioch, outside its gates and in the province, the language of the people was Syriac.

THE GOOD NEWS TRANSLATED INTO SYRIAC

As the number of Syriac-speaking Christians increased in the second century, there arose a need for the good news to be translated into their tongue. Thus, it appears that Syriac, not Latin, was the first vernacular into which parts of the Christian Greek Scriptures were translated.

 By about 170 C.E., the Syrian writer Tatian (c. 120-173 C.E.) combined the four canonical Gospels and produced, in Greek or Syriac, the work commonly called the Diatessaron, a Greek word meaning “through [the] four [Gospels].” Later, Ephraem the Syrian (c. 310-373 C.E.) produced a commentary on the Diatessaron, thus confirming that it was in general use among Syrian Christians.

The Diatessaron is of great interest to us today. Why? In the 19th century, some scholars argued that the Gospels were written as late as the second century, between 130 C.E. and 170 C.E., and thus could not be authentic accounts of Jesus’ life. However, ancient manuscripts of the Diatessaron that have come to light since then have proved that the Gospels of Matthew, Mark, Luke, and John were already in wide circulation by the middle of the second century. They must therefore have been written earlier. In addition, since Tatian, when compiling the Diatessaron, did not make use of any of the so-called apocryphal gospels in the way he did the four accepted Gospels, it is evident that the apocryphal gospels were not viewed as reliable or canonical.

By the start of the fifth century, a translation of the Bible into Syriac came into general use in northern Mesopotamia. Likely made during the second or third century C.E., this translation included all the books of the Bible except 2 Peter, 2 and 3 John, Jude, and Revelation. It is known as the Peshitta, meaning “Simple” or “Clear.” The Peshitta is one of the oldest and most important witnesses to the early transmission of the Bible text.

Interestingly, one manuscript of the Peshitta has a written date corresponding to 459/460 C.E., making it the oldest Bible manuscript with a definite date. In about 508 C.E., a revision of the Peshitta was made that included the five missing books. It came to be known as the Philoxenian Version.


Syriac Peshitta of the Pentateuch, 464 C.E., the second-oldest dated manuscript of Bible text

Until the 19th century, almost all the known Greek copies of the Christian Greek Scriptures were from the fifth century or much later. For this reason, Bible scholars were especially interested in such early versions as the Latin Vulgate and the Syriac Peshitta. At the time, some believed that the Peshitta was the result of a revision of an older Syriac version. But no such text was known. Since the roots of the Syriac Bible go back to the second century, such a version would provide a window on the Bible text at an early stage, and it would surely be invaluable to Bible scholars! Was there really an old Syriac version? Would it be found?


The palimpsest called the Sinaitic Syriac. Visible in the margin is the underwriting of the Gospels

Yes, indeed! In fact, two such precious Syriac manuscripts were found. The first is a manuscript dating from the fifth century. It was among a large number of Syriac manuscripts acquired by the British Museum in 1842 from a monastery in the Nitrian Desert in Egypt. It was called the Curetonian Syriac because it was discovered and published by William Cureton, the museum’s assistant keeper of manuscripts. This precious document contains the four Gospels in the order of Matthew, Mark, John, and Luke.

The second manuscript that has survived to our day is the Sinaitic Syriac. Its discovery is linked with the adventurous twin sisters mentioned at the start of this article. Although Agnes did not have a university degree, she learned eight foreign languages, one of them Syriac. In 1892, Agnes made a remarkable discovery in the monastery of St. Catherine in Egypt.

There, in a dark closet, she found a Syriac manuscript. According to her own account, “it had a forbidding look, for it was very dirty, and its leaves were nearly all stuck together through their having remained unturned” for centuries. It was a palimpsest manuscript: the original text had been erased and the pages rewritten with a Syriac text about female saints. However, Agnes spotted some of the writing underneath and the words “of Matthew,” “of Mark,” or “of Luke” at the top. What she had in her hands was an almost complete Syriac codex of the four Gospels! Scholars now believe that this codex was written in the late fourth century.

The Sinaitic Syriac is considered one of the most important Biblical manuscripts discovered, right along with such Greek manuscripts as the Codex Sinaiticus and the Codex Vaticanus. It is now generally believed that both the Curetonian and Sinaitic manuscripts are extant copies of the old Syriac Gospels dating from the late second or early third century.

“THE WORD OF OUR GOD ENDURES FOREVER”

Can these manuscripts be useful to Bible students today? Undoubtedly! Take as an example the so-called long conclusion of the Gospel of Mark, which in some Bibles follows Mark 16:8. It appears in the Greek Codex Alexandrinus of the fifth century, the Latin Vulgate, and elsewhere. However, the two authoritative fourth-century Greek manuscripts—Codex Sinaiticus and Codex Vaticanus—both end with Mark 16:8. The Sinaitic Syriac does not have this long conclusion either, adding further evidence that the long conclusion is a later addition and was not originally part of Mark’s Gospel.

Consider another example. In the 19th century, almost all Bible translations had a spurious Trinitarian addition at 1 John 5:7. However, this addition does not appear in the oldest Greek manuscripts. Neither does it appear in the Peshitta, thus proving that the addition at 1 John 5:7 is indeed a corruption of the Bible text.

Clearly, as promised, Jehovah God has preserved his Holy Word. In it we are given this assurance: “The green grass dries up, the blossom withers, but the word of our God endures forever.” (Isaiah 40:8; 1 Peter 1:25) The version known as the Peshitta plays a humble but important role in the accurate transmission of the Bible’s message to all of humanity.

The big questions remain as big as ever?

 A New Look at Three Deep Questions


Ron Coody’s new book, Almost? Persuaded! Why Three Great Questions Resist Certainty, delivers a wide-ranging discussion and analysis of questions, answers, and arguments keenly relevant to the intelligent design community. His background is far from one-dimensional and he has long been engaging people over issues of worldview, evidence, and belief.

With a bachelor’s degree in microbiology and a Master of Divinity followed by a PhD in missiology, Coody is well qualified to address the cutting edges of science, philosophy, and theology. Enhancing his perception of diverse ways of thinking about these questions is his decades-long experience of living and working cross-culturally.

Questions of Consequence

The primary questions addressed here are obviously of deep consequence: Does God exist? Where did life come from? and Is free will real? A refreshing aspect of Almost? Persuaded! is its objective coverage of the broad range of arguments surrounding these questions. 

As I read Almost? Persuaded!, although I have been studying these questions for many years, I found that Coody’s presentation easily held my attention. Moreover, the breadth of his analysis provided new insights and expanded my understanding of developments in history and philosophy.

A Helpful Compendium

On the first question, “Does God Exist?”, Coody’s analytical summary of key philosophers and intellectuals, from Plato to Aquinas to Dawkins, caught my attention. His highlighting of key ideas from over twenty influential thinkers makes for a helpful compendium.

A familiar-sounding argument for design is Coody’s summary of the fifth of Thomas Aquinas’s Five Ways, from the 13th century:

Working backwards from human experience of designing and building, Aquinas reasoned that the ordered universe and the creatures inhabiting it exhibit properties of design. Design requires a designer….Aquinas thought that the universe needed an intelligent mind to bring it into order. He believed that physical laws lacked the power to organize complex, functioning systems. 

P. 34

Another unique and somewhat amusing contribution is the author’s contrasting of Richard Dawkins with the Apostle Paul on the evidential weight of nature.

As Coody reviews the standard evidence for the fine-tuning of the physical parameters of the universe to allow life to exist, his presentation is accurate and compelling. The Big Bang, Lawrence Krauss’s attempts to redefine the “nothingness” out of which the universe arose, Stephen Hawking’s blithe dismissal of the significance of the beginning with an invocation of gravity, and the counterpoint from Borde, Guth, and Vilenkin’s singularity theorem are knit together in readable prose.

Encouragement for Curiosity

When it comes to the possibility of life forming itself naturally, again Coody gives an informative and insightful overview. Although, like the rest of us, he has his own convictions, he is willing to acknowledge the tension surrounding differing conclusions among those seeking to evaluate the evidence. He encourages the reader to persist in seeking answers: “Honest people of any faith or no faith should be interested in the truth.” (p. 164)

The final section provides an enlightening discussion of free will. Coody captures the major issues: “Is free will an illusion created by the brain? In reality do we have any more free will than our computer?….Is the mind the same as the brain or is the mind something spiritual?” (p. 180)

Delving into the implications of materialistic determinism, and even quantum uncertainty, Coody provides a fresh look at the subject. In an illustration that is beguilingly simple, he borrows from the classic fairy tale of Pinocchio. His summary cuts deeply into one of the major shortcomings of materialist thinking: “On their view of the world, there was never any difference between the wooden Pinocchio and the human Pinocchio. Both were simply animated, soulless, material objects.” (p. 191)

Readers of almost any background will find much here that informs, provokes deeper reflection, and provides refreshing and novel illustrations relevant to the discussion of some of life's most enduring questions.

There is nothing simple about this beginning?

 Getting It Together: Tethers, Handshakes, and Multitaskers in the Cell


Running a cell requires coordination. How do molecules moving in the dark interior of a cell know how and when to connect? Protein tethers offer new clues, according to research at Philipps University in Marburg, Germany.

The ways that organelles and proteins connect at the right place and time are coming to light. One method is to encapsulate interacting molecules within compartments called condensates, droplets, and speckles. Like offices or cubicles where employees can talk without excess noise, these temporary spaces allow molecules to interact in peace (see “Caltech Finds Amazing Role for Noncoding DNA”). 

Another method for coordination of moving parts involves tethers. Certain molecular machines use “two hands” to bring other molecules or organelles together. Visualize a person taking a stranger’s hand and using her other hand to grasp a doorknob, leading the stranger to the place he needs to be. Many protein machines have a critical binding site for their targets, but these “dual affinity” tethering machines contain two different recognition sites on different domains that recognize separate targets needing to come together. Such multitasking machines are marvelously designed to promote fellowship for effective interactions in the cellular city.

A similar phenomenon has long been known in protein translation. A set of molecules called aminoacyl-tRNA synthetases brings dissimilar molecules together. One synthetase feels the anticodon on its matching transfer RNA (tRNA) and then puts the corresponding amino acid on the opposite end. Like a language translator, each synthetase needs to know two languages — the nucleic acid code and the protein code — to equip the tRNA with the correct amino acid. As the activated tRNA enters the ribosome, its anticodon base pairs with the complementary codon on the messenger RNA at one end, and its amino acid fits onto the growing polypeptide chain on the other end. This is a spectacular example of double duty, multitasking know-how. But is it the only one?
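The two-sided matching described above can be illustrated with a toy sketch. This is not how a synthetase actually works — real enzymes use structural recognition, not lookup tables — but it captures the "two languages" idea: read an anticodon on one end, deliver the corresponding amino acid on the other. All names and the small table fragment here are illustrative.

```python
# Toy model of a synthetase's double duty: anticodon in, amino acid out.
# A fragment of the standard genetic code: mRNA codon -> amino acid.
CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "GAA": "Glu"}

# Watson-Crick base pairing for RNA.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def codon_for(anticodon: str) -> str:
    """The mRNA codon this anticodon pairs with (strands are antiparallel,
    so we reverse before complementing)."""
    return "".join(COMPLEMENT[base] for base in reversed(anticodon))

def charge_tRNA(anticodon: str) -> tuple[str, str]:
    """Return the 'charged tRNA': the anticodon it carries on one end and
    the amino acid attached to the other — the two languages joined."""
    return anticodon, CODON_TABLE[codon_for(anticodon)]

print(charge_tRNA("CAU"))  # anticodon CAU pairs with codon AUG -> ('CAU', 'Met')
```

The point of the sketch is that neither half suffices alone: the recognition site for the anticodon and the attachment site for the amino acid must both be present in one machine for the translation step to work.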

Another Example of Double Duty

A team of 15 researchers, led by Elena Bittner, also from Philipps University, with colleagues at Berkeley and the Howard Hughes Medical Institute, has just reported in PLOS Biology a case of a multitasking machine that bridges dissimilar targets — in this case, peroxisomes with mitochondria or the endoplasmic reticulum (ER). It may not be the only case of "Proteins that carry dual targeting signals [that] can act as tethers between" organelles, they say:

Peroxisomes are organelles with crucial functions in oxidative metabolism. To correctly target to peroxisomes, proteins require specialized targeting signals. A mystery in the field is the sorting of proteins that carry a targeting signal for peroxisomes as well as for other organelles, such as mitochondria or the endoplasmic reticulum (ER). Exploring several of these proteins in fungal model systems, we observed that they can act as tethers bridging organelles together to create contact sites. 

Take note that they found this in yeast, the simplest of eukaryotes.

We show that in Saccharomyces cerevisiae this mode of tethering involves the peroxisome import machinery, the ER–mitochondria encounter structure (ERMES) at mitochondria and the guided entry of tail-anchored proteins (GET) pathway at the ER. 

Why is this significant? 

Our findings introduce a previously unexplored concept of how dual affinity proteins can regulate organelle attachment and communication.

Previously unexplored: this sounds like a game changer. How does this "tethering" system work? After presenting the biochemical work demonstrating the dual-targeting capability, the team illustrates the system with a simplified diagram in Figure 10 of their open-access paper. As usual, even in simplified form, the system involves numerous other factors. The upshot is described as follows:

We have found that distinct proteins with targeting signals for 2 organelles can affect proximity of these organelles. This conclusion is supported by the notion that different types of dual affinity proteins can act as contact-inducing proteins (Fig 10) … Although dual affinity proteins are a challenge for maintaining organelle identity, they are ideally suited to support organelle interactions by binding to targeting factors and membrane-bound translocation machinery of different organelles. Dually targeted proteins appear to concentrate in regions of organelle contact, which may coincide with regions of reduced identity.

Within the mitochondria, we already met TIM and TOM, the channel guards who check the credentials of proteins entering the organelle’s outer and inner membranes. (The authors note that these translocase proteins are “evolutionarily conserved.”) But outside the mitochondrion, proteins needing to enter or exit have to find their way to the guards. That’s where the “dual affinity proteins” operate. 

What Do the Tethers Look Like?

Ptc5 is one of these tethering proteins, one of many that "contain targeting signals for mitochondria and peroxisomes at opposite termini." Its Peroxisome Targeting Signal (PTS) recognizes the peroxisome at one end, and its Mitochondrial Targeting Signal (MTS) recognizes TOM at the mitochondrial channel. Experimenting with mutant strains of this and associated proteins and chaperones, the researchers confirmed that Ptc5 does tether peroxisomes to mitochondria. Moreover, its activity is dependent on need. "In aggregate," they write, "these data show that tethering via dual affinity proteins is a regulated process and depends on the metabolic state of the cell." This implies the additional capability of sensing the fluctuating metabolic need.
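The logic of a dual-affinity tether like Ptc5 can be sketched as a minimal model. This is a conceptual illustration only — the class and field names are hypothetical, and the real regulation involves chaperones and import machinery far beyond a boolean flag — but it shows the two conditions the paper describes: both termini must find their receptors, and tethering is switched by the cell's metabolic state.

```python
# Minimal sketch (hypothetical names) of a dual-affinity tether:
# contact forms only if one signal matches each organelle's import
# machinery AND the metabolic state calls for it.
from dataclasses import dataclass

@dataclass
class Organelle:
    name: str
    receptors: set[str]  # targeting signals this organelle's machinery accepts

@dataclass
class DualAffinityTether:
    name: str
    signal_a: str  # e.g. a PTS at one terminus
    signal_b: str  # e.g. an MTS at the opposite terminus

    def tethers(self, org1: Organelle, org2: Organelle,
                metabolically_active: bool) -> bool:
        if not metabolically_active:  # tethering is regulated, not constitutive
            return False
        # Either orientation works: each terminus must find its own receptor.
        return ((self.signal_a in org1.receptors and self.signal_b in org2.receptors)
                or (self.signal_b in org1.receptors and self.signal_a in org2.receptors))

peroxisome = Organelle("peroxisome", {"PTS"})
mitochondrion = Organelle("mitochondrion", {"MTS"})
ptc5 = DualAffinityTether("Ptc5", signal_a="PTS", signal_b="MTS")
print(ptc5.tethers(peroxisome, mitochondrion, metabolically_active=True))  # True
```

Note that a single-affinity protein in this model can never form a bridge: with only one recognizable signal, one of the two organelles is always left unbound, which is the distinction the authors draw between ordinary targeted cargo and these tethers.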

The authors didn’t have much to say about evolution. As usual, it involved copious amounts of speculation.

While many peroxisomal membrane proteins can target peroxisomes without transitioning through the ER, several peroxisomal membrane proteins have evolved to be synthesized in vicinity to the ER and may translocate from it.

Other than TOM and TIM being “evolutionarily conserved,” that was all they had to offer Darwin.

A New Class of Activity Coordinators

What Bittner et al. have identified is probably the trigger for a paradigm shift concerning methods that cells use to get components together.

We conclude that dually targeted cargo includes a diverse and unexpected group of tethers, which are likely to maintain contact as long as they remain accessible for targeting factors at partner organelles. Coupling of protein and membrane trafficking is a common principle in the secretory pathway and it might also occur for peroxisomes at different contact sites.

And so, what lies ahead? Design proponents in biochemistry and molecular biology, play tetherball! Here is a potentially fruitful area for new discoveries.

How dually targeted proteins and their rerouting affect the flux of molecules other than proteins, e.g., membrane lipids, remains a topic for future research.