
Friday, 1 March 2024

A theory of everything re:design detection?

 Specified Complexity as a Unified Information Measure


With the publication of the first edition of my book The Design Inference and its sequel No Free Lunch, elucidating the connection between design inferences and information theory became increasingly urgent. That there was a connection was clear. The first edition of The Design Inference sketched, in the epilogue, how the relation between specifications and small probability (complex) events mirrored the transmission of messages along a communication channel from sender to receiver. Moreover, in No Free Lunch, both Shannon and Kolmogorov information were explicitly cited in connection with specified complexity — which is the subject of this series.

But even though specified complexity as characterized back then employed informational ideas, it did not constitute a clearly defined information measure. Specified complexity seemed like a kludge of ideas from logic, statistics, and information. Jay Richards, guest-editing a special issue of Philosophia Christi, asked me to clarify the connection between specified complexity and information theory. In response, I wrote an article titled “Specification: The Pattern That Signifies Intelligence,” which appeared in that journal in 2005.

A Single Measure

In that article, I defined specified complexity as a single measure that combined under one roof all the key elements of the design inference, notably, small probability, specification, probabilistic resources, and universal probability bounds. Essentially, in the measure I articulated there, I attempted to encapsulate the entire design-inferential methodology within a single mathematical expression.

In retrospect, all the key pieces for what is now the fully developed informational account of specified complexity were there in that article. But my treatment of specified complexity there left substantial room for improvement. I used a counting measure to enumerate all the descriptions of a given length or shorter. I then placed this measure under a negative logarithm. This gave the equivalent of Kolmogorov information, suitably generalized to minimal description length. But because my approach was so focused on encapsulating the design-inferential methodology, the roles of Shannon and Kolmogorov information in my definition of specified complexity were muddied.

My 2005 specified complexity paper fell stillborn from the press, and justly so given its lack of clarity. Eight years later, Winston Ewert, working with Robert Marks and me at the Evolutionary Informatics Lab, independently formulated specified complexity as a unified measure. It was essentially the same measure as in my 2005 article, but Ewert clearly articulated the place of both Shannon and Kolmogorov information in the definition of specified complexity. Ewert, along with Marks and me as co-authors, published this work under the title “Algorithmic Specified Complexity,” and then published subsequent applications of this work (see the Evolutionary Informatics Lab publications page). 

Following Ewert’s lead, specified complexity, as an information measure, became the difference between Shannon information and Kolmogorov information. In symbols, the specified complexity SC for an event E was thus defined as SC(E) = I(E) – K(E). The term I(E) in this equation is just, as we saw in my last article, Shannon information, namely, I(E) = –log2(P(E)), the logarithm taken to base 2 so that information is measured in bits, where P(E) is the probability of E with respect to some underlying relevant chance hypothesis. The term K(E) in this equation, in line with the last article, is a slight generalization of Kolmogorov information: for an event E, K(E) is the length, in bits, of the shortest description that precisely identifies E. Underlying this generalization of Kolmogorov information is a binary, prefix-free, Turing-complete language that maps descriptions from the language to the events they identify.
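As a quick arithmetic illustration (my own sketch, not anything from the book or the papers cited above), here is the definition in a few lines of Python. Kolmogorov-style description length is not something a short script can compute, so the value standing in for K(E) must be supplied by hand, and the function names are mine.

```python
import math

def shannon_info(p: float) -> float:
    """Shannon information of an event with probability p, in bits: I(E) = -log2 P(E)."""
    return -math.log2(p)

def specified_complexity(p: float, k_bits: float) -> float:
    """SC(E) = I(E) - K(E), where k_bits stands in for the (generalized)
    Kolmogorov information of E: the length, in bits, of the shortest
    description that identifies E."""
    return shannon_info(p) - k_bits

# Toy usage: a 1-in-a-million event whose shortest known description is 30 bits,
# versus the same event described in only 10 bits.
print(specified_complexity(1e-6, 30))   # about -10.1 bits
print(specified_complexity(1e-6, 10))   # about  +9.9 bits
```

The same improbability yields a negative or a positive value depending solely on how tersely the event can be described, which is exactly the commensurability discussed below.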

Not Merely a Kludge

There’s a lot packed into this last paragraph, so explicating it all is not going to be helpful in an article titled “Specified Complexity Made Simple.” For the details, see Chapter 6 of the second edition of The Design Inference. Still, it’s worth highlighting a few key points to show that SC, so defined, makes good sense as a unified information measure and is not merely a kludge of Shannon and Kolmogorov information. 

What brings Shannon and Kolmogorov information together as a coherent whole in this definition of specified complexity is event-description duality. Events (and the objects and structures they produce) occur in the world. Descriptions of events occur in language. Thus, corresponding to an event E are descriptions D that identify E. For instance, the event of getting a royal flush in the suit of hearts corresponds to the description “royal flush in the suit of hearts.” Such descriptions are, of course, never unique. The same event can always be described in multiple ways. Thus, this event could also be described as “a five-card poker hand with an ace of hearts, a king of hearts, a queen of hearts, a jack of hearts, and a ten of hearts.” Yet this description is quite a bit longer than the other. 

Given event-description duality, it follows that: (1) an event E with probability P(E) has Shannon information I(E), measured in bits; and (2) given a binary language (one expressed in bits, as every language can be), for any description D that identifies E, the number of bits making up D, which in the last section we defined as |D|, will be no less than the Kolmogorov information of E (which is the length, in bits, of the shortest description that identifies E). Thus, because K(E) ≤ |D|, it follows that SC(E) = I(E) – K(E) ≥ I(E) – |D|.

The most important takeaway here is that specified complexity makes Shannon information and Kolmogorov information commensurable. In particular, specified complexity takes the bits associated with an event’s probability and subtracts from them the bits associated with the event’s minimum description length. Moreover, because any description D that identifies E gives an upper bound on K(E), we can use I(E) – |D| as a lower bound on specified complexity. It follows that specified complexity comes in degrees and can take on negative values. In practice, however, we’ll say an event exhibits specified complexity if SC(E) is positive and large (with what it means to be large depending on the relevant probabilistic resources).
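To put rough numbers on this, here is a back-of-the-envelope version of the royal-flush example (my own illustration, not a calculation from the book). Purely for the sake of the sketch I assume a crude 5-bits-per-character encoding of the English description; the book’s carefully constructed prefix-free language would assign different lengths, but the logic of the lower bound I(E) – |D| is the same.

```python
import math

# Probability of being dealt one specific five-card hand (the royal flush in hearts)
# from a well-shuffled 52-card deck.
p_royal_flush_hearts = 1 / math.comb(52, 5)        # 1 / 2,598,960

shannon_bits = -math.log2(p_royal_flush_hearts)    # I(E), about 21.3 bits

# A description that identifies the event. ASSUMPTION: a crude 5-bits-per-character
# encoding; the book's prefix-free language would assign a different length.
description = "royal flush in the suit of hearts"
description_bits = 5 * len(description)            # |D| = 165 bits

lower_bound_on_sc = shannon_bits - description_bits    # I(E) - |D| <= SC(E)
print(round(shannon_bits, 1), description_bits, round(lower_bound_on_sc, 1))
# 21.3  165  -143.7
# The bound is deeply negative, so this description gives no evidence of specified
# complexity; and since K(E) > 0, SC(E) can in any case never exceed I(E) ~ 21.3 bits.
```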

The Kraft Inequality

There’s a final fact that makes specified complexity a natural information measure and not just an arbitrary combination of Shannon and Kolmogorov information, and that’s the Kraft inequality. Applying the Kraft inequality to specified complexity depends on the language that maps descriptions to events being prefix-free. Prefix-free languages help to ensure disambiguation, so that one description is never the start of another. This is not an onerous condition: even though it does not hold for natural languages, transforming natural languages into prefix-free languages leads to negligible increases in description length (again, see Chapter 6 of the second edition of The Design Inference).

What the Kraft inequality does for the specified complexity of an event E is guarantee that all events having the same or greater specified complexity, when considered jointly as one grand union, nonetheless have probability less than or equal to 2 raised to the negative power of the specified complexity. In other words, the union of all events F with specified complexity no less than that of E (i.e., SC(F) ≥ SC(E)) has probability less than or equal to 2^(–SC(E)). This result, so stated, may not seem to belong in a series of articles attempting to make specified complexity simple. But it is a big mathematical result, and it connects specified complexity to a probability bound that’s crucial for drawing design inferences. To illustrate how this all works, let’s turn next to an example of cars driving along a road.
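Before turning to that example, here is a small numeric sanity check of the bound (again my own toy, not the book’s treatment): a four-description prefix-free “language” with made-up event probabilities, and a loop that confirms the tail probability never exceeds 2^(–SC(E)).

```python
import math

# A toy prefix-free code: no codeword is a prefix of another. Each codeword serves
# as the shortest description of one event; the probabilities (under the relevant
# chance hypothesis) are invented purely for illustration.
events = {
    "A": {"codeword": "0",   "p": 0.01},
    "B": {"codeword": "10",  "p": 0.001},
    "C": {"codeword": "110", "p": 0.2},
    "D": {"codeword": "111", "p": 0.05},
}

# Kraft inequality for the code itself: the sum of 2^(-|D|) over codewords is <= 1.
assert sum(2 ** -len(e["codeword"]) for e in events.values()) <= 1 + 1e-12

def sc(e):
    """SC(F) = I(F) - K(F), with K(F) taken to be the codeword length."""
    return -math.log2(e["p"]) - len(e["codeword"])

# For each event E, the total probability of all events F with SC(F) >= SC(E)
# (an upper bound on the probability of their union) stays below 2^(-SC(E)).
for name, e in events.items():
    tail = sum(f["p"] for f in events.values() if sc(f) >= sc(e))
    bound = 2 ** -sc(e)
    print(f"{name}: SC = {sc(e):6.2f}, tail probability = {tail:.4f} <= {bound:.4f}")
    assert tail <= bound + 1e-12
```

As long as the code stays prefix-free, the assertions hold no matter how the probabilities are chosen, which is the content of the Kraft-inequality bound stated above.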
