
Sunday 3 February 2019

Toward a testable design filter? II

Unifying Specified Complexity: Rediscovering Ancient Technology
Evolution News @DiscoveryCSC

Editor’s note: We have been reviewing and explaining a new article in the journal BIO-Complexity, “A Unified Model of Complex Specified Information,” by George D. Montañez. For earlier posts, see the previous entries in this series.
Specified complexity, the property of being both unlikely and functionally specified, was introduced into the origins debate two decades ago by William Dembski in his book, The Design Inference. In it, he developed a theory of design detection based on observing objects that were both unlikely and matched an independently given pattern, called a specification. Dembski continued to refine his vision of specified complexity, introducing variations of his model in subsequent publications (Dembski 2001, 2002, 2005). Dembski’s independent work in specified complexity culminated with a semiotic specified complexity model (Dembski 2005), where functional specificity was measured by how succinctly a symbol-using agent could describe an object in the context of the linguistic patterns available to the agent. Objects that were complex yet could be simply described resulted in high specified complexity values.

Although Dembski’s work on specified complexity became the most widely known, bioinformatics specialist Aleksandar Milosavljević appears to have developed the first fully mathematical specified complexity model with his algorithmic significance method (Milosavljević 1993, 1995). Milosavljević presented his work in the early 1990s, which by tech standards, is ancient times. His specified complexity model used algorithmic information theory to test independence between DNA sequences based on the improbability of encountering a sequence under some probability distribution and the length of the sequence’s compressed encoding relative to a second sequence. A similar method of measuring specified complexity was later independently rediscovered (as great ideas often are) by Ewert, Marks, and Dembski with their algorithmic specified complexity model (Ewert, Dembski, and Marks II 2012, 2015).

Given Milosavljević’s early work with algorithmic significance, mathematical specified complexity models have successfully been used in fields outside of intelligent design for a quarter of a century. A new paper, published in the open-access journal BIO-Complexity, aims to push forward the development of specified complexity methods by developing a detailed mathematical theory of complex specified information.

Unified Models of Specified Complexity

In “A Unified Model of Complex Specified Information,” George D. Montañez introduces a new framework that brings together various specified complexity models by uncovering a shared mathematical identity between them. This shared identity, called the common form, consists of three main components, combined into what is called a kardis function. The components are:

a probability term, p(x),
a specification term, ν(x), and
a scaling constant, r.
For an object x, the first of these gives a sense of how likely the object is to be generated by some probabilistic process modeled by p. When this value is low, the object is not one that is often generated by the process. The specification term, ν(x), captures to what degree x conforms to an independently given specification, modeled as a nonnegative function over the (typically restricted) space of possible objects. When this value is large, the object is considered highly specified. Lastly, the scaling constant r (also called the “replicational resources”) can be interpreted as a normalization factor for the specification values (rescaling the values to some predefined range) or as the number of “attempts” the probabilistic process is given to generate the object in question. (The paper discusses in detail both interpretations of the scaling constant.) Given these components, the kardis function κ(x) is defined as

κ(x) = r [p(x) / ν(x)].

Taking the negative log, base-2, of κ(x) defines the common form for specified complexity models.
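As a concrete illustration, the kardis and its negative log can be sketched in a few lines of Python. (The function names here are our own shorthand for the paper's definitions, not code from the paper.)

```python
import math

def kardis(p_x, nu_x, r):
    """Kardis function: kappa(x) = r * p(x) / nu(x).

    p_x : probability of the object under the modeled process
    nu_x: nonnegative specification value for the object
    r   : scaling constant ("replicational resources")
    """
    return r * p_x / nu_x

def specified_complexity(p_x, nu_x, r):
    """Common-form specified complexity: -log2 of the kardis, in bits."""
    return -math.log2(kardis(p_x, nu_x, r))

# An improbable (p = 2^-40), fully specified (nu = 1) object with
# r = 1 yields 40 bits of specified complexity:
print(specified_complexity(p_x=2**-40, nu_x=1.0, r=1.0))  # 40.0
```

Low probability raises the value, while a small specification term or a large scaling constant lowers it, matching the intuition that an outcome must be both unlikely and specified to score highly.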

Common Form Models

The paper presents Dembski’s semiotic specified complexity and Ewert et al.’s algorithmic specified complexity as common form models, mapping the parts of each model to kardis components. This mapping is done for additional specified complexity models, as well.

Dembski’s semiotic model contains three core components (a probability term P(T|H), a specification term φS(T), and a scaling constant of 10^120), which can be mapped to kardis components as p(x) = P(T|H), ν(x) = φS(T)^(-1), and r = 10^120. Dembski defines his specified complexity as

χ = -log2[10^120 · φS(T) · P(T|H)] = -log2 κ(x),

which we see is a common form model with x = T.

Similarly, Ewert et al.’s algorithmic specified complexity contains a probability term p(x), a specification term ν(x) = 2^(-K(x|c)), and an implicit scaling term r = 1, making it a common form model.

Lastly, Milosavljević’s algorithmic significance model is also of common form, with a kardis containing probability term p0(x), specification term 2^(-IA(x|s)), and implicit scaling constant r = 1. Through this mapping, the connection to algorithmic specified complexity becomes clear, and the model’s status as a form of specified complexity becomes indisputable.

Canonical Specified Complexity

For any common form model, adding the constraint that r is at least as large as the sum (or integral) of specification values over the entire domain of ν yields a canonical specified complexity model. The paper primarily works with canonical models, proving theorems related to them, although some results are also given for simple common form models. Tweaks to some common form models (such as Dembski’s semiotic model and Hazen et al.’s functional information model) allow them to become canonical model variants, to which the theorems derived in the paper apply. Canonical models represent a subset of common form models, and have several interesting properties. These include the scarcity of large values: under any fixed random or semi-random process, the probability of observing large values is strictly bounded (and exponentially small, when large value observations are desired). The paper gives further detail, for those interested.
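To make the constraint concrete, here is a minimal Python sketch, a toy of our own construction rather than code from the paper, that checks the canonical condition on a small finite domain and computes one specified complexity value:

```python
import math

def is_canonical(nu, domain, r):
    """Canonical constraint: r must be at least the sum of
    specification values over the entire domain of nu."""
    return r >= sum(nu(x) for x in domain)

# Toy domain: all 8-bit strings. Specification: 1 for the two constant
# strings ("00000000" and "11111111"), 0 for everything else.
domain = [format(i, "08b") for i in range(256)]

def nu(s):
    return 1.0 if len(set(s)) == 1 else 0.0

r = sum(nu(x) for x in domain)  # smallest admissible r; here 2.0

# Under a uniform chance model p(x) = 1/256, the all-ones string gets
# -log2(r * p(x) / nu(x)) = -log2(2/256) = 7 bits:
sc = -math.log2(r * (1 / 256) / nu("11111111"))
print(is_canonical(nu, domain, r), sc)  # True 7.0
```

Choosing r as small as the constraint allows maximizes the resulting specified complexity values, so the sum of specification values is the natural default.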

The Power of a Good Abstraction

What does it mean for existing specified complexity models to all share a single underlying form? First, it allows us to reason about many specified complexity models simultaneously and prove theorems for them collectively. It allows us to better understand each model, since we can relate it to other specified complexity models. Second, it hints strongly that any attempt to solve the problem of measuring anomalous events will converge on a similar solution, increasing our confidence that the common form represents the solution to the problem. Third, we can build from a simplified framework, clearing away incidental details to focus on the behavior of specified complexity models at their core essence.

Finally, having discovered the common form parameterization, we can establish that Milosavljević’s algorithmic significance model is not just like a specified complexity model, but is a specified complexity model, definitively refuting claims that specified complexity methods have no practical value, are unworkable, or have not been used in applied fields like machine learning or bioinformatics. We have now come to discover that they’ve been in use for at least 25 years. Milosavljević couldn’t access the vocabulary of common forms and canonical models, so what he saw as the difference between a surprisal term and its compressed relative encoding, we now more clearly see as a compression-based canonical specified complexity model.

Symbols in Steel and Stone

Returning to your winter retreat, mentioned in the last post, the symbols you discovered remain on your mind. A portion of the symbols, those on the metal pieces, you’ve been able to map to numbers coded in a base-7 number system. Your conviction is strengthened once you realize the numbers include the sequence of primes from 2 to 31. You imagine that some ancient mathematician etched the symbols into the metal, someone who either had knowledge of the primes or did not. If they did not, there would be some probability p(x) that they produced the sequence x without intention.
Given that the sequence also matches an independent pattern (primes), you ask yourself: how many sequences using the first thirty-one positive integers would match any recognizable numeric pattern, of which the primes are just one example? The On-Line Encyclopedia of Integer Sequences (OEIS) holds an estimated 300,000 sequences which could serve as a pattern (to someone more knowledgeable than yourself). You imagine that perhaps this number underestimates the number of interesting patterns, so you double it to be safe, and assume 600,000 possible matchable patterns, of which the prime sequence is just one instance.
You’ve spent time studying the manuscript on specified complexity you brought along with you, and are eager to understand your discovery in light of the framework it presents, that of canonical specified complexity. You let the space of possible sequences be all the 31^11 sequences of length 11 using the first 31 positive integers. You let ν(x) equal one whenever sequence x exists in the OEIS repository (representing an “interesting” number pattern), and upper bound r by 600,000, the number of interesting patterns that possible sequences could match. You know these are rough estimates which will undoubtedly need to be revised in the future, but you’d like to get a sense of just how anomalous the sequence you’ve discovered actually is. Your instinct tells you “very,” but mapping your quantities to a canonical model kardis gives you the first step along a more objective path, and you turn to the paper to see what you can infer about the origin of your sequence based on your model. You have much work ahead, but after many more hours of study and reflection, the darkening night compels you to set aside your workbook and get some rest.
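Plugging the scenario's rough numbers into the common form gives a quick back-of-the-envelope figure. (This calculation is our illustration of the story's estimates, not a result from the paper.)

```python
import math

# Rough numbers from the scenario above (estimates, not measured data):
p_x = 1 / 31**11   # uniform chance model over all length-11 sequences
nu_x = 1.0         # the prime sequence matches an OEIS pattern
r = 600_000        # generous upper bound on matchable patterns

sc_bits = -math.log2(r * p_x / nu_x)
print(round(sc_bits, 1))  # about 35.3 bits
```

Even after charging the full 600,000 patterns against it, the sequence retains roughly 35 bits of specified complexity, corresponding to a bound of about one in 40 billion on the probability of a chance process producing an outcome this anomalous.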

Bibliography

Dembski, William A. 2001. “Detecting Design by Eliminating Chance: A Response to Robin Collins.” Christian Scholar’s Review 30 (3): 343–58.
———. 2002. No Free Lunch: Why Specified Complexity Cannot Be Purchased Without Intelligence. Lanham: Rowman & Littlefield.
———. 2005. “Specification: The Pattern That Signifies Intelligence.” Philosophia Christi 7 (2): 299–343. https://doi.org/10.5840/pc20057230.
Ewert, Winston, William A. Dembski, and Robert J. Marks II. 2012. “Algorithmic Specified Complexity.” Engineering and Metaphysics. https://doi.org/10.33014/isbn.0975283863.7.
———. 2015. “Algorithmic Specified Complexity in the Game of Life.” IEEE Transactions on Systems, Man, and Cybernetics: Systems 45 (4): 584–94. https://doi.org/10.1109/TSMC.2014.2331917.
Milosavljević, Aleksandar. 1993. “Discovering Sequence Similarity by the Algorithmic Significance Method.” Proc Int Conf Intell Syst Mol Biol 1: 284–91.
———. 1995. “Discovering Dependencies via Algorithmic Mutual Information: A Case Study in DNA Sequence Comparisons.” Machine Learning 21 (1-2): 35–50. https://doi.org/10.1007/BF00993378.

Finally, something we all agree on: The war on science is real.

Fetal Pain – Another Case Where the “Science Denial” Insult Has Been Misapplied
David Klinghoffer | @d_klinghoffer

In a New York Times op-ed, law professor Mary Ziegler lashes pro-life advocates for claiming to have “science on their side” and “praising legal restrictions based on what science supposedly says about fetal pain.”

“Supposedly”? It’s a blatant untruth that “fetuses cannot feel pain,” as neuroscientist Michael Egnor explains at Mind Matters. Quite the contrary, fetuses are more sensitive to pain than mature humans like you and me. This makes sense if you think about it for a moment, especially if you’re a parent: sensations that we would shrug off, babies find excruciating and intolerable. Did you think this sensitivity pops into being out of nowhere at birth?

A Timely Reminder

From “The Junk Science of the Abortion Lobby,” a particularly timely reminder as New York State celebrates its barbaric new abortion law:

The science of fetal pain is also well established. The core of the abortionists’ argument against the fact that young fetuses in the womb feel pain is the immaturity of the thalamocortical connections in the fetal nervous system. Because of this neurological immaturity, pro-abortionists claim, fetuses cannot feel pain. This claim is, however, a profound misrepresentation of neuroscience and embryology. Thalamocortical projections continue to mature throughout childhood and into early adult life — they are not fully mature until about 25 years of age. Yet children obviously feel pain, so the immaturity of thalamocortical projections does not in any way preclude the experience of pain.

In fact, pain is unlike any other sensory modality in that pain appears to enter consciousness awareness at subcortical (probably thalamic) levels. The cerebral cortex is not necessary to experience pain — it appears to modulate the experience of pain. Furthermore, cortical modulation of pain serves to diminish the severity of the pain. Decorticate animals (animals whose cerebral cortex has been removed) appear to experience pain more intensely than corticate animals do.
Babies obviously experience pain, and indeed appear to experience it more intensely than do adults. A bit of intestinal gas or a needle prick to draw blood can cause an infant to scream in agony. This extreme sensitivity to pain in young fetuses and babies is well-recognized in medical science and forms the basis for the anesthetic management of fetuses and young infants.

Read the rest at Mind Matters. The “anti-science” aka “science denial” label is slung around a lot, mostly at those who fail to line up with the expected progressive viewpoint on any given issue. As Dr. Egnor notes, this is another case where the insult has been misapplied.

Saturday 19 January 2019

Toward a testable design filter?

Measuring Surprise — A Frontier of Design Theory
Evolution News @DiscoveryCSC


The sunlight shines bright on the cold winter’s morning as you begin your trek towards the retreat. Snow covers the ground and steam from your breath rises ahead of you. Accompanying you is Bertrand, your Russell terrier, who runs ahead of you jumping in the snow. Chasing a bird, he climbs over a hill as you call after him, but he is too focused on the pursuit to heed you. 

Clumsily chasing after him, you come upon a strange-looking stone protruding from one of the rock faces. Its odd shape catches your eye, as does its relatively smooth surface. There appear to be runes carved into its surface, though you aren’t sure, since you don’t recognize the symbols or know of any literate ancient cultures from the area.

You decide to leave the stone as you found it, but mark its location and pull a notepad from your backpack to sketch the stone with its symbols. Bertrand, tired from his chase, joins you and begins digging nearby, where he unearths what appears to be a piece of aged metal, again with symbols you do not recognize. The symbols differ from those carved in the rock, are more refined, and almost appear to be numeric. 

Gently moving more earth, you discover a second piece of twisted metal, and you add drawings of these pieces to your sketchbook, resisting the urge to take the pieces with you. After sketching, you continue your trek towards your retreat. On arriving, you contact the local university about your discovery, helping them to locate the artifacts on the following day.

You’ve come to the retreat to study. You’ve brought several books from your office, along with a manuscript on the subject of complex specified information. As you read the manuscript, you begin applying the ideas to your discovery in the hills. What could have created the carvings? 

The carvings look sustained (there are many of them) and deliberate, unlike creases created by splitting and pitting of surfaces over ages. You’re no geologist, but you are also no stranger to rock surfaces, possessing a mature mental model of the types of patterns that can be expected to appear on stone faces. The patterns are geometric but irregular, complex and without any apparent repetition, unlike other geological anomalies such as the Giant’s Causeway of Ireland. 

The runes were most likely carvings, made by people in some unknown past. Could you compute some estimate of how likely a series of runes like this (or in any other symbol system) would be to appear through a process of weathering? That seems like a challenging task, but the metal pieces present perhaps a less formidable challenge, since you are almost certain they represent numbers.

You set out to discover whether you can quantify your intuition that the carvings are special, using the tool of specified complexity.

Unlikely Yet Structurally Organized

What is specified complexity? Almost a decade before the discovery of the structure of the DNA molecule, physicist Erwin Schrödinger predicted that hereditary material must be stored in what he called an aperiodic crystal, stable yet without predictable repetition, since predictable repetition would greatly reduce its information carrying capacity (Schrödinger 1944). 

Starting from first principles, he reasoned that life would need an informational molecule that could take on a large number of possible states without strong bias towards any one particular state (thus making individual states improbable), yet needed structural stability to counteract the forces of Brownian motion within cells (thus making the molecule match a functional specification of being structurally organized). 

This combination of unlikely objects that simultaneously match a functional specification later came to be known as specified complexity (Dembski 1998; Dembski 2001; Dembski 2002; Dembski 2005; Ewert, Dembski, and Marks II 2012). Specified complexity has been proposed as a signal of design (Dembski 1998; Dembski 2001; Dembski 2002). An object exhibiting specified complexity is unlikely to have been produced by the probabilistic process under which it is being measured and it is also specified, matching some independently given pattern called a specification. More precisely, the degree to which an object meets some independently defined criterion in a way that not many objects do is the degree to which the object can be said to be specified. 

Because complex objects typically contain many parts, each of which makes the overall probability of the object being encountered less likely, the improbability aspect has historically been referred to as the complexity of the object (though, improbability would perhaps be more fitting). Therefore, specified complex objects are those that are both unlikely and functionally specified, often having to meet minimum thresholds in both categories.

Quantifying Surprise

Specified complexity allows us to measure how surprising random outcomes are, in reference to some probabilistic model. But there are other ways of measuring surprise. In Shannon’s celebrated information theory (Shannon 1948), improbability alone can be used to measure the surprise of observing a particular random outcome, using the quantity of surprisal, which is simply the negative logarithm (base 2) of the probability of observing the outcome, namely,

-log2 p(x)

where x is the observed outcome and p(x) is the probability of observing it under some distribution p. Unlikely outcomes generate large surprisal values, since they are in some sense unexpected.
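Surprisal is a one-liner in code; this small sketch (our own illustration) shows how improbability alone maps to bits:

```python
import math

def surprisal(p_x):
    """Shannon surprisal (self-information) in bits: -log2 p(x)."""
    return -math.log2(p_x)

print(surprisal(0.5))   # 1.0 bit: a fair coin flip
print(surprisal(1e-6))  # ~19.93 bits: a one-in-a-million outcome
```

The quantity grows without bound as the probability shrinks, which is exactly the behavior that creates the paradox discussed next.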

But let us consider a case where all events in a set of possible outcomes are equally very unlikely. (This can happen when you have an extremely large number of equally possible outcomes, so that each of them individually has a small chance of occurring.) 

Under these conditions, asking “what is the probability that an unlikely event occurs?” yields the somewhat paradoxical answer that it is guaranteed to occur! Some outcome must occur, and since each of them is unlikely, an unlikely event (with large surprisal) is guaranteed to occur. Therefore, surprisal alone cannot tell us how likely we are to witness an outcome that surprises us.

As a concrete example, consider any sequence of one hundred coin flips generated by flipping a fair coin. Every sequence has an equal probability of occurring, giving the same surprisal for each possible sequence. Therefore a sequence of all heads has the exact same surprisal as a random sequence of one hundred zeros and ones, even though the former is surely more surprising than the latter under a fair coin model.

We need another way to capture what it means for an outcome to be special and surprising, one that would allow us to say a sequence of all heads generated by a fair coin is surprising, but a sequence of randomly mixed zeros and ones is not. Specified complexity provides a mathematical means of doing so, by combining a surprisal term with a specification term, allowing us to precisely determine how surprising it is to witness an outcome of one hundred heads in a row assuming a fair coin.
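Ewert et al.'s algorithmic specified complexity does this by letting the specification term reward describable, compressible outcomes. In the rough Python sketch below, we use zlib's compressed size as a computable stand-in for the uncomputable conditional Kolmogorov complexity K(x|c); that proxy is our own assumption for illustration, not the exact construction from the literature.

```python
import random
import zlib

def asc_estimate(bits):
    """Rough algorithmic-specified-complexity estimate for a binary
    string under a fair-coin model: surprisal minus description length.
    zlib's compressed size (in bits) stands in for K(x|c)."""
    n = len(bits)                                    # -log2 p(x) = n, fair coin
    k_proxy = 8 * len(zlib.compress(bits.encode()))  # compressed bits
    return n - k_proxy                               # -log2 [p(x) / 2^(-K)]

random.seed(0)
heads = "1" * 100
noise = "".join(random.choice("01") for _ in range(100))

# All-heads compresses well, so its estimate stays positive; the
# jumbled sequence does not, so its estimate goes negative:
print(asc_estimate(heads) > asc_estimate(noise))  # True
```

Both sequences have identical surprisal (100 bits), but only the all-heads sequence has a short description, so only it retains a large specified complexity estimate.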

Diving into Specified Complexity

How does specified complexity allow us to do this? A recently published paper in BIO-Complexity, “A Unified Model of Complex Specified Information” by machine learning researcher George D. Montañez, offers some insight. For a reader-friendly summary see, “BIO-Complexity Article Offers an Objective Method for Weighing Darwinian Explanations.”

The paper, which is mathematical in nature, ties together several existing models of specified complexity and introduces a canonical form for which objects exhibiting large specified complexity values are unlikely (surprising!) under any given distribution. Montañez builds on much previous work, fleshing out the equivalence between specified complexity testing and p-value hypothesis testing introduced by A. Milosavljević (Milosavljević 1993; Milosavljević 1995) and later William Dembski (Dembski 2005), and giving bounds on the probability of encountering large specified complexity values for existing specified complexity models. 

The paper defines new canonical specified complexity model variants, and gives a recipe for creating specified complexity models using specification functions of your choice. It lays out a framework for reasoning quantitatively about what it means for a probabilistic outcome to be genuinely surprising, and explores what implications this has for technology and for explanations of observed outcomes.

We’ll have more to say about this important paper, which represents a frontier for the theory of intelligent design. Stay tuned.

Bibliography

Dembski, William A. 1998. The Design Inference: Eliminating Chance Through Small Probabilities. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511570643.
———. 2001. “Detecting Design by Eliminating Chance: A Response to Robin Collins.” Christian Scholar’s Review 30 (3): 343–58.
———. 2002. No Free Lunch: Why Specified Complexity Cannot Be Purchased Without Intelligence. Lanham: Rowman & Littlefield.
———. 2005. “Specification: The Pattern That Signifies Intelligence.” Philosophia Christi 7 (2): 299–343. https://doi.org/10.5840/pc20057230.
Ewert, Winston, William A. Dembski, and Robert J. Marks II. 2012. “Algorithmic Specified Complexity.” Engineering and Metaphysics. https://doi.org/10.33014/isbn.0975283863.7.
Milosavljević, Aleksandar. 1993. “Discovering Sequence Similarity by the Algorithmic Significance Method.” In ISMB, 284–91.
———. 1995. “Discovering Dependencies via Algorithmic Mutual Information: A Case Study in DNA Sequence Comparisons.” Machine Learning 21 (1-2): 35–50.
Schrödinger, Erwin. 1944. What Is Life? The Physical Aspect of the Living Cell and Mind. Cambridge: Cambridge University Press.
Shannon, Claude Elwood. 1948. “A Mathematical Theory of Communication.” Bell System Technical Journal 27 (3): 379–423.
Photo credit: A stone carved with ancient runes, by Lindy Buckley, via Flickr (cropped).

The Iron Lady saved Britain? Pros and cons.

Darwinian apologists are pounding the table again.

Fact-Check: Louisiana's Science Education Act Does NOT Authorize Teaching Creationism
Sarah Chaffee 

In an article at Vox, a website that offers to "explain the news" for readers, Sean Illing shares an interview with science educator Amanda Glaze. Unfortunately, in "Teaching evolution in the South: an educator on the 'war for science literacy,'" he repeats the mistake of many media sources, mischaracterizing an academic freedom law as authorizing instructors to teach creationism.

I lived and taught in Louisiana until recently, and there you had a well-educated Republican governor [Bobby Jindal] who was backing a law that allowed creationism to be taught in public school science classes. And he had the overwhelming support of the state legislature.

This is incorrect. Permit me to explain to Sean Illing. The law that he refers to, the Louisiana Science Education Act (LSEA), does not authorize the teaching of creationism. Rather, it permits teachers to present the scientific evidence both for and against neo-Darwinism. (Illing also contests whether there is indeed a scientific debate -- more about the evidence and controversy here.)

The text of the law includes the following statement:

This Section shall not be construed to promote any religious doctrine, promote discrimination for or against a particular set of religious beliefs, or promote discrimination for or against religion or nonreligion.

As I mentioned in a previous article on the LSEA:

Let's be clear: If a teacher presents creationism and is sued, the LSEA will offer that teacher no protection.... In any event, teaching creationism in public schools is unconstitutional according to the Supreme Court (Edwards v. Aguillard, 482 U.S. 578).

Louisiana's academic freedom law serves the purpose of giving teachers who would like to present both sides of the scientific controversy over evolution the freedom to do so without fear of retaliation. But media accounts often fail to portray this clearly.

Romancing the theory?


Beauty ≠ truth

Scientists prize elegant theories, but a taste for simplicity is a treacherous guide. And it doesn’t even look good
Albert Einstein's theory of general relativity is a century old next year and, as far as the test of time is concerned, it seems to have done rather well. For many, indeed, it doesn’t merely hold up: it is the archetype for what a scientific theory should look like. Einstein’s achievement was to explain gravity as a geometric phenomenon: a force that results from the distortion of space-time by matter and energy, compelling objects – and light itself – to move along particular paths, very much as rivers are constrained by the topography of their landscape. General relativity departs from classical Newtonian mechanics and from ordinary intuition alike, but its predictions have been verified countless times. In short, it is the business.
Einstein himself seemed rather indifferent to the experimental tests, however. The first came in 1919, when the British physicist Arthur Eddington observed the Sun’s gravity bending starlight during a solar eclipse. What if those results hadn’t agreed with the theory? (Some accuse Eddington of cherry-picking the figures anyway, but that’s another story.) ‘Then,’ said Einstein, ‘I would have been sorry for the dear Lord, for the theory is correct.’

That was Einstein all over. As the Danish physicist Niels Bohr commented at the time, he was a little too fond of telling God what to do. But this wasn’t sheer arrogance, nor parental pride in his theory. The reason Einstein felt general relativity must be right is that it was too beautiful a theory to be wrong.

This sort of talk both delights today’s physicists and makes them a little nervous. After all, isn’t experiment – nature itself – supposed to determine truth in science? What does beauty have to do with it? ‘Aesthetic judgments do not arbitrate scientific discourse,’ the string theorist Brian Greene reassures his readers in The Elegant Universe (1999), the most prominent work of physics exposition in recent years. ‘Ultimately, theories are judged by how they fare when faced with cold, hard, experimental facts.’ Einstein, Greene insists, didn’t mean to imply otherwise – he was just saying that beauty in a theory is a good guide, an indication that you are on the right track.
Einstein isn’t around to argue, of course, but I think he would have done. It was Einstein, after all, who said that ‘the only physical theories that we are willing to accept are the beautiful ones’. And if he was simply defending theory against too hasty a deference to experiment, there would be plenty of reason to side with him – for who is to say that, in case of a discrepancy, it must be the theory and not the measurement that is in error? But that’s not really his point. Einstein seems to be asserting that beauty trumps experience come what may.
He wasn’t alone. Here’s the great German mathematician Hermann Weyl, who fled Nazi Germany to become a colleague of Einstein’s at the Institute for Advanced Study in Princeton: ‘My work always tries to unite the true with the beautiful; but when I had to choose one or the other, I usually chose the beautiful.’ So much for John Keats’s ‘Beauty is truth, truth beauty.’ And so much, you might be tempted to conclude, for scientists’ devotion to truth: here were some of its greatest luminaries, pledging obedience to a different calling altogether.
Was this kind of talk perhaps just the spirit of the age, a product of fin de siècle romanticism? It would be nice to think so. In fact, the discourse about aesthetics in scientific ideas has never gone away. Even Lev Landau and Evgeny Lifshitz, in their seminal but pitilessly austere midcentury Course of Theoretical Physics, were prepared to call general relativity ‘probably the most beautiful of all existing theories’. Today, popularisers such as Greene are keen to make beauty a selling point of physics. Writing in this magazine last year, the quantum theorist Adrian Kent speculated that the very ugliness of certain modifications of quantum mechanics might count against their credibility. After all, he wrote, here was a field in which ‘elegance seems to be a surprisingly strong indicator of physical relevance’.
We have to ask: what is this beauty they keep talking about?
Some scientists are a little coy about that. The Nobel Prize-winning physicist Paul Dirac agreed with Einstein, saying in 1963 that ‘it is more important to have beauty in one’s equations than to have them fit experiment’ (how might Greene explain that away?). Yet faced with the question of what this all-important beauty is, Dirac threw up his hands. Mathematical beauty, he said, ‘cannot be defined any more than beauty in art can be defined’ – though he added that it was something ‘people who study mathematics usually have no difficulty in appreciating’. That sounds rather close to the ‘good taste’ of his contemporaneous art critics; we might fear that it amounts to the same mixture of prejudice and paternalism.


Given this history of evasion, it was refreshing last November to hear the theoretical physicist Nima Arkani-Hamed spell out what ‘beauty’ really means for him and his colleagues. He was talking to the novelist Ian McEwan at the Science Museum in London, during the opening of the museum’s exhibition on the Large Hadron Collider. ‘Ideas that we find beautiful,’ Arkani-Hamed explained, ‘are not a capricious aesthetic judgment’:
It’s not fashion, it’s not sociology. It’s not something that you might find beautiful today but won’t find beautiful 10 years from now. The things that we find beautiful today we suspect would be beautiful for all eternity. And the reason is, what we mean by beauty is really a shorthand for something else. The laws that we find describe nature somehow have a sense of inevitability about them. There are very few principles and there’s no possible other way they could work once you understand them deeply enough. So that’s what we mean when we say ideas are beautiful.
Does this bear any relation to what beauty means in the arts? Arkani-Hamed had a shot at that. Take Ludwig van Beethoven, he said, who strove to develop his Fifth Symphony in ‘perfect accordance to its internal logical structure’.
It is precisely this that delights mathematicians in a great proof: not that it is correct but that it shows a tangibly human genius
Beethoven is indeed renowned for the way he tried out endless variations and directions in his music, turning his manuscripts into inky thickets in his search for the ‘right’ path. Novelists and poets, too, can be obsessive in their pursuit of the mot juste. Reading the novels of Patrick White or the late works of Penelope Fitzgerald, you get the same feeling of almost logical necessity, word by perfect word.
But you notice this quality precisely because it is so rare. What generally brings a work of art alive is not its inevitability so much as the decisions that the artist made. We gasp not because the words, the notes, the brushstrokes are ‘right’, but because they are revelatory: they show us not a deterministic process but a sensitive mind making surprising and delightful choices. In fact, pure mathematicians often say that it is precisely this quality that delights them in a great proof: not that it is correct but that it shows a personal, tangibly human genius taking steps in a direction we’d never have guessed.
‘The things that we find beautiful today we suspect would be beautiful for all eternity’: here is where Arkani-Hamed really scuppers the notion that the kind of beauty sought by science has anything to do with the major currents of artistic culture. After all, if there’s one thing you can say about beauty, it is that the beholder has a lot to do with it. We can still find beauty in the Paleolithic paintings at Lascaux and the music of William Byrd, while admitting that a heck of a lot of beauty really is fashion and sociology. Why shouldn’t it be? How couldn’t it be? We still swoon at Jan van Eyck. Would van Eyck’s audience swoon at Mark Rothko?
The gravest offenders in this attempted redefinition of beauty are, of course, the physicists. This is partly because their field has always been heir to Platonism – the mystical conviction of an orderly cosmos. Such a belief is almost a precondition for doing physics in the first place: what’s the point in looking for rules unless you believe they exist? The MIT physicist Max Tegmark now goes so far as to say that mathematics constitutes the basic fabric of reality, a claim redolent of Plato’s most extreme assertions in Timaeus.


But Platonism will not connect you with the mainstream of aesthetic thought – not least because Plato himself was so distrustful of art (he banned the lying poets from his Republic, after all). Better that we turn to Immanuel Kant. Kant expended considerable energies in his Critique of Judgment (1790) trying to disentangle the aesthetic aspects of beauty from the satisfaction one feels in grasping an idea or recognising a form, and it does us little good to jumble them up again. All that conceptual understanding gives us, he concluded, is ‘the solution that satisfies the problem… not a free and indeterminately final entertainment of the mental powers with what is called beautiful’. Beauty, in other words, is not a resolution: it opens the imagination.
Physicists might be the furthest gone along Plato’s trail, but they are not alone. Consider the many chemists whose idea of beauty seems to be dictated primarily by the molecules they find pleasing – usually because of some inherent mathematical symmetry, such as in the football-shaped carbon molecule buckminsterfullerene (strictly speaking, a truncated icosahedron). Of course, this is just another instance of mathematics-worship, yoking beauty to qualities of regularity that were not deemed artistically beautiful even in antiquity. Brian Greene claims: ‘In physics, as in art, symmetry is a key part of aesthetics.’ Yet for Plato it was precisely art’s lack of symmetry (and thus intelligibility) that denied it access to real beauty. Art was just too messy to be beautiful.
In seeing matters the other way around, Kant speaks for the mainstream of artistic aesthetics: ‘All stiff regularity (such as approximates to mathematical regularity) has something in it repugnant to taste.’ We weary of it, as we do a nursery rhyme. Or as the art historian Ernst Gombrich put it in 1988, too much symmetry ensures that ‘once we have grasped the principle of order… it holds no more surprise’. Artistic beauty, Gombrich believed, relies on a tension between symmetry and asymmetry: ‘a struggle between two opponents of equal power, the formless chaos, on which we impose our ideas, and the all-too-formed monotony, which we brighten up by new accents’. Even Francis Bacon (the 17th-century proto-scientist, not the 20th-century artist) understood this much: ‘There is no excellent beauty that hath not some strangeness in the proportion.’
Perhaps I have been a little harsh on the chemists – those cube- and prism-shaped molecules are fun in their own way. But Bacon, Kant and Gombrich are surely right to question their aesthetic merit. As the philosopher of chemistry Joachim Schummer pointed out in 2003, it is simply parochial to redefine beauty as symmetry: doing so cuts one off from the dominant tradition in artistic theory. There’s a reason why our galleries are not, on the whole, filled with paintings of perfect spheres.
Why shouldn’t scientists be allowed their own definition of beauty? Perhaps they should. Yet isn’t there a narrowness to the standard that they have chosen? Even that might not be so bad, if their cult of ‘beauty’ didn’t seem to undermine the credibility of what they otherwise so strenuously assert: the sanctity of evidence. It doesn’t matter who you are, they say, how famous or erudite or well-published: if your theory doesn’t match up to nature, it’s history. But if that’s the name of the game, why on earth should some vague notion of beauty be brought into play as an additional arbiter?


Because of experience, they might reply: true theories are beautiful. Well, general relativity might have turned out OK, but plenty of others have not. Take the four-colour theorem: the proposal that it is possible to colour any arbitrary patchwork in just four colours without any patches of the same colour touching one another. In 1879 it seemed as though the British mathematician Alfred Kempe had found a proof – and it was widely accepted for a decade, because it was thought beautiful. It was wrong. The current proof is ugly as heck – it relies on a brute-force exhaustive computer search, which some mathematicians refuse to accept as a valid form of demonstration – but it might turn out to be all there is. The same goes for Andrew Wiles’s proof of Fermat’s Last Theorem, first announced in 1993. The basic theorem is wonderfully simple and elegant, the proof anything but: 100 pages long and more complex than the Pompidou Centre. There’s no sign of anything simpler.
It’s not hard to mine science history for theories and proofs that were beautiful and wrong, or complicated and right. No one has ever shown a correlation between beauty and ‘truth’. But it is worse than that, for sometimes ‘beauty’ in the sense that many scientists prefer – an elegant simplicity, to put it in crude terms – can act as a fake trump card that deflects inquiry. In one little corner of science that I can claim to know reasonably well, an explanation from 1959 for why water-repelling particles attract when immersed in water (that it’s an effect of entropy, there being more disordered water molecules when the particles stick together) was so neat and satisfying that it continues to be peddled today, even though the experimental data show that it is untenable and that the real explanation probably lies in a lot of devilish detail.
I would be thrilled if the artist were to say to the scientist: ‘No, we’re not even on the same page’
Might it even be that the marvellous simplicity and power of natural selection strikes some biologists as so beautiful an idea – an island of order in a field otherwise beset with caveats and contradictions – that it must be defended at any cost? Why else would attempts to expose its limitations, exceptions and compromises still ignite disputes pursued with near-religious fervour?
The idea that simplicity, as distinct from beauty, is a guide to truth – the idea, in other words, that Occam’s Razor is a useful tool – seems like something of a shibboleth in itself. As these examples show, it is not reliably correct. Perhaps it is a logical assumption, all else being equal. But it is rare in science that all else is equal. More often, some experiments support one theory and others another, with no yardstick of parsimony to act as referee.
We can be sure, however, that simplicity is not the ultimate desideratum of aesthetic merit. Indeed, in music and visual art, there appears to be an optimal level of complexity below which preference declines. A graph of enjoyment versus complexity has the shape of an inverted U: there is a general preference for, say, ‘Eleanor Rigby’ over both ‘Baa Baa Black Sheep’ and Pierre Boulez’s Structures Ia, just as there is for lush landscapes over monochromes. For most of us, our tastes eschew the extremes.
Ironically, the quest for a ‘final theory’ of nature’s deepest physical laws has meant that the inevitability and simplicity that Arkani-Hamed prizes so highly now look more remote than ever. For we are now forced to contemplate no fewer than 10^500 permissible variants of string theory. It’s always possible that 10^500 minus one of them might vanish at a stroke, thanks to the insight of some future genius. Right now, though, the dream of elegant fundamental laws lies in bewildering disarray.


An insistence that the ‘beautiful’ must be true all too easily elides into an empty circularity: what is true must therefore be beautiful. I see this in the conviction of many chemists that the periodic table, with all its backtracking sequences of electron shells, its positional ambiguities for elements such as hydrogen and unsightly bulges that the flat page can’t constrain, is a thing of loveliness. There, surely, speaks the voice of duty, not genuine feeling. The search for an ideal, perfect Platonic form of the table amid spirals, hypercubes and pyramids has an air of desperation.
Despite all this, I don’t want scientists to abandon their talk of beauty. Anything that inspires scientific thinking is valuable, and if a quest for beauty – a notion of beauty peculiar to science, removed from art – does that, then bring it on. And if it gives them a language in which to converse with artists, rather than standing on soapboxes and trading magisterial insults like C P Snow and F R Leavis, all the better. I just wish they could be a bit more upfront about the fact that they are (as is their wont) torturing a poor, fuzzy, everyday word to make it fit their own requirements. I would be rather thrilled if the artist, rather than accepting this unified pursuit of beauty (as Ian McEwan did), were to say instead: ‘No, we’re not even on the same page. This beauty of yours means nothing to me.’
If, on the other hand, we want beauty in science to make contact with aesthetics in art, I believe we should seek it precisely in the human aspect: in ingenious experimental design, elegance of theoretical logic, gentle clarity of exposition, imaginative leaps of reasoning. These things are not vital for a theory that works, an experiment that succeeds, an explanation that enchants and enlightens. But they are rather lovely. Beauty, unlike truth or nature, is something we make ourselves.


Philip Ball will be appearing in London on July 7 to talk about this article. This event is organised by The Browser in association with Aeon and Prospect Magazine.

Saturday 12 January 2019

Neo-Darwinism's star witness defects.

Genetics and Epigenetics — New Problems for Darwinism
Evolution News @DiscoveryCSC


New findings in genetics and epigenetics are creating new problems for evolution. The simplistic version of neo-Darwinism expects all variation to come from genetic mutations, which nature selects for fitness. Non-coding DNA was relegated to the junk pile — trash left over from natural selection, which favors DNA that codes for proteins. In a notion called subfunctionalization, copies of genes might be free to mutate and become new proteins, or decay into “pseudogenes,” one type of junk DNA. As so often happens, the simplistic theory is turning out to be wrong.

How Many Genes?

The Human Genome Project ended with a surprisingly low number of genes. But what if they missed some? Researchers at Yale have been finding genes that were misidentified as non-protein coding due to the methods previous researchers used to annotate them. One of the newly identified genes, they say, plays a key role in the immune system. Are there others?

The findings suggest many more protein-coding genes and functions may be discovered. “A large portion of important protein-coding genes have been missed by virtue of their annotation,” said first author Ruaidhri Jackson. Without vetting and identifying these genes, “we can’t fully understand the protein-coding genome or adequately screen genes for health and disease purposes.” 

The first sentence of their paper in Nature says, “The annotation of the mammalian protein-coding genome is incomplete.” They have identified a “large number of RNAs that were previously annotated as ‘non-protein coding,’” some of which are “potentially important transcripts” able to make protein. Restrictive methods in the past “may obscure the essential role of a multitude of previously undiscovered protein-coding genes.”

Epigenetics in Archaea

Do epigenetic inheritance and regulation work only in eukaryotes? No. Scientists at the University of Nebraska-Lincoln discovered that members of the “simple” kingdom of Archaea also have them. They watched microbes inherit extreme acid resistance in Yellowstone hot springs not through genetics, but through epigenetics.
    “The surprise is that it’s in these relatively primitive organisms, which we know to be ancient,” said Paul Blum, Charles Bessey Professor of Biological Sciences at Nebraska. “We’ve been thinking about this as something (evolutionarily) new. But epigenetics is not a newcomer to the planet.”

The discovery “raises questions … about how both eukaryotes and archaea came to adopt epigenetics as a method of inheritance.” Now they have to confront whether an even earlier common ancestor had it, or whether it evolved twice. “It’s a really interesting concept from an evolutionary perspective,” said a doctoral student involved in the research. Critics of neo-Darwinism might describe those alternatives differently from just “interesting.” Ridiculous, perhaps, or falsifying.

Epigenetics in Plants

Briefly, a paper in PNAS finds that “Partial maintenance of organ-specific epigenetic marks during plant asexual reproduction leads to heritable phenotypic variation.” Why do clones, with identical genomes, differ? The answer is epigenetics.
   We found that phenotypic novelty in clonal progeny was linked to epigenetic imprints that reflect the organ used for regeneration. Some of these organ-specific imprints can be maintained during the cloning process and subsequent rounds of meiosis. Our findings are fundamental for understanding the significance of epigenetic variability arising from asexual reproduction and have significant implications for future biotechnological applications.
    
Non-Genetic Order

Here’s a cellular phenomenon that really is interesting, because it reveals a newly discovered structural order in the cell membrane. This structural order surely is inherited somehow, but may have little to do with genes. Biochemists had thought for a century that the inner space in the membrane is fluid and disordered, but techniques to probe that space have been difficult because the detergents used disrupt the membrane. Now, researchers at Virginia Commonwealth University, in conjunction with Nobel laureate Joachim Frank, used a new method without detergents. They were surprised — no, startled — to find an orderly hexagonal 3-D structure between the molecules in the lipid bilayer. Is there a reason for this orderly structure?
     Where earlier models had shown a fluid, almost structureless lipid layer — one often-cited research paper compared it to different weights of olive oil poured together — the VCU-led team was startled to find a distinct hexagonal structure inside the membrane. This led the researchers to propose that the lipid layer might act as both sensor and energy transducer within a membrane-protein transporter.

“The most surprising outcome is the high order with which lipid molecules are arranged, and the idea they might even cooperate in the functional cycle of the export channel,” said Joachim Frank, Ph.D., of Columbia University, a 2017 Nobel laureate in chemistry and co-author of the paper. “It is counterintuitive since we have learned that lipids are fluid and disordered in the membrane.”
Their paper in PNAS says nothing about genetics, so maybe this comes about through physical interactions of the lipids and the protein channels. Whatever causes this orderly arrangement, it appears to interact with transmembrane channels, adapting to the conformational changes of the proteins, particularly a transporter called AcrB. Without the hexagonal mesh around the channel, and just a disordered fluid, the channel’s action might be less efficient, like a boxer without a sparring partner beating the air. Not only that, the hexagonal mesh also transmits the channel’s activity down the membrane to its neighbors. Fascinating!
       Through defined protein contacts, the lipid bilayer senses the conformational changes that occur in each TM [transmembrane] domain and then transduces effects of these changes through the lipid bilayer to neighboring protomers in a viscous interplay between cavity lipids and the AcrB trimer.
                 
Another Blow to the Central Dogma

Mauro Modesti gives his perspective on a new finding in Science, “A pinch of RNA spices up DNA repair.” The Central Dogma of genetics, which views DNA as the master molecule controlling everything downstream with no feedback, has been eroding since it was first taught in the 1960s. In the same issue of Science, a paper reveals that RNA plays an essential role in DNA repair. What does this mean? Modesti explains,
                   Pryor et al. report the surprising discovery that ribonucleotides are frequently incorporated at broken DNA ends, which enhances repair. This important finding overturns the central dogma of molecular biology by demonstrating that transient incorporation of ribonucleotides in DNA has a biological function.

Genetic Determinism Lives On

The idea that humans are pawns of their genes has a long history, mostly negative. Genetic determinism undermines free will and character, giving people something physical to blame for their problems. Materialists continue the bad habit, though, as shown in this paper in Nature Scientific Reports, “A genetic perspective on the relationship between eudaimonic and hedonic well-being.” The news from the University of Amsterdam puts it bluntly: “Discovery of first genetic variants associated with meaning in life.” But can something as psychological, or even spiritual, as meaning in life be reduced to genes?

They checked DNA samples of 220,000 individuals, and had them answer a questionnaire. The genetic variants, they say, “are mainly expressed in the central nervous system, showing the involvement of different brain areas.” 

“These results show that genetic differences between people not only play a role in differences in happiness, but also in differences in meaning in life. By meaning in life, we mean the search for meaning or purpose of life.”

Did these researchers ever learn that correlation is not causation? Did they inspect their own genes? Did they answer a questionnaire, saying that they felt eudaimonia when proposing genetic determinism? Did their genes determine their own philosophy of mind? If so, then how can anyone believe them? What are universities teaching scientists these days?

Simplistic notions of neo-Darwinism seemed more plausible before new techniques uncovered the evidence of splendid design going on in cells. If the trend continues, 2019 will be a great year for intelligent design.