Search This Blog

Saturday 19 January 2019

Romancing the theory?


Beauty ≠ truth

Scientists prize elegant theories, but a taste for simplicity is a treacherous guide. And it doesn’t even look good
Albert Einstein's theory of general relativity is a century old next year and, as far as the test of time is concerned, it seems to have done rather well. For many, indeed, it doesn’t merely hold up: it is the archetype for what a scientific theory should look like. Einstein’s achievement was to explain gravity as a geometric phenomenon: a force that results from the distortion of space-time by matter and energy, compelling objects – and light itself – to move along particular paths, very much as rivers are constrained by the topography of their landscape. General relativity departs from classical Newtonian mechanics and from ordinary intuition alike, but its predictions have been verified countless times. In short, it is the business.
Einstein himself seemed rather indifferent to the experimental tests, however. The first came in 1919, when the British physicist Arthur Eddington observed the Sun’s gravity bending starlight during a solar eclipse. What if those results hadn’t agreed with the theory? (Some accuse Eddington of cherry-picking the figures anyway, but that’s another story.) ‘Then,’ said Einstein, ‘I would have been sorry for the dear Lord, for the theory is correct.’
That was Einstein all over. As the Danish physicist Niels Bohr commented at the time, he was a little too fond of telling God what to do. But this wasn’t sheer arrogance, nor parental pride in his theory. The reason Einstein felt general relativity must be right is that it was too beautiful a theory to be wrong.

This sort of talk both delights today’s physicists and makes them a little nervous. After all, isn’t experiment – nature itself – supposed to determine truth in science? What does beauty have to do with it? ‘Aesthetic judgments do not arbitrate scientific discourse,’ the string theorist Brian Greene reassures his readers in The Elegant Universe (1999), the most prominent work of physics exposition in recent years. ‘Ultimately, theories are judged by how they fare when faced with cold, hard, experimental facts.’ Einstein, Greene insists, didn’t mean to imply otherwise – he was just saying that beauty in a theory is a good guide, an indication that you are on the right track.
Einstein isn’t around to argue, of course, but I think he would have done. It was Einstein, after all, who said that ‘the only physical theories that we are willing to accept are the beautiful ones’. And if he was simply defending theory against too hasty a deference to experiment, there would be plenty of reason to side with him – for who is to say that, in case of a discrepancy, it must be the theory and not the measurement that is in error? But that’s not really his point. Einstein seems to be asserting that beauty trumps experience come what may.
He wasn’t alone. Here’s the great German mathematician Hermann Weyl, who fled Nazi Germany to become a colleague of Einstein’s at the Institute for Advanced Study in Princeton: ‘My work always tries to unite the true with the beautiful; but when I had to choose one or the other, I usually chose the beautiful.’ So much for John Keats’s ‘Beauty is truth, truth beauty.’ And so much, you might be tempted to conclude, for scientists’ devotion to truth: here were some of its greatest luminaries, pledging obedience to a different calling altogether.
Was this kind of talk perhaps just the spirit of the age, a product of fin de siècle romanticism? It would be nice to think so. In fact, the discourse about aesthetics in scientific ideas has never gone away. Even Lev Landau and Evgeny Lifshitz, in their seminal but pitilessly austere midcentury Course of Theoretical Physics, were prepared to call general relativity ‘probably the most beautiful of all existing theories’. Today, popularisers such as Greene are keen to make beauty a selling point of physics. Writing in this magazine last year, the quantum theorist Adrian Kent speculated that the very ugliness of certain modifications of quantum mechanics might count against their credibility. After all, he wrote, here was a field in which ‘elegance seems to be a surprisingly strong indicator of physical relevance’.
We have to ask: what is this beauty they keep talking about?
Some scientists are a little coy about that. The Nobel Prize-winning physicist Paul Dirac agreed with Einstein, saying in 1963 that ‘it is more important to have beauty in one’s equations than to have them fit experiment’ (how might Greene explain that away?). Yet faced with the question of what this all-important beauty is, Dirac threw up his hands. Mathematical beauty, he said, ‘cannot be defined any more than beauty in art can be defined’ – though he added that it was something ‘people who study mathematics usually have no difficulty in appreciating’. That sounds rather close to the ‘good taste’ of his contemporaneous art critics; we might fear that it amounts to the same mixture of prejudice and paternalism.


Given this history of evasion, it was refreshing last November to hear the theoretical physicist Nima Arkani-Hamed spell out what ‘beauty’ really means for him and his colleagues. He was talking to the novelist Ian McEwan at the Science Museum in London, during the opening of the museum’s exhibition on the Large Hadron Collider. ‘Ideas that we find beautiful,’ Arkani-Hamed explained, ‘are not a capricious aesthetic judgment’:
It’s not fashion, it’s not sociology. It’s not something that you might find beautiful today but won’t find beautiful 10 years from now. The things that we find beautiful today we suspect would be beautiful for all eternity. And the reason is, what we mean by beauty is really a shorthand for something else. The laws that we find describe nature somehow have a sense of inevitability about them. There are very few principles and there’s no possible other way they could work once you understand them deeply enough. So that’s what we mean when we say ideas are beautiful.
Does this bear any relation to what beauty means in the arts? Arkani-Hamed had a shot at that. Take Ludwig van Beethoven, he said, who strove to develop his Fifth Symphony in ‘perfect accordance to its internal logical structure’.
Beethoven is indeed renowned for the way he tried out endless variations and directions in his music, turning his manuscripts into inky thickets in his search for the ‘right’ path. Novelists and poets, too, can be obsessive in their pursuit of the mot juste. Reading the novels of Patrick White or the late works of Penelope Fitzgerald, you get the same feeling of almost logical necessity, word by perfect word.
But you notice this quality precisely because it is so rare. What generally brings a work of art alive is not its inevitability so much as the decisions that the artist made. We gasp not because the words, the notes, the brushstrokes are ‘right’, but because they are revelatory: they show us not a deterministic process but a sensitive mind making surprising and delightful choices. In fact, pure mathematicians often say that it is precisely this quality that delights them in a great proof: not that it is correct but that it shows a personal, tangibly human genius taking steps in a direction we’d never have guessed.
‘The things that we find beautiful today we suspect would be beautiful for all eternity’: here is where Arkani-Hamed really scuppers the notion that the kind of beauty sought by science has anything to do with the major currents of artistic culture. After all, if there’s one thing you can say about beauty, it is that the beholder has a lot to do with it. We can still find beauty in the Paleolithic paintings at Lascaux and the music of William Byrd, while admitting that a heck of a lot of beauty really is fashion and sociology. Why shouldn’t it be? How couldn’t it be? We still swoon at Jan van Eyck. Would van Eyck’s audience swoon at Mark Rothko?
The gravest offenders in this attempted redefinition of beauty are, of course, the physicists. This is partly because their field has always been heir to Platonism – the mystical conviction of an orderly cosmos. Such a belief is almost a precondition for doing physics in the first place: what’s the point in looking for rules unless you believe they exist? The MIT physicist Max Tegmark now goes so far as to say that mathematics constitutes the basic fabric of reality, a claim redolent of Plato’s most extreme assertions in Timaeus.


But Platonism will not connect you with the mainstream of aesthetic thought – not least because Plato himself was so distrustful of art (he banned the lying poets from his Republic, after all). Better that we turn to Immanuel Kant. Kant expended considerable energies in his Critique of Judgment (1790) trying to disentangle the aesthetic aspects of beauty from the satisfaction one feels in grasping an idea or recognising a form, and it does us little good to jumble them up again. All that conceptual understanding gives us, he concluded, is ‘the solution that satisfies the problem… not a free and indeterminately final entertainment of the mental powers with what is called beautiful’. Beauty, in other words, is not a resolution: it opens the imagination.
Physicists might be the furthest gone along Plato’s trail, but they are not alone. Consider the many chemists whose idea of beauty seems to be dictated primarily by the molecules they find pleasing – usually because of some inherent mathematical symmetry, such as in the football-shaped carbon molecule buckminsterfullerene (strictly speaking, a truncated icosahedron). Of course, this is just another instance of mathematics-worship, yoking beauty to qualities of regularity that were not deemed artistically beautiful even in antiquity. Brian Greene claims: ‘In physics, as in art, symmetry is a key part of aesthetics.’ Yet for Plato it was precisely art’s lack of symmetry (and thus intelligibility) that denied it access to real beauty. Art was just too messy to be beautiful.
In seeing matters the other way around, Kant speaks for the mainstream of artistic aesthetics: ‘All stiff regularity (such as approximates to mathematical regularity) has something in it repugnant to taste.’ We weary of it, as we do a nursery rhyme. Or as the art historian Ernst Gombrich put it in 1988, too much symmetry ensures that ‘once we have grasped the principle of order… it holds no more surprise’. Artistic beauty, Gombrich believed, relies on a tension between symmetry and asymmetry: ‘a struggle between two opponents of equal power, the formless chaos, on which we impose our ideas, and the all-too-formed monotony, which we brighten up by new accents’. Even Francis Bacon (the 17th-century proto-scientist, not the 20th-century artist) understood this much: ‘There is no excellent beauty that hath not some strangeness in the proportion.’
Perhaps I have been a little harsh on the chemists – those cube- and prism-shaped molecules are fun in their own way. But Bacon, Kant and Gombrich are surely right to question their aesthetic merit. As the philosopher of chemistry Joachim Schummer pointed out in 2003, it is simply parochial to redefine beauty as symmetry: doing so cuts one off from the dominant tradition in artistic theory. There’s a reason why our galleries are not, on the whole, filled with paintings of perfect spheres.
Why shouldn’t scientists be allowed their own definition of beauty? Perhaps they should. Yet isn’t there a narrowness to the standard that they have chosen? Even that might not be so bad, if their cult of ‘beauty’ didn’t seem to undermine the credibility of what they otherwise so strenuously assert: the sanctity of evidence. It doesn’t matter who you are, they say, how famous or erudite or well-published: if your theory doesn’t match up to nature, it’s history. But if that’s the name of the game, why on earth should some vague notion of beauty be brought into play as an additional arbiter?


Because of experience, they might reply: true theories are beautiful. Well, general relativity might have turned out OK, but plenty of others have not. Take the four-colour theorem: the proposal that it is possible to colour any arbitrary patchwork in just four colours without any patches of the same colour touching one another. In 1879 it seemed as though the British mathematician Alfred Kempe had found a proof – and it was widely accepted for a decade, because it was thought beautiful. It was wrong. The current proof is ugly as heck – it relies on a brute-force exhaustive computer search, which some mathematicians refuse to accept as a valid form of demonstration – but it might turn out to be all there is. The same goes for Andrew Wiles’s proof of Fermat’s Last Theorem, first announced in 1993. The basic theorem is wonderfully simple and elegant, the proof anything but: 100 pages long and more complex than the Pompidou Centre. There’s no sign of anything simpler.
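The theorem’s claim is easy to state and, for any particular map, easy to verify: model regions as graph nodes, adjacencies as edges, and check that no edge joins two regions of the same colour while at most four colours are used. A minimal sketch of that check (the toy map and its adjacency pairs below are invented for illustration):

```python
# Verify that a proposed colouring of a "map" is a proper four-colouring:
# no two adjacent regions share a colour, and at most four colours appear.
def is_valid_four_colouring(adjacencies, colouring):
    if len(set(colouring.values())) > 4:
        return False
    return all(colouring[a] != colouring[b] for a, b in adjacencies)

# A toy map of five regions; each pair lists two regions sharing a border.
adjacencies = [("A", "B"), ("A", "C"), ("B", "C"), ("B", "D"),
               ("C", "D"), ("C", "E"), ("D", "E")]
colouring = {"A": 1, "B": 2, "C": 3, "D": 1, "E": 2}

print(is_valid_four_colouring(adjacencies, colouring))  # → True
```

Checking a given colouring is trivial; the hard part – the part Kempe got wrong and the computer search brute-forced – is proving that such a colouring exists for every possible planar map.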
It’s not hard to mine science history for theories and proofs that were beautiful and wrong, or complicated and right. No one has ever shown a correlation between beauty and ‘truth’. But it is worse than that, for sometimes ‘beauty’ in the sense that many scientists prefer – an elegant simplicity, to put it in crude terms – can act as a fake trump card that deflects inquiry. In one little corner of science that I can claim to know reasonably well, an explanation from 1959 for why water-repelling particles attract when immersed in water (that it’s an effect of entropy, there being more disordered water molecules when the particles stick together) was so neat and satisfying that it continues to be peddled today, even though the experimental data show that it is untenable and that the real explanation probably lies in a lot of devilish detail.
Might it even be that the marvellous simplicity and power of natural selection strikes some biologists as so beautiful an idea – an island of order in a field otherwise beset with caveats and contradictions – that it must be defended at any cost? Why else would attempts to expose its limitations, exceptions and compromises still ignite disputes pursued with near-religious fervour?
The idea that simplicity, as distinct from beauty, is a guide to truth – the idea, in other words, that Occam’s Razor is a useful tool – seems like something of a shibboleth in itself. As these examples show, it is not reliably correct. Perhaps it is a logical assumption, all else being equal. But it is rare in science that all else is equal. More often, some experiments support one theory and others another, with no yardstick of parsimony to act as referee.
We can be sure, however, that simplicity is not the ultimate desideratum of aesthetic merit. Indeed, in music and visual art, there appears to be an optimal level of complexity below which preference declines. A graph of enjoyment versus complexity has the shape of an inverted U: there is a general preference for, say, ‘Eleanor Rigby’ over both ‘Baa Baa Black Sheep’ and Pierre Boulez’s Structures Ia, just as there is for lush landscapes over monochromes. For most of us, our tastes eschew the extremes.
Ironically, the quest for a ‘final theory’ of nature’s deepest physical laws has meant that the inevitability and simplicity that Arkani-Hamed prizes so highly now look more remote than ever. For we are now forced to contemplate no fewer than 10^500 permissible variants of string theory. It’s always possible that 10^500 minus one of them might vanish at a stroke, thanks to the insight of some future genius. Right now, though, the dream of elegant fundamental laws lies in bewildering disarray.


An insistence that the ‘beautiful’ must be true all too easily elides into an empty circularity: what is true must therefore be beautiful. I see this in the conviction of many chemists that the periodic table, with all its backtracking sequences of electron shells, its positional ambiguities for elements such as hydrogen and unsightly bulges that the flat page can’t constrain, is a thing of loveliness. There, surely, speaks the voice of duty, not genuine feeling. The search for an ideal, perfect Platonic form of the table amid spirals, hypercubes and pyramids has an air of desperation.
Despite all this, I don’t want scientists to abandon their talk of beauty. Anything that inspires scientific thinking is valuable, and if a quest for beauty – a notion of beauty peculiar to science, removed from art – does that, then bring it on. And if it gives them a language in which to converse with artists, rather than standing on soapboxes and trading magisterial insults like C P Snow and F R Leavis, all the better. I just wish they could be a bit more upfront about the fact that they are (as is their wont) torturing a poor, fuzzy, everyday word to make it fit their own requirements. I would be rather thrilled if the artist, rather than accepting this unified pursuit of beauty (as Ian McEwan did), were to say instead: ‘No, we’re not even on the same page. This beauty of yours means nothing to me.’
If, on the other hand, we want beauty in science to make contact with aesthetics in art, I believe we should seek it precisely in the human aspect: in ingenious experimental design, elegance of theoretical logic, gentle clarity of exposition, imaginative leaps of reasoning. These things are not vital for a theory that works, an experiment that succeeds, an explanation that enchants and enlightens. But they are rather lovely. Beauty, unlike truth or nature, is something we make ourselves.


Philip Ball will be appearing in London on July 7 to talk about this article. Discounted tickets are available for Aeon readers. This event is organised by The Browser in association with Aeon and Prospect Magazine.

Saturday 12 January 2019

Neo-Darwinism's star witness defects.

Genetics and Epigenetics — New Problems for Darwinism
Evolution News @DiscoveryCSC


New findings in genetics and epigenetics are creating new problems for evolution. The simplistic version of neo-Darwinism expects all variation to come from genetic mutations, which nature selects for fitness. Non-coding DNA was relegated to the junk pile — trash left over from natural selection, which favors DNA that codes for proteins. In a notion called subfunctionalization, copies of genes might be free to mutate and become new proteins, or decay into “pseudogenes,” one type of junk DNA. As usual, simplistic theories are often wrong.

How Many Genes?

The Human Genome Project ended with a surprisingly low number of genes. But what if they missed some? Researchers at Yale have been finding genes that were misidentified as non-protein coding due to the methods previous researchers used to annotate them. One of the newly identified genes, they say, plays a key role in the immune system. Are there others?

The findings suggest many more protein-coding genes and functions may be discovered. “A large portion of important protein-coding genes have been missed by virtue of their annotation,” said first author Ruaidhri Jackson. Without vetting and identifying these genes, “we can’t fully understand the protein-coding genome or adequately screen genes for health and disease purposes.” 

The first sentence of their paper in Nature says, “The annotation of the mammalian protein-coding genome is incomplete.” They have identified a “large number of RNAs that were previously annotated as ‘non-protein coding,’” some of which are “potentially important transcripts” able to make protein. Restrictive methods in the past “may obscure the essential role of a multitude of previously undiscovered protein-coding genes.”

Epigenetics in Archaea

Do epigenetic inheritance and regulation work only in eukaryotes? No. Scientists at the University of Nebraska-Lincoln discovered that members of the “simple” kingdom of Archaea also have them. They watched microbes inherit extreme acid resistance in Yellowstone hot springs not through genetics, but through epigenetics.
    “The surprise is that it’s in these relatively primitive organisms, which we know to be ancient,” said Blum, Charles Bessey Professor of Biological Sciences at Nebraska. “We’ve been thinking about this as something (evolutionarily) new. But epigenetics is not a newcomer to the planet.”

The discovery “raises questions … about how both eukaryotes and archaea came to adopt epigenetics as a method of inheritance.” Now they have to confront whether an even earlier common ancestor had it, or whether it evolved twice. “It’s a really interesting concept from an evolutionary perspective,” said a doctoral student involved in the research. Critics of neo-Darwinism might describe those alternatives differently from just “interesting.” Ridiculous, perhaps, or falsifying.

Epigenetics in Plants

Briefly, a paper in PNAS finds that “Partial maintenance of organ-specific epigenetic marks during plant asexual reproduction leads to heritable phenotypic variation.” Why do clones, with identical genomes, differ? The answer is epigenetics.
   We found that phenotypic novelty in clonal progeny was linked to epigenetic imprints that reflect the organ used for regeneration. Some of these organ-specific imprints can be maintained during the cloning process and subsequent rounds of meiosis. Our findings are fundamental for understanding the significance of epigenetic variability arising from asexual reproduction and have significant implications for future biotechnological applications.
    
Non-Genetic Order

Here’s a cellular phenomenon that really is interesting, because it reveals a newly discovered structural order in the cell membrane. This structural order surely is inherited somehow, but may have little to do with genes. Biochemists had thought for a century that the inner space in the membrane is fluid and disordered, but techniques to probe that space have been difficult because the detergents used disrupt the membrane. Now, researchers at Virginia Commonwealth University, in conjunction with Nobel laureate Joachim Frank, used a new method without detergents. They were surprised — no, startled — to find an orderly hexagonal 3-D structure between the molecules in the lipid bilayer. Is there a reason for this orderly structure?
     Where earlier models had shown a fluid, almost structureless lipid layer — one often-cited research paper compared it to different weights of olive oil poured together — the VCU-led team was startled to find a distinct hexagonal structure inside the membrane. This led the researchers to propose that the lipid layer might act as both sensor and energy transducer within a membrane-protein transporter.

“The most surprising outcome is the high order with which lipid molecules are arranged, and the idea they might even cooperate in the functional cycle of the export channel,” said Joachim Frank, Ph.D., of Columbia University, a 2017 Nobel laureate in chemistry and co-author of the paper. “It is counterintuitive since we have learned that lipids are fluid and disordered in the membrane.”
Their paper in PNAS says nothing about genetics, so maybe this comes about through physical interactions of the lipids and the protein channels. Whatever causes this orderly arrangement, it appears to interact with transmembrane channels, adapting to the conformational changes of the proteins, particularly a transporter called AcrB. Without the hexagonal mesh around the channel, and with just a disordered fluid, the channel’s action might be less efficient, like a boxer without a sparring partner beating the air. Not only that, the hexagonal mesh also transmits the channel’s activity down the membrane to its neighbors. Fascinating!
       Through defined protein contacts, the lipid bilayer senses the conformational changes that occur in each TM [transmembrane] domain and then transduces effects of these changes through the lipid bilayer to neighboring protomers in a viscous interplay between cavity lipids and the AcrB trimer.
                 
Another Blow to the Central Dogma

Mauro Modesti gives his perspective on a new finding in Science, “A pinch of RNA spices up DNA repair.” The Central Dogma of genetics, which views DNA as the master molecule controlling everything downstream with no feedback, has been suffering since it was first taught in the 1960s. In the same issue of Science, a paper reveals that RNA plays an essential role in DNA repair. What does this mean? Modesti explains,
                   Pryor et al. report the surprising discovery that ribonucleotides are frequently incorporated at broken DNA ends, which enhances repair. This important finding overturns the central dogma of molecular biology by demonstrating that transient incorporation of ribonucleotides in DNA has a biological function.

Genetic Determinism Lives On

The idea that humans are pawns of their genes has a long history, mostly negative. Genetic determinism undermines free will and character, giving people something physical to blame for their problems. Materialists continue the bad habit, though, as shown in this paper in Nature Scientific Reports, “A genetic perspective on the relationship between eudaimonic and hedonic well-being.” The news from the University of Amsterdam puts it bluntly: “Discovery of first genetic variants associated with meaning in life.” But can something as psychological, or even spiritual, as meaning in life be reduced to genes?

They checked DNA samples of 220,000 individuals, and had them answer a questionnaire. The genetic variants, they say, “are mainly expressed in the central nervous system, showing the involvement of different brain areas.” 

“These results show that genetic differences between people not only play a role in differences in happiness, but also in differences in meaning in life. By meaning in life, we mean the search for meaning or purpose of life.”

Did these researchers ever learn that correlation is not causation? Did they inspect their own genes? Did they answer a questionnaire, saying that they felt eudaimonia when proposing genetic determinism? Did their genes determine their own philosophy of mind? If so, then how can anyone believe them? What are universities teaching scientists these days?

Simplistic notions of neo-Darwinism seemed more plausible before new techniques uncovered the evidence of splendid design going on in cells. If the trend continues, 2019 will be a great year for intelligent design.

Sacred cows?

In Europe, Animal Rights Are Steamrolling Religious Freedom
Wesley J. Smith

As Western society secularizes, religious liberty is in danger of becoming passé. Increasingly, jurisdictions are enacting laws in furtherance of legitimate social considerations that, concomitantly, shrivel the freedom of religious believers to live according to their personal faith precepts.


Western Europe is leading the way. Belgium now requires all food animals be stunned before slaughter, which prevents their meat from being declared kosher or halal — hence edible — in accordance with the religious requirements of Judaism and Islam.

A Terrible Bind

Of course, such animal welfare laws are absolutely appropriate. But, until recently, comity was also preserved by allowing limited religious exemptions. Those accommodations are now systematically being removed. From the New York Times story:

Most countries and the European Union allow religious exceptions to the stunning requirement, though in some places — like the Netherlands, where a new law took effect last year, and Germany — the exceptions are very narrow. Belgium is joining Sweden, Norway, Iceland, Denmark and Slovenia among the nations that do not provide for any exceptions.

That puts observant Jews and Muslims in a terrible bind. They can have food shipped from elsewhere, which is more expensive. But what if other countries also ban such practices, or their home countries forbid the import of kosher or halal meat? Believers would be forced to choose between eating meat and violating their religious beliefs.

Some secularists would be just fine with that since these laws don’t infringe their own freedoms, while those who are anti-religious would delight in forcing such hard choices upon believers.

“The Law Is Above Religion”

Some non-religionists even presume to tell the faithful what their rules do and don’t require:
   Ann De Greef, director of Global Action in the Interest of Animals, a Belgian animal rights group, insisted that stunning does not conflict with kosher and halal doctrine, and “they could still consider it ritual slaughtering,” but the religious authorities refuse to accept that.

“They want to keep living in the Middle Ages and continue to slaughter without stunning — as the technique didn’t yet exist back then — without having to answer to the law,” she said. “Well, I’m sorry, in Belgium the law is above religion and that will stay like that.”

That kind of religious intolerance is only going to present in brighter hues going forward. There is great pressure, for example, to ban infant circumcision, a sacred and absolute requirement of Jews, also practiced as a religious duty by many Muslims. Efforts are also afoot to force doctors to participate in abortion and/or euthanasia — even when a doctor considers such acts to be a grievous sin materially impacting their own eternal destinies. I am sure readers can think of many other examples.

Freedom cannot be a one-way street. Steamrolling traditional believers’ faith values is a recipe for tearing society apart.

American healthcare on its deathbed? Pros and cons.

How fish school humans on design.

Fish Teach Humans about Design
Evolution News & Views April 1, 2016 3:48 AM

Why do fish bob their heads back and forth as they swim? Is that wasted movement? Is it an inescapable consequence of undulatory motions during swimming? That's what many scientists used to think. What a team found out reminds us never to assume nature's methods are wasteful.

A new paper in Nature Communications summarizes the find: "Fish optimize sensing and respiration during undulatory swimming." That word optimize has design written all over it, especially when the fish optimizes three things at once:

Previous work in fishes considers undulation as a means of propulsion without addressing how it may affect other functions such as sensing and respiration. Here we show that undulation can optimize propulsion, flow sensing and respiration concurrently without any apparent tradeoffs when head movements are coupled correctly with the movements of the body. This finding challenges a long-held assumption that head movements are simply an unintended consequence of undulation, existing only because of the recoil of an oscillating tail. We use a combination of theoretical, biological and physical experiments to reveal the hydrodynamic mechanisms underlying this concerted optimization. [Emphasis added.]
Using a "bio-inspired physical model" with flow sensors, the team from Harvard and the University of Florida found that head bobbing actually improves swimming efficiency. Then they studied how the fish's lateral line sense improves with the resulting water flow. One might think that the extra motion of the head would confuse the lateral line sense, but the opposite is true.

We discovered that the motions associated with undulation can automatically enhance lateral line sensing on the head by minimizing self-generated stimuli. Fish move their heads in a way that minimizes pressure up to 50%, establishing a twofold greater sensitivity to an external stimulus than would otherwise be possible (Fig. 3a). At swimming speeds up to 2 L s⁻¹, we found a heightened sensitivity around the anterior region of the head, which is where the majority of the encounters related to feeding and locomotion are initiated. We propose that during swimming, fish may not have to rely as extensively on the efferent system to distinguish between external and self-generated stimuli if they rotate their head in an appropriate phase with respect to side-to-side motion.
Simultaneously, this head motion increases the flow across the gills, enhancing respiration. This is the first time respiratory-locomotor coupling has been demonstrated in fish, as it has been in birds, horses, and humans. Scientists used to view undulation and respiration as independent processes. No longer:

Here, we discover that fishes swimming with body undulations also show respiratory-locomotor coupling. Our pressure model reveals that undulation-generated pressures around the mouth and opercula oscillate dramatically. We found that fishes exploit these pressures by timing their respiratory movements accordingly, which likely minimizes the energetic cost of pumping the dense medium of water. High-speed, high-resolution video reveals that respiratory movements are tightly synchronized with head movements (Fig. 3b). When the pressure difference between the outside and inside of the mouth reaches 0.2 mm Hg, fishes open their mouth to allow water to flow in passively. Perhaps not coincidentally, this exact pressure difference is generated by the active buccal expansion of stationary fish. In this way, we hypothesize that swimming fishes exploit self-generated pressures to circumvent the work of buccal pumping.
This is really neat. The undulatory motion of swimming with the fins moves the head back and forth in phase such that the work of breathing is reduced, and the sensitivity of the lateral line is optimized. It's a three-for-one gain with no tradeoff in cost.

Life requires the successful, simultaneous execution of basic physiological functions. The coordination of these functions usually relies on distinct neural networks that run in parallel. Over the past several decades, a number of studies have demonstrated that the passive mechanical properties of the body can simplify individual functions, releasing them from the need for precise neural control. Here, we show that during aquatic axial undulation, head movements can allow seemingly disparate but fundamental functions to be coordinated simultaneously without tradeoffs.
Isn't evolution smart to pull this off? Actually, the authors didn't have much to say about evolution. Their only mention of evolution seems to falsify its expectations:

Given that the respiratory system is located in the head and the locomotory system is associated with the trunk, it is not unreasonable to assume that respiration and swimming would be decoupled. The contemporary viewpoint is that the origin of the lung enabled respiratory-locomotor coupling to evolve in terrestrial animals.
Here, we discover that fishes swimming with body undulations also show respiratory-locomotor coupling....

But why would evolution optimize two or three things at once? Selection for traits can only act on immediate benefit from a random mutation. Like they say, "it is not unreasonable to assume" that selection for benefit in one trait would be independent of selection for other traits. The "contemporary view" may be that the lung "enabled" coupling, but if that were a useful idea, they would have said more about it. They didn't. We know from experience, however, that when engineers succeed in optimizing multiple things at once without tradeoffs, they win prizes and promotions for intelligent work.

More evidence that this work supports intelligent design is seen in their desire to imitate it. "The power of this simple control architecture is that it can be universally applied to any size and species of undulating fish, as well as to autonomous, underwater vehicles," they note. Yet the salmon seen bobbing their heads in Living Waters beat engineers to it. Engineers can just imitate what they see and win a design prize.

Non-Clogging Filters

Another case of intelligent design was announced in a second paper in Nature Communications. Biologists from the College of William and Mary liked this design so much, they immediately thought of how to apply it. Notice that the design is found in birds and mammals as well as fish.

Suspension-feeding fishes such as goldfish and whale sharks retain prey without clogging their oral filters, whereas clogging is a major expense in industrial crossflow filtration of beer, dairy foods and biotechnology products. Fishes' abilities to retain particles that are smaller than the pore size of the gill-raker filter, including extraction of particles despite large holes in the filter, also remain unexplained. Here we show that unexplored combinations of engineering structures (backward-facing steps forming d-type ribs on the porous surface of a cone) cause fluid dynamic phenomena distinct from current biological and industrial filter operations. This vortical cross-step filtration model prevents clogging and explains the transport of tiny concentrated particles to the oesophagus using a hydrodynamic tongue. Mass transfer caused by vortices along d-type ribs in crossflow is applicable to filter-feeding duck beak lamellae and whale baleen plates, as well as the fluid mechanics of ventilation at fish gill filaments.
A hydrodynamic tongue -- what a concept! Fish "engineer" previously unknown flow patterns to transport the particles they need into their esophagus. Those humpback whales seen in Living Waters use this technique as they gulp krill with their huge mouths, and the small tropical fish do it with their gills. The ducks in Flight: The Genius of Birds do it with their beaks. Who taught a fish, a duck, and a whale about fluid dynamics? It must have been natural selection. Tell us, please, how that came about:

In addition to the ecological and evolutionary relevance, these problems are of substantial interest to industrial filtration engineers who seek to reduce the major operating expenses associated with clogging.
One reads with bated breath for an evolutionary explanation that never comes.

As more than 30,000 fish species possess branchial arches that may form d-type ribs, potential vortex formation in the slots between branchial arches has substantial implications for the fluid dynamics of fish feeding and ventilation throughout ontogeny and evolution. Vortical cross-step filtration could be applicable to feeding in a diversity of fish species. In addition, many filtration structures involved in vertebrate suspension feeding are composed of d-type ribs in crossflow, including fish gill rakers, tadpole gill filters, bird beak lamellae and whale baleen plates, suggesting that principles of vortical cross-step filtration could have widespread application.
And that's it. That's all they have to say about evolution. This beautifully designed trait, so envied by engineers, is found in all these unrelated animals. How? Because "principles of vortical cross-step filtration" work, and are found all over the animal kingdom, they must have evolved. Does that make any sense?

It should be clear to anyone that intelligent design did the heavy lifting in both papers. Evolution played no role in the experimental setup, the explanation, or the application in either case. As usual, evolution only tags along in the role of post-hoc narrative gloss.

The 'talking ape' vs. Darwin

Language as an Evolutionary Conundrum
David Klinghoffer February 26, 2016 6:10 AM

In Chapter 10 of his new book Evolution: Still a Theory in Crisis, Michael Denton argues for the proposition that language and the higher intellectual faculties -- the gifts that uniquely make us human -- arose by saltation. In other words, they are gifts -- sudden ones. Denton's view, as he makes clear, has precedents reaching from Alfred Russel Wallace to linguist Noam Chomsky.

In a nice coincidence, Chomsky and MIT colleague Robert C. Berwick are just out with a book of their own, from MIT Press, provocatively titled Why Only Us: Language and Evolution. To be sure, Chomsky and Berwick are not advocates of Denton's structuralist take on the theory of intelligent design. Still, their own argument for language by saltation is not hard to reconcile with Denton's view.

The recognition that language poses a problem for Darwinian gradualism is presumably what makes linguist Vyvyan Evans uneasy about the book, which Dr. Evans reviews in New Scientist:

Their argument goes like this. As our capability for grammar is genetically programmed, and as no other species has language, it stands to reason that language emerged fairly suddenly, in one fell swoop, because of a random mutation. This is what the authors refer to as the "gambler's-eye view" in contrast to a "gene's-eye view" of evolution. The sudden appearance of language occurred perhaps no more than 80,000 years ago, just before modern humans engaged in an out-of-Africa dispersion.

A sudden "random mutation"...

But to be convinced by this, the reader has to swallow a number of sub-arguments that are debatable at best. For one thing, the authors presume the Chomskyan model of human language -- that the rudiments of human grammar (or syntax) are unlearnable without an innate knowledge of grammar. Its position seems less reasonable today than it once did.

Remember, as surly geneticist Dan Graur formulates the 12th and final of his principles of neo-Darwinism (Evolution News pointed this out yesterday), "Homo sapiens does not occupy a privileged position in the grand evolutionary scheme." A sudden gift, mutation, call it what you will, endowing our ancient ancestors alone with language is thus, on principle, to be disallowed. Language must be shared with other, non-human creatures. And so it is, Evans assures readers.

[R]esearch in primatology and animal behaviour suggests that some of the precursors for language do exist in other species, ranging from European starlings to chimpanzees -- with the latter using a sophisticated gestural form of communication in the wild. In fact, gesture may well have been the medium that incubated language until ancestral humans evolved the full-blown capacity for it.

Yet no one would confuse the most eloquent chimp "gestures" with modern sign language. That leaves in place the question of where language, whether communicating through hand or mouth, came from.

The "scientific consensus" cannot accept saltations of such a staggering kind:

Ultimately, Why Only Us is something of a curiosity. It takes a reverse engineering perspective on the question of how language evolved. It asks, what would language evolution amount to if the Chomskyan proposition of universal grammar were correct? The answer is language as a mutation that produces a phenotype well outside the range of variation previously existing in the population -- a macromutation. This flies in the face of the scientific consensus. Indeed, the book attempts to make a virtue of disagreeing with almost everyone on how language evolved.

Evans makes an interesting point. If the sudden mutation occurred in one person, it would provide no benefit since there would be no one to talk to. Did the "random mutation," the gift, then occur in a pair of individuals, living in the same time and place? Don't even think of going there. All parties to the argument are agreed on that. Evans:

The reader is asked to swallow the following unlikely implication of their logic: language didn't evolve for communication, but rather for internal thought. If language did evolve as a chance mutation, without precedent, then it first emerged in one individual. And what is the value of language as a communicative tool when there is no one else to talk to? Hence, the evolutionary advantage of language, once it emerged, must have been for something else: assisting thought.

For the spectator, it's not without pleasure to see evolutionists going at each other this way. Evans accuses Chomsky and Berwick of "reverse engineering" -- but more orthodox Darwinian "perspectives" do the very same thing. They assume the negation of the human exceptionalist view and impose that principle, as Evolution News suggested, on whatever is observed.

Every take on the origin of language that leaves the creative work entirely to one or more "random mutations" is doomed. We will be excerpting Denton's Chapter 10 in good time. Stay tuned.

Editor's note: Get your copy of Evolution: Still a Theory in Crisis now. For a limited time, you'll enjoy a 30 percent discount at CreateSpace by using the discount code QBDHMYJH.

Free dessert as well?

Get Out of Jail Free: Playing Games in an RNA World
Evolution News & Views September 23, 2013 5:20 AM

Four Darwinian mathematicians and biologists from New York University (one from Puerto Rico) think that RNA molecules played games to invent life. Even if the RNA could spontaneously form, why would mindless molecules scheme to create a universal, nearly optimal genetic code via a pointless game?

Jee, Sundstrom, Massey and Mishra, writing in the Royal Society Interface, ask, "What can information-asymmetric games tell us about the context of Crick's 'frozen accident'?" Francis Crick viewed the origin of the genetic code as an accident that caught on and became universal. But how did gene sequences become associated with polypeptide sequences having function? They know that the genetic code, as is, is pretty darn good:

The genetic code, the mapping of nucleic acid codons to amino acids via a set of tRNA and aminoacylation machinery, is near-universal and near-immutable. In addition, the code is also near-optimal in terms of error minimization, i.e. tRNAs recognizing similar codons may be mistaken for each other during translation, yet these mistakes often have no negative impact on translation because similar codons map to identical amino acids or ones with similar physiochemical properties. Biochemists have long wondered: If immutability and universality were early properties (i.e. the genetic code was a "frozen accident"), then how could natural selection encourage error-minimization? If selection for an error minimizing genetic code predated immutability and universality, then why is the standard code less than optimal? (Emphasis added.)

Although "numerous models have been proposed" to explain this "apparent paradox," they each have problems, such as "premature freezing" of the code, or in the case of neutral evolution, inability to explain the code's universality. So these guys enter the fray.

Like Crick, they know that hitting upon a functional enzyme by chance in the space of random polypeptides is improbable to the extreme:

Because of the relative length and complexity of modern enzymes, it may be possible that the earliest peptides were not enzymes in the traditional sense. To "accidentally" stumble upon genes encoding such enzymes at the same time an error minimizing code occurred by chance, as suggested by Crick, has vanishingly small probability.

Their job, therefore, is to find pointless polypeptides associating with pointless polynucleotides in some sort of "signaling game" that makes them both "help" each other over time until universality, immutability and optimality reach an equilibrium that just happens to be near maximum. Their very helpful tool in this endeavor is game theory:

As suggested by Maynard-Smith, games in a biological setting, unlike traditional ones in game theory, might not require "rational agents." A population of animals of the same species, for instance, may over the course of evolution behave according to game-theoretic principles even though none of those animals is a "rational agent," in a traditional sense. A species may "learn" over evolutionary time to select certain behaviors through random mutations, genetic drift, and selection, and ultimately reach a Nash equilibrium, in this case defined as an evolutionarily stable state in which each agent does not deviate strategies so long as all other agents in the system also do not deviate from their adopted strategies. "Utility" in the game-theoretic sense physically manifests as reproductive fitness.

They put "utility" in quotes, because it takes a rational agent to determine what is useful. What they are looking for is an equilibrium between mindless players aiming nowhere. Life and optimal coding become incidental byproducts of the equilibrium. Is there any other chemical reaction in nature that arrives at such coding specificity without trying? One might get an oscillation between states, but not a code that specifies a function.

Overall this paper presents a framework for studying signaling game dynamics in instances where both message length and distortion are factors in the utility of both senders and receivers. Although we have applied the framework here primarily to the evolution of the genetic code, similar analyses might be applied to the evolution of many other seemingly fixed processes, where the evolutionary clock appears to have frozen a biological process prematurely to an arbitrary conventional structure.

Well, best of luck. We find them personifying the molecules. The molecules adopt "strategies." They "learn" over evolutionary time. They send "information" or receive it, as they "signal" each other with "messages." Does this make any sense? Take out the words implying personality, goal and purpose, and the idea seems silly, much more so than for antelope strategizing to outwit a lion. These are just dumb molecules!

It's not necessary to delve into the equations of their "game," because math cannot rescue a bad premise. What we find them doing is weaving a fantastic tale in their own imaginations, starting with already-existing complex molecules in a mythical RNA world (which has its own problems).

It is usually hypothesized that the genetic code formed in the context of an RNA world, gradually exposed to an emerging amino acid world. We envision a scenario with two agents: proto-mRNA (strings of codons with information) and sets of proto-tRNA (RNAs with distinct anticodons, each able to bind a particular amino acid). In a given generation proto-mRNA and a particular set of proto-tRNA interact. The pair replicates via RNA replicase ribozymes. However, they may also chemically aid their own replication through the accurate production of proteins (possible identities of these proteins are stipulated in Discussion).

These gamers assume the existence of (1) RNA ribozymes capable of replication, (2) information, (3) transfer RNA with distinct anticodons, (4) accurate production of proteins. Who, we might ask, "usually hypothesized" such things? They should be dismissed from the science lab on account of "envisioning scenarios" instead of doing real chemistry.

Many other problems are completely ignored or glossed over in their visionary scenario, such as the problem of getting one-handed amino acids and sugars by chance. They also assume that natural selection would operate at the scale of molecules in an RNA world before life -- a fallacy, because natural selection requires not just replication, but accurate replication, accurate enough to avoid error catastrophe.
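The accurate-replication constraint is quantitative. Eigen's error-threshold relation says a master sequence with replicative superiority sigma survives selection only while its genome length L stays under ln(sigma)/u, where u is the per-base error rate. A back-of-the-envelope sketch (the error rate and fitness advantage below are hypothetical values for an unaided RNA replicator, not figures from the paper):

```python
# Eigen's error threshold: a replicator of genome length L with per-base
# error rate u and fitness superiority sigma is maintained by selection
# only while L < ln(sigma) / u.
import math

def max_genome_length(u, sigma):
    """Longest genome selection can maintain at per-base error rate u."""
    return math.log(sigma) / u

# Hypothetical unaided RNA copying at ~1% error per base, 10-fold advantage:
print(round(max_genome_length(0.01, 10)))  # about 230 bases
```

This is the familiar Eigen paradox: high copying fidelity requires a long, well-made replicase, but maintaining a long replicase requires high copying fidelity in the first place.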

The news release from New York University, as expected, sanctifies this proposal as the inspired work of genius professors. It also won the uncritical acclaim of Science Daily and other news outlets: "Researchers have created a model that may explain the complexities of the origins of life." Be sure to thank the NSF for funding this paper in a down economy.

Well, It Could Happen

Throughout this weird paper, the authors display reckless imagination with frequent assertions that various miracles of chance "could" or "may" or "might" happen. (If a pig had wings, we all know, it "could" fly, provided it also had flight muscles, feathers, avian lungs, and all -- watch Flight.) Added to the heavy spicing of "possibility" words, they frequently endow the molecules with goal-directed behavior, personifying them as willing game players. Here is but one egregious example from the abstract:

Such a framework suggests that cellularity may have emerged to encourage coordination between RNA species and sheds light on other aspects of RNA world biochemistry yet to be fully understood.

So, out of nowhere, "cellularity emerges" to "encourage coordination." Are you seeing any light that has been shed yet? Later, the personification, assumed goal-seeking, and speculation get even worse:

The model presented here demonstrates that the modern genetic code evolved most likely by a combination of previously hypothesized forces, involving neutral and selective evolution. Whereas a natural predisposition toward an error-minimizing code is not a necessary condition for an optimized genetic code, neutral evolution may have been an important force in establishing universality. At the same time, selective pressure can provide a powerful impetus for a genetic code to move toward error-minimization and, somewhat surprisingly, also enforce its immutability so as to maintain compatibility with the genome.

Who does the enforcing? Who does the establishing? Who does the maintaining? Who follows an impetus to move toward error minimization? What is an error, anyway, to a mindless molecule? This is crazy, but not too crazy for the Royal Society to publish.

They get away with this because it fits the requirement of naturalism: "No intelligence allowed." Within that constraint, they follow Finagle's 6th Rule: "Do not believe in miracles. Rely on them."

Good-bye, RNA World

The authors feel somewhat justified in "envisioning" their make-believe "scenario" on the grounds that "Evidence for such a world [RNA world]... is growing." Too bad this paper came out about the same time that Steven Benner, a veteran origin-of-life researcher, poured cold water on the idea at the Goldschmidt Conference in Florence in August. Here's what he said happens to ribose (an essential sugar for RNA) and other biomolecules when exposed to the watery conditions assumed on the early earth, according to an NBC News article:

The early environment on Earth, however, was challenging to the rise of life as we know it, at least in Benner's view. One of the biggest challenges has to do with the process by which organic molecules gave rise to life's chemical building blocks: RNA, DNA and proteins.

If left to themselves, adding energy to organic molecules just tends to turn them into tar or an oily substance. That's what Benner calls the "tar paradox": How could organic materials ever give rise to biopolymers like DNA?

Science Magazine describes the depressing picture:

However and wherever life began, one thing is sure: Its first organic building blocks, called hydrocarbons, had a number of hurdles to clear before evolving into living cells. Fed with heat or light and left to themselves, hydrocarbons tend to turn into useless tarlike substances. And even when complex molecules like RNA (most biologists' best guess for the first genetic molecule) arise, water quickly breaks them down again.

The RNA-world scenario is so hopeless, in fact, that Benner took the extreme step of claiming that life must have formed on Mars (on dry ground under special conditions), and then got transported to earth via meteors. While some reporters leaped onto the sci-fi suggestion that "We may all be Martians!" (e.g., Space.com), thinking people will surely catch the cry of desperation in such a proposal.

Conclusions

So, even if one were willing to grant the time of day to Jee et al.'s "game theory" notion, Darwinians can't even get the starting materials to play with. It would be more realistic for them to start with balls of tar, and racemic biological gunk broken down by water.

Any way you slice it, the "game theory" approach of these imagineers is an exercise in futility. And that's before even thinking rationally about the problem of the origin of genetic information, discussed in depth in Stephen Meyer's Signature in the Cell.

What a crazy world Darwinism and methodological naturalism (MN) have bequeathed us. The way out is to relax the arbitrary MN rule, to think outside the naturalistic box, and once again, to follow the evidence where it leads. Optimized codes do not "arise" from "frozen accidents." From our universal experience, they are products of intelligent design. That's no game. That's no "scenario." It's reality.

Saturday 5 January 2019

On the Aristotelian soul and materialism.

“Emergence” and the Soul
Michael Egnor


Philosopher Tim O’Connor at Indiana University has a fine essay that asks, “Do We Have Souls?” He answers in the affirmative, and he provides insightful critiques of materialism and Cartesian dualism.

On Cartesian dualism:

Descartes [argued] a softened variant of Plato’s mind-body dualism. The material world ultimately consists in material particles wholly governed by mechanical laws of motion. The human soul is an immaterial substance, but (departing from Plato) its existence and proper functioning intimately depends, causally, on the healthy functioning of the brain. It is not naturally immortal; if it survives death, it must be a consequence of God’s sustaining it apart from the body. We still have a sharp dualism: bodies large and small generally operate according to principles distinct in kind from those according to which souls/minds do. Their convergence in the human brain has to be taken as a brute given, a contingent connection perhaps established by the power of God. 

Cartesian dualism, which is a crucial philosophical error despite its dualist assertion, is Platonic in nature. What’s wrong with Cartesian dualism is its abandonment of hylemorphic (matter-form) metaphysics. Descartes describes nature as an unnatural blending of two substances — res cogitans, the thinking substance, and res extensa, which is “matter” defined as that which is extended in space. As metaphysics, it is pitiful, and wholly lacks the coherence and elegant explanatory power of Aristotelian hylemorphism.

Materialism, in its modern form, is essentially Cartesian dualism with the res cogitans discarded. Materialists strip nature of all that is intelligible, and struggle unsuccessfully to provide a coherent understanding of the only thing left: matter, understood as mere extension in space.

O’Connor points out that without res cogitans the materialists’ reduction of Cartesian dualism offers merely a

reductive image of “man a machine.” It is essentially Descartes’ picture of reality minus souls. According to it, human persons, no less than inanimate chunks of the physical world, can be entirely understood (in principle) in terms of the interactions of the body’s basic parts. Psychological states that Descartes assigned to the soul are here taken either to be epiphenomenal — having no influence on other psychological states or bodily behavior — or as (somehow) consisting in complex states of the brain.

Many contemporary thinkers follow [materialists] in dismissing philosophical and religious talk of “the soul” as having no place within our ever-growing scientific knowledge concerning the embodied natures of human persons. But insofar as there is more than one notion of the soul, it may be no less misleading to state simply that there “is no such thing as the soul” than it would be to affirm its existence without qualification — one may be taken to deny not only unwanted associations but also others that one embraces or (as I will suggest) should embrace. Let us take a different, rehabilitative tack and use the word “soul” as a placeholder for whatever underlies the constellation of capacities of thought, emotion, and agency that we observe in mature, fully functioning human beings. Then our question shifts from the categorical Do we have souls? to the open-ended What is the nature of “the” soul (or “ensoulment”) and its current and future limits? This way of posing our question invites us to consider answers lying between the extremes offered by Descartes and [materialists].

O’Connor offers an Aristotelian alternative. He notes that materialist philosophers and scientists

have lost sight of the “Aristotelian” alternative. Aristotle’s specific philosophical account of objects as form-matter compounds is no more appealing to many of us than are his antiquated physics and biology. But his broader nonreductionist, nondualistic vision is very much worth developing in contemporary terms. A number of scientists and philosophers attracted to this vision have latched onto the term “emergentism,” and I will follow them here. But we should be careful to note that this term has meant different things to different thinkers. Here I mean a view on which human persons, other sentient animals, and possibly a wider array of complex systems are wholly materially composed while having irreducible and efficacious system-level features. These features are originated and sustained by organizational properties of the systems (in animals, by properly functioning brain and nervous systems) while also having in turn causal influence on components of the system in its evolution over time. That is, emergent systems involve an interplay of “bottom-up” and “top-down” causal factors. While they are not fundamental building blocks of the world in the way that fundamental particles or Descartes’s souls would be, they nonetheless are natural unities, causally basic entities.

O’Connor’s endorsement of a general Aristotelian approach to understanding the mind (soul) is welcome, but I am unsympathetic to the concept of “emergence” as having any value in the debate about the nature of the mind and its relation to the material brain. I don’t believe that “emergence” really represents an Aristotelian or Thomistic view, and emergence is fraught with metaphysical muck.

Emergence is generally taken to mean that higher order complex systems have different properties than we can infer from the lower order components of the systems. A classic example is “wetness” as an emergent property of water molecules. There is nothing in our understanding of H2O at the molecular level that would lead us to predict that billions of H2O molecules in the liquid state would feel “wet” on our fingertip. Wet is not a state described or predicted by quantum mechanics. Yet water certainly does feel wet. According to the emergentists, wetness is thus an emergent property of water molecules.

Emergentists describe the mind in an analogous manner. There is nothing in neurotransmitters or the biochemistry of neurons that would lead us to expect thoughts to emerge. Yet when neural tissue is properly organized, thought emerges. The mind, according to emergentists, is an emergent property of brain matter, just as wetness is an emergent property of water molecules.

There are several problems with this view. First, I don’t believe that it is appropriate to ascribe the concept of emergence to Aristotle. Aristotle was an essentialist: he believed that substances have essences that characterize them as a whole, and that the essence (roughly, the definable composite of matter and form) is not reducible to its parts. Aristotle saw the essence, the whole, as fundamental and more real than the parts. Emergentists see it the other way around: the parts are fundamental and most real, and emergent properties (the analogue of Aristotle’s essences) are elaborations on that fundamental (molecular) reality. For the emergentist, the whole emerges from the parts but is not the fundamental thing.

For materialists, emergence is a sort of get-out-of-jail free card. Materialism, taken seriously, is nonsense. Obviously, the mind cannot be explained wholly by reference to matter extended in space. Mental things share nothing — nothing — in common with matter. Thoughts are intentional (refer to other things), private, dimensionless, massless, not composite, etc. Matter is non-intentional (doesn’t inherently refer to anything else), public, has dimensions and mass, is composite, etc. Obviously, materialism as a metaphysical system has nothing to offer for the understanding of the mind.

Faced with this rather obvious impediment, some materialists invoke emergence. It may be true that matter shares nothing in common with thought, the materialist stammers, but thought emerges from matter. It’s a kind of magic that Houdini would admire. When you have no explanation, just say it happens and that explains it.

Emergence explains nothing. It merely means that the materialist has no explanation whatsoever for the mind, and would rather not dwell on the question.

And there are deeper problems with emergence as a theory of mind and as a metaphysical concept.

First, while it might be (marginally) coherent to invoke emergence to explain wetness and similar properties, emergence offers no possible explanation for differences in ontology. Water feels wet when touched, but the emergentist is not claiming that water molecules, in becoming wet, have become a fundamentally different kind of thing. The water molecules haven't changed. They are still just water molecules, nothing more or less. They just feel wet.

But emergence as applied to the brain and mind, unlike its application to water and wetness, asserts an ontological difference. Emergentists assert that brain tissue becomes a completely different kind of thing — a thinking thing — when the mind magically emerges from matter. As a metaphysical concept, emergence applied to water and the like is tenuous enough. It plainly offers no explanation whatsoever for how a completely different kind of thing — mind — is produced by brain matter. That's a bridge too far, even for Houdini.

The second problem for emergence as an explanation for the mind can be understood by considering what we mean when we say that a property of a whole emerges from its parts. What we invariably mean when we talk about emergence in the natural world is that the whole is perceptually different from its parts. Water doesn't really become something different when we take billions of water molecules and put them on our fingertip. It just feels different from what we would have expected, given our knowledge of the physics of individual water molecules.

Emergence always refers to a perceptual or intellectual surprise. A property is said to be emergent if we didn’t expect it to be characteristic of a whole based on our understanding of its parts. Emergence is a mental phenomenon. It is a perceptual surprise, not a magical property somehow evoked by adding a lot of little parts together.

Emergence, as a perceptual surprise, can't explain the mind because emergence presupposes the mind. A genuine explanation can't presuppose that which it purports to explain.

Emergence as a theory of mind is junk philosophy and junk science. It’s circular reasoning — the “emergent” explanation for the mind-brain relationship boils down to “It’s surprising!” The only reason to invoke emergence is to defend materialism from refutation by reality. It is a tactic, not an explanation. As such, it is a favorite of materialists, who, lacking explanations, are in dire need of tactics.