
Wednesday, 17 August 2022

Darwinism continues to devolve.

 Mammoth Support for Devolution

Michael Behe


The more science progresses, the more hapless Darwin seems.


In my 2019 book Darwin Devolves I showed that random mutation and natural selection are powerful de-volutionary forces. That is, they quickly lead to the loss of genetic information. The reason is that, in many environmental circumstances, a species’s lot can be improved most quickly by breaking or blunting pre-existing genes. To get the point across, I used an analogy to a quick way to improve a car’s gas mileage — remove the hood, throw out the doors, get rid of any excess weight. That will help the car go further, but it also reduces the number of features of the car. And it sure doesn’t explain how any of those now-jettisoned parts got there in the first place.


The Bottom Line

The same goes for biology. Helpful mutations that arrive most quickly are very much more likely to degrade genetic features than to construct new ones. The featured illustration in Darwin Devolves was the polar bear, which has accumulated a number of beneficial mutations since it branched off from the brown bear a few hundred thousand years ago. Yet the large majority of those beneficial mutations were degradative — they broke or damaged pre-existing genes. For example, a gene involved in fur pigmentation was damaged, rendering the beast white — that helped; another gene involved in fat metabolism was degraded, allowing the animal to consume lots of seal blubber, its main food in the Arctic — that helped, too. Those mutations were good for the species in the moment — they did improve its chances of survival. But degradative mutations don’t explain how the functioning genes got there in the first place. Even worse, the relentless burning of genetic information to adapt to a changing environment will make a species evolutionarily brittle and more prone to extinction. The bottom line: Although random mutation and natural selection help a species adapt, Darwinian processes can’t account for the origins of sophisticated biological systems.


In Darwin Devolves, I also mentioned work on DNA extracted from frozen woolly mammoth carcasses that showcased devolution: “26 genes were shown to be seriously degraded, many of which (as with polar bear) were involved in fat metabolism, critical in the extremely cold environments that the mammoth roamed.” It turns out that was an underestimate. A new paper1 that sequenced DNA from several more woolly mammoth remains says the true number is more than triple that — 87 genes broken compared with their elephant relatives. The authors write of the advantages provided by destroyed genes (references omitted for readability):


Gene losses as a consequence of indels and deletions can be adaptive and multiple case studies investigating the fate of such variants have uncovered associations between gene loss and mammalian phenotypes under positive selection. In laboratory selection experiments, gene loss is a frequent cause of adaptations to various environmental conditions. Given that we focused on those indels and large deletions that are fixed among woolly mammoths, the majority of these protein-altering variants likely conveyed adaptive effects and may have been under positive selection at some point during mammoth evolution. We did not find specific biological functions overrepresented among these genes (see methods), but many of the affected genes are related to known mammoth-specific phenotypes, such as total body-fat and fat distribution (EPM2A, RDH16, and SEC31B), fur growth and hair follicle shape and size (CD34, DROSHA, and TP63), skeletal morphology (CD44, ANO5, and HSPG2), ear morphology (ILDR1 and CHRD), and body temperature (CES2). In addition, we find several genes associated with body size (ZBTB20, CIZ1, and TTN), which might have been involved in the decreasing size of woolly mammoths during the late Pleistocene.


There’s Lots More

The point is that these gene losses aren’t sideshows — they are the events that transformed an elephant into a mammoth, that adapted the animal to its changing environment. A job well done, yes, but now those genes are gone forever, unavailable to help with the next change of environment. Perhaps that contributed to eventual mammoth extinction.


As quoted above, the mammoth authors note that gene losses can be adaptive, and they cited a paper that I hadn’t seen before. I checked it out, and it’s a wonderful laboratory evolution study of yeast.2 Helsen et al. (2020) used a collection of yeast strains, each with a different single gene knocked out. They grew the knockout yeast in a stressful environment and watched to see how the microbes evolved to handle it. Many of the strains recovered, and some even surpassed the fitness of wild-type yeast under the circumstances. The authors emphasized the fact of the evolutionary recovery. However, they also clearly stated (though they don’t seem to have noticed the importance of the fact) that all of the strains rebounded by breaking other genes, ones that had been intact at the beginning of the experiment. None built anything new; all of them devolved.


Well, Duh

That’s hardly a surprise. At least in retrospect, it’s easy to see that devolution must happen — for the simple reason that helpful degradative mutations are more plentiful than helpful constructive ones, and thus arrive more quickly for natural selection to multiply. The more recent results recounted here just pile further evidence onto that gathered in Darwin Devolves, showing that Darwin’s mechanism is powerfully devolutionary. That simple realization neatly explains results ranging from the evolutionary behavior of yeast in a comfy modern laboratory, to the speciation of megafauna in raw nature millions of years ago, and almost certainly to everything in between.
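The arithmetic behind “arrive more quickly” can be made concrete with a toy calculation. This is a minimal sketch in which the population size, mutation rate, and target sizes are all invented, illustrative assumptions, not measurements from any study:

```python
# Toy "mutational target size" comparison. All numbers are invented,
# illustrative assumptions, not measurements from any study.

POP_SIZE = 1_000_000   # breeding individuals (assumed)
MU = 1e-9              # mutations per site per generation (assumed)

LOF_SITES = 100        # sites where a hit helpfully breaks a gene (assumed)
GOF_SITES = 2          # sites where a hit builds something new (assumed)

def expected_wait(target_sites: int) -> float:
    """Expected generations until the first such mutation arises in the population."""
    return 1.0 / (POP_SIZE * MU * target_sites)

print(f"loss-of-function: ~{expected_wait(LOF_SITES):,.0f} generations")  # ~10
print(f"gain-of-function: ~{expected_wait(GOF_SITES):,.0f} generations")  # ~500
```

Whatever the exact numbers, the asymmetry runs one way: the larger mutational target wins the race to selection.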


References

1. Van der Valk, T., et al. 2022. Evolutionary consequences of genomic deletions and insertions in the woolly mammoth genome. iScience 25, 104826.

2. Helsen, J., et al. 2020. Gene loss predictably drives evolutionary adaptation. Molecular Biology and Evolution 37, 2989–3002.


Saturday, 6 August 2022

Common descent v. Common design again.

 Why Their Separate Ancestry Model Is “Wildly Unrealistic”

Emily Reeves

In my post yesterday I outlined how Erika, aka the popular evolution YouTuber “Gutsick Gibbon,” critiqued my earlier post commenting on Baum et al. (2016), an important paper in the field of phylogenetics that purported to test separate ancestry. Between 7:55 and 9:24 of her response video, Erika shows a diagram (above) to respond to my point that the Baum et al. (2016) paper tested a model of separate ancestry that is not endorsed by anyone in the ID community.

Here’s what Erika, aka Gutsick Gibbon, is saying. In the diagram, she presents two different models of what creationists (left) and intelligent design (ID) proponents (right) might be claiming. (Note that she disagrees with both; she’s just trying to describe what she thinks the groups are saying.) Each “finger” in the diagram is supposed to represent an instance where the designer acted to influence the course of biological history. The left tree is supposed to show the creationist model that she thinks was tested by Baum et al. (2016). She mistakenly thinks that ID proponents are upset because we’re really putting forth some model like the diagram on the right — where a designer creates a group but then allows evolutionary tinkering. So, she thinks we’re upset because Baum et al. (2016) didn’t include tinkering in their model. This is actually not the case.

When I argued that Baum et al. (2016) failed to properly test separate ancestry, that had nothing to do with a failure to incorporate “tinkering” into the model. Also, as a side note, ID proponents do not advocate a “tinkering” hypothesis; this is a common misconception about the ID view. Instead, the primary objection of ID proponents to the Baum et al. (2016) paper concerns how the separate ancestry and family ancestry models were constructed in the first place. In short, Baum et al. (2016) assumes that shuffling of the synapomorphies is an accurate model of separate ancestry. ID proponents and others who take a design-based perspective would heartily reject that, for reasons I will explain.

How They Chose Data for the Separate Ancestry Model
Baum et al. (2016) uses several different datasets to test separate ancestry. Their molecular dataset, our focus here, comes from a 2011 paper by Perelman et al. in which 54 genes were used to construct a molecular phylogeny of living primates. (We’ll call this the “Perelman dataset.”) Primates all have around 30,000 genes, so the first question is how the authors got from 30,000 genes down to 54. Note the details given in Perelman et al. (2011):

A complete list of 54 primer sets used in this study is presented in Table S2. This list includes primers from earlier studies (Murphy et al. 2001), as well as those designed specifically for this study using a unique bioinformatics approach (Pontius, unpublished data). (Perelman et al. 2011)

If you look at Table S2 you can see that the majority (38 of the 54 genes) either came from Murphy et al. (2001) (9 genes) or were designed specifically for the study, with no details given (29 genes). For our purpose, let’s just look at the 9 taken from Murphy et al. (2001), which are described as being selected in the following way:

The GenBank and UniGene databases (NCBI) were searched for genes with exons of sufficient length (>200 bp) and variability (80–95% nucleotide identity between mouse and human), thereby providing adequate variation for the purpose of phylogenetic and somatic cell/radiation hybrid mapping. (Murphy et al. 2001)

What’s Happening Here? 
Two types of selection or filtering are going on when they choose genes for their study. The first selection is to remove genes not present in all the species they were studying. In other words, the genes had to be in the databases, be of sufficient size, and exist in all the species being considered. This rules out species-specific genes such as orphan genes. The second selection is that genes having the greatest number of phylogenetically informative sites, or synapomorphies, were chosen. (A synapomorphy is a variant/trait that is shared by at least two descendant taxa and thought to be inherited from a most recent common ancestor, where it evolved.) To win at this second selection, a gene should have the greatest number of variants which differ between at least two taxa, but in the same way (e.g., at site 1 two taxa have an ‘A’ while the ancestral site was a ‘G’). A simpler way of putting this is that they picked genes that varied the most between species, but in a consistent and, I will argue, likely functional way. If you look at Table 1 from Murphy et al. (2001) you can see that the ADORA3 gene has 191 phylogenetically informative sites out of 330 base pairs. That means that at 191 positions it differs between at least two of the comparison taxa in a consistent way. A small sketch of both filters follows; then, to help better explain why these filters stack the deck from a design perspective, I will give an analogy.
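This is a minimal sketch with invented example data; the length and identity thresholds are the ones quoted above from Murphy et al. (2001), and the informative-site rule is the standard parsimony-informative definition:

```python
from collections import Counter

def passes_murphy_filter(exon_len_bp, mouse_human_identity, present_in_all):
    # Filter 1 (thresholds quoted from Murphy et al. 2001): the gene must be
    # in the databases for every taxon, long enough, and in the right
    # variability band, which excludes orphan (species-specific) genes.
    return present_in_all and exon_len_bp > 200 and 0.80 <= mouse_human_identity <= 0.95

def informative_sites(alignment):
    # Filter 2: count parsimony-informative sites. A site counts when at
    # least two different states each occur in at least two taxa, i.e.,
    # a variant shared by two or more taxa "in the same way."
    count = 0
    for column in zip(*alignment):
        states = Counter(column)
        if sum(1 for n in states.values() if n >= 2) >= 2:
            count += 1
    return count

# Invented 4-taxon toy alignment: the first two sites vary consistently
# (two taxa share 'A'/'C', two share 'G'/'T'); the last two sites do not vary.
toy = ["ACGA", "ACGA", "GTGA", "GTGA"]
print(informative_sites(toy))  # -> 2; genes are then ranked by this score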

Let’s do a thought experiment where we want to create a distance tree demonstrating the evolution of household chairs. We choose five traits to compare. Those five traits must be exhibited by all the chairs, and they must vary between at least two chairs. This type of selection effectively eliminates unique properties, such as leather upholstery, because not all chairs have them. Instead it prioritizes specific parts of chairs that were made intentionally different for functional or economic reasons. This type of selection on a designed object, such as a chair, will create an intuitive hierarchical tree even though the chairs are not actually related. How? Read on.

For example, screws, which hold chairs together, are likely to be a trait shared by all chairs (first criterion passed), but screws aren’t likely to vary a lot between chairs and therefore wouldn’t be selected as a trait. On the other hand, legs are a trait shared by all chairs, and these are likely to vary quite a bit between different chairs based on the function of the chair. Children’s chairs will have shorter legs. Decorative chairs will have aesthetically pleasing legs. Folding chairs will have collapsible legs. Some chairs may have four legs while others have five or even more. Seats are another example of a trait that will be common to all but differ a lot. Children’s seats will be smaller. Decorative chairs will have aesthetically pleasing seats. Collapsible chairs will have mobile seats. Thus, a selection for “differentness” among designed objects enriches for traits that cluster due to functional constraints or compatibility, not ancestry. If organisms are in fact designed, a very similar phenomenon could be occurring in these phylogenetic comparisons.
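That claim is easy to check with a small sketch. The chairs and trait scores below are invented, and scipy’s average-linkage clustering stands in for a real distance-tree method:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

# Invented trait matrix: traits shared by every chair but varying among them
# (leg length cm, number of legs, seat width cm, collapsible? 0/1)
chairs = {
    "child_1":   [30, 4, 25, 0],
    "child_2":   [32, 4, 27, 0],
    "office_1":  [45, 5, 48, 0],
    "office_2":  [47, 5, 50, 0],
    "folding_1": [44, 4, 40, 1],
    "folding_2": [43, 4, 42, 1],
}

X = np.array(list(chairs.values()), dtype=float)
X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize so no trait dominates

Z = linkage(X, method="average")          # UPGMA-style distance tree
dendrogram(Z, labels=list(chairs))
plt.show()
```

The resulting dendrogram groups the chairs neatly by function, children’s with children’s and folding with folding, although no chair descended from any other.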

Now let’s look at how this data set was used by Baum et al. (2016) in their separate ancestry model and what about the model is so problematic from an ID perspective.

They Used Synapomorphy Shuffling to Test Separate Ancestry
In describing the separate ancestry model Baum et al. (2016) says:

A key feature of the species SA [separate ancestry] model is that for each character [meaning genetic variants or fossil characters] the state drawn by each species is independent of that drawn by other species.

But what do they mean by saying that the state drawn by each species is “independent” of that drawn by another species? How are they actually creating their separate ancestry model? What I’ve gathered is that, essentially, in their “separate ancestry” model the traits or synapomorphies are shuffled randomly to create a hypothetical picture of what they think separate ancestry would look like. I will illustrate with an example in Figure 1, adapted from the molecular Perelman dataset: I took actual names of genes used by Baum et al. (2016), but represented the different synapomorphies of those genes as spelling changes in a word describing each gene’s function.

[Figure 1. The genes ABCA1, BNDF, AFF2, APP, and ATXN7 represented as function words, with synapomorphies shown as shared spelling changes. (A) The observed, tree-like pattern; (B) the same data after random shuffling of synapomorphies across organisms.]
To elaborate, in Figure 1 above are the genes ABCA1, BNDF, AFF2, APP, and ATXN7. I have represented their DNA sequences simply as lowercase words (transport, memory, splicing, migration, and cytoskeleton, respectively) corresponding to their major functions. Then, to represent the synapomorphies between these organisms, I introduced some spelling errors. For example, in Figure 1A the ABCA1 gene in organism 1 has the sequence transwort, while BNDF is menory, AFF2 is splycing, APP is megration, and ATXN7 is cytosqeleton. The pattern of changes within the genes (columns) is the same for all five genes in Figure 1A — notice how, top to bottom, the phylogenetic trees are the same. Thus, Figure 1A represents the observed data with one important caveat — I have artificially made the pattern of synapomorphies perfect (CI = 1) just for clarity (not the case in the real data).

Now, Baum et al. (2016) constructed their separate ancestry model by permuting (shuffling) the synapomorphies of these sequences (Figure 1B) in a random manner, which assumes there would be no reason to find correlations of traits across different organisms. Here’s how they described their methods:

To evaluate whether the observed hierarchical signal is more than expected under species of family SA, we used the PTP test which uses a Monte Carlo approach to simulate data under the SA hypothesis. We implemented PTP tests using the permute function of PAUP* ver. 4.0a134-146 with parsimony as optimality criterion and hence tree length as a measure of tree-like structure.

In other words, Baum et al. (2016) gave the synapomorphy of the ABCA1 gene sequence of organism 2 to organism 4 and vice versa using a permute function (see Figure 1B). They did not just swap the whole genome sequence between two organisms but swapped individual characters — in this case base pairs — to remove the connection between them (notice how top to bottom the colors are scrambled). As expected, after random shuffling of the synapomorphies the tree length drastically increased — meaning more evolutionary events were required to explain the data, and the tree was not very parsimonious (see Table 1 from Baum et al. (2016)).
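Here is a minimal sketch of that permutation step, using an invented binary character matrix and a fixed, known topology. (One simplification to note: the real PTP test, as run in PAUP*, searches for the most parsimonious tree anew on every permuted dataset rather than scoring a fixed tree.)

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented matrix: 6 taxa (rows) x 8 binary characters (columns), built so
# that the characters mark nested groups, i.e., a clean hierarchical signal.
matrix = np.array([
    [1, 1, 1, 0, 0, 0, 1, 0],
    [1, 1, 1, 0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1, 1, 0, 1],
    [0, 0, 0, 0, 1, 1, 0, 0],
    [0, 0, 0, 0, 1, 0, 0, 0],
])

tree = ((((0, 1), 2), 3), (4, 5))  # fixed topology matching that signal

def fitch_length(states):
    """Minimum state changes for one character on the fixed tree (Fitch parsimony)."""
    changes = 0
    def down(node):
        nonlocal changes
        if isinstance(node, int):            # leaf: row index of a taxon
            return {states[node]}
        left, right = down(node[0]), down(node[1])
        if left & right:
            return left & right
        changes += 1                         # children disagree: one change
        return left | right
    down(tree)
    return changes

def tree_length(m):
    return sum(fitch_length(m[:, j]) for j in range(m.shape[1]))

def permute_characters(m):
    out = m.copy()
    for j in range(out.shape[1]):
        rng.shuffle(out[:, j])               # reassign each character's states across taxa
    return out

observed = tree_length(matrix)
null = np.array([tree_length(permute_characters(matrix)) for _ in range(2000)])
print(observed, null.mean())                 # shuffling inflates tree length
```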

Following this they calculated the p-values (see Table 3 from Baum et al. (2016)). The p-values are outrageously low, dictating a strong rejection of the separate ancestry model tested. Erika interprets this here as indicating that at least this model of separate ancestry is a totally unreasonable hypothesis. Their method is given below:

For many of the tests the observed test statistic fell well outside the range of values obtained under the SA hypothesis. In such cases, we report the distance of the observed data from the mean of the SA distribution in units of the SD (the z-score) and also provide a P-value assuming a normal distribution. Although the latter is only an approximation, it will provide the reader with a sense of how improbable the data would be under SA. 
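Continuing the sketch above, their z-score and normal-approximation p-value take two lines, with scipy’s norm.cdf standing in for “assuming a normal distribution”:

```python
from scipy.stats import norm

z = (observed - null.mean()) / null.std()
p = norm.cdf(z)   # one-sided: the observed tree is far shorter (more tree-like)
                  # than anything the shuffle produces, so z is strongly
                  # negative and p is effectively zero
print(z, p)
```

Because the real data are overwhelmingly more tree-like than their shuffled null, the p-values land at the outrageously low values reported in their Table 3.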

Of course, I reject this model of “separate ancestry” as well. This model, as they’ve developed it, is totally unrealistic for all kinds of reasons and therefore far less likely than the model favored by the observed data. Anytime one obtains p-values like these in biology, it should definitely make one question, if not immediately reject, the model being tested. Erika partially recognizes this, because in reference to these zeros she also says, “That’s insane! You don’t see that in regular science.” Basically, p-values this low typically indicate that there is something really wrong with one’s model. So, let’s talk about this, and about what might be wrong with their model of separate ancestry.

Synapomorphy Shuffling is Not a Good Test of Separate Ancestry
The short(er) explanation for why synapomorphy shuffling is not a good model of separate ancestry is that, in designed systems, synapomorphies or traits may cluster for functional reasons like optimization, constraints, or compatibility. Recall that a synapomorphy is a trait or site uniquely shared by members of a group that helps to define that group. Under typical phylogenetic thinking, synapomorphies exist because they evolved in the common ancestor that gave rise to the group. But in an ID-based world, synapomorphies might exist because they represent a suite of traits required for a group of organisms to perform some important function related to their survival.

In designed systems, traits don’t vary randomly; they often vary according to predictable patterns related to functional needs. In biology, these functional needs could involve an organism’s niche, lifestyle, locomotion, metabolism, diet, or other behaviors. In other words, organisms that live in similar niches or share similar lifestyles, modes of locomotion, metabolisms, diets, or behaviors may tend to have similar traits, all tied to the functional constraints required for that organism to survive in its environment. Thus, in a designed biosphere, traits won’t vary randomly but will follow similar patterns, correlations, and relationships across organisms according to their various survival needs. To put it simply, organisms with similar lifestyles will show similar architecture. This will be true not because of common ancestry but because of design constraints which must be fulfilled for an organism to survive in its environment.

Now, the Long(er) Answer… 
ID proponents have a problem with this model of separate ancestry because it does not account for anticipated taxa-specific design constraints (what I am calling “functional synapomorphies”). Most ID proponents would hold that only synapomorphies that are historical in nature could be shuffled in such a fashion. As a thought experiment, if one selected synapomorphic chair traits, a very nice nested hierarchical pattern would result. Collapsible chairs will cluster, desk chairs will cluster, armchairs will cluster, children’s chairs will cluster, and the universal common ancestor might be something like a stool. Thus, if a synapomorphy is functional (i.e., contributes to the function) and not historical, randomly shuffling synapomorphies is analogous to taking chair-specific design differences (like a collapsible seat, or short legs for a children’s chair), mixing them up, and then observing that collapsible chairs and children’s chairs no longer group together. When you shuffle the functional traits of designed objects, you will get statistical zeros, because you have obliterated the design signal. Most likely you’d also get some quite weird designs that don’t work very well! Imagine outdoor patio furniture with traits of indoor office chairs. It wouldn’t work!

Given how the data were selected in the first place, it is very likely that many of these synapomorphies are functional.

The reason functional synapomorphies cannot be used is that hierarchical clustering of functional synapomorphies or traits is abundant in scenarios that we know did not arise by a process of descent with modification. Don’t like my chair analogy? Take the distance tree created by Doolittle and Bapteste (2007) of French departments, based on the number of shared surnames (see Figure 1b in the paper). This is a great example of how functional synapomorphies or traits can result in logical clustering of data when no descent-with-modification process has occurred. The error of Baum et al. (2016) is therefore as follows: they assume that design must produce random distributions of traits. However, all of our experience with sets of designed systems shows this is not the case. Erika doesn’t appreciate this point, and thus she misunderstands our critique of the Baum et al. paper.

What we know about design, from engineering and other life scenarios, is that design often creates a hierarchical similarity pattern centered around function that could look like ancestry if one forces it. Why do designers produce these hierarchical patterns? They aren’t trying to be deceptive, mimicking systems that look like they are the product of common ancestry. Rather, designers are simply applying logical design considerations like optimization, constraints, compatibility, dependencies, or reuse during the design process.

Thus, I hold that the model of separate ancestry rejected in the Baum et al. (2016) paper is not endorsed by most in the ID community because it does not account for the design expectation that functional synapomorphies or traits will cluster due to optimization, constraints, and a need for compatibility.

On Monday, I will look at the consistency of the phylogenetically informative sites for the Baum et al. (2016) paper. Spoiler alert: It looks like design.


Friday, 5 August 2022

Yet more on the struggle for the "empire of God"

[Embedded video: “Battle of Nicopolis, 1396 (ALL PARTS) ⚔️ Christians strike back against the Ottomans ⚔️ DOCUMENTARY” — https://www.youtube.com/embed/Hw8m1WrQ0l0]

Primeval tech of the "lowly" sponge

 Complex Specified Information in the Lowly Sponge

David Coppedge


Sponges are outliers in biology’s big bang, the Cambrian explosion. Their embryos appear in Precambrian strata, leading some to consider them primitive. That’s an illusion. New studies of how they construct their skeletons with silica “spicules” have revealed design principles remarkable enough to inspire biomimicry. 


The punch line first — here’s how a news item from Current Biology concludes:


“This work not only sheds new light on skeleton formation of animals, but also might inspire interdisciplinary studies in fields such as theoretical biology, bioengineering, robotics, and architectural engineering, utilizing mechanisms of self-constructing architectures that self-adjust to their environments, including remote environments such as the deep sea or space,” the researchers write. [Emphasis added.]


Goodness! What are these simple animals doing to arouse such commotion? Just watch the video clip in the article of sponge cells at work. Then, look at the Graphical Abstract in the source paper and see the steps diagrammed in well-organized stages: (1) spicules are manufactured in specialized cells, then transported to the construction site; (2) the silica spicules pierce the epithelial tissue; (3) they are then raised up into position; (4) the bases are cemented by collagen provided by basal epithelial cells.


This simple animal knows, in short, how to build a house with pole-and-beam architecture in a way that self-adjusts to its environment.


That’s Pretty Impressive

Sponge skeletons, with their unique spicules, have been studied for a long time, but the manner of construction has been a mystery till now. What’s new, according to the Japanese researchers, is the identification of specialized “transport cells” that carry and finally push the spicules through the epithelia, and cementer cells that fasten them in place like poles. The process reveals division of labor and an overall plan.


Here we report a newly discovered mode of skeleton formation: assembly of sponges’ mineralized skeletal elements (spicules) in locations distant from where they were produced. Although it was known that internal skeletons of sponges consist of spicules assembled into large pole-and-beam structures with a variety of morphologies, the spicule assembly process (i.e., how spicules become held up and connected basically in staggered tandem) and what types of cells act in this process remained unexplored. Here we found that mature spicules are dynamically transported from where they were produced and then pierce through outer epithelia, and their basal ends become fixed to substrate or connected with such fixed spicules. Newly discovered “transport cells” mediate spicule movement and the “pierce” step, and collagen-secreting basal-epithelial cells fix spicules to the substratum, suggesting that the processes of spiculous skeleton construction are mediated separately by specialized cells. Division of labor by manufacturer, transporter, and cementer cells, and iteration of the sequential mechanical reactions of “transport,” “pierce,” “raise up,” and “cementation,” allows construction of the spiculous skeleton spicule by spicule as a self-organized biological structure, with the great plasticity in size and shape required for indeterminate growth, and generating the great morphological diversity of individual sponges.


Inspiring Architects

This method of skeleton construction differs greatly from that of arthropods and vertebrates. It doesn’t appear to follow a set of rules or a preordained pattern, but it is very effective for sponges, “whose growth is plastic (i.e. largely depends on their microenvironment) and indeterminate, with great morphological variations among individuals.” Nevertheless, design and coordination are evident in the division of labor, the specialization of cells, and an end result good enough to inspire architects. If it were all so simple, the authors would not have been left with so many unanswered questions:


Many precise cellular and molecular mechanisms still remain to be elucidated, such as how transport cells can carry spicules, or how one end of pierced spicules is raised up. Additionally, one of the further questions that need to be answered is how sponges fine-tune their skeleton construction according to conditions of their microenvironment, such as water flow or stiffness of the substratum, since it is reported that the growth form of marine sponges changes according to the water movement of their environment.


Design is also evident in the self-organizational principles encoded in sponge DNA that make these results successful. Human intelligent designers would like to benefit from this knowledge. The authors conclude, repeating the “punch line”:


Intriguingly, our study revealed that the spiculous skeleton of sponges is a self-organized biological structure constructed by collective behaviors of individual cells. A chain of simple and mechanical reactions, “transport-pierce (by transport cells)-raise up (by yet unknown cells and/or mechanisms)-cementation (using collagenous matrix secreted by basopinacocytes and possibly by spicule-coating cells),” adds a spicule to the skeleton, and as a result of the iteration of these sequential behaviors of cells, the spiculous skeleton expands. As far as we know, this is the first report of collective behaviors of individual cells building a self-organized biological structure using non-cellular materials, like the collective behaviors of individual termites building mounds. Thus, our work not only sheds new light on skeleton formation in animals but also might inspire interdisciplinary studies in fields such as theoretical biology, bioengineering, robotics, and architectural engineering, utilizing mechanisms of self-constructing architectures that self-adjust to their environments, including remote environments such as the deep sea or space.


The reference to termite mounds is apt. The journal Science has described how these mounds, built by hundreds of individual termites, are able to “breathe” like an “external lung”:


Here’s how it works: Inside the hill is a large central chimney connected to a system of conduits located in the mound’s thin, flutelike buttresses. During the day, the air in the thin buttresses warms more quickly than the air in the insulated chimney. As a result, the warm air rises, whereas the cooler, chimney air sinks — creating a closed convection cell that drives circulation, not external pressure from wind as had been hypothesized. At night, however, the ventilation system reverses, as the air in the buttresses cools quickly, falling to a temperature below that of the central chimney. The reversal in air flow, in turn, expels the carbon dioxide-rich air — a result of the termites’ metabolism — that builds up in the subterranean nest over the course of the day, the researchers report online this week in the Proceedings of the National Academy of Sciences.


We know that some caves “breathe” as the temperature changes, but this is different. Termites construct their mounds for a purpose: to control the temperature and remove carbon dioxide for their health. It’s a bit like active transport in cells that draws in what the cell needs and removes what it doesn’t need, using machines that work against natural concentration gradients.


Intelligent Self-Organization

We all know that some beautiful things can self-organize without programming (snowflakes are a prime example). What we see here, though, are systems working from genetic programs for a purpose. In the case of sponges, the animal’s specialized cells cooperate in a plan to build a skeleton that adapts to the environment. In the case of termites, each individual insect’s genetic program makes it behave in a cooperative enterprise to build an air-conditioned mound. Such things do not arise by unguided natural forces.


If functional self-organization were simple, why did five European countries take years “working to design the European Union’s first autonomously deployed space and terrestrial habitat”? The effort, called the “Self-deployable Habitat for Extreme Environments” (SHEE) project, has a goal of programming elements for “autonomous construction” of housing for astronauts on Mars or other hostile locales. It took years of work in design, prototyping, construction, and optimization to get these buildings to “self-deploy” with no humans in the loop.


So when a sponge can do it, we should see intelligent design behind the scenes — not the sponge’s intelligence, which admittedly is minuscule, but intelligence as a cause for the genetic information that allows the sponge to run a program that leads to a functional result. 


Those of us who appreciate the spectacular genetic programs that built the Cambrian animals should take note of the level of complex specified information in the lowly sponge. We can also notice that the sponge’s mode of construction bears no evolutionary ancestral relationship with the diverse, complex body plans that exploded into existence in the Cambrian strata. Sponges did well. They’re still with us. 


This article was originally published in 2015.


Emperor Darwin's new clothes?

 Darwin, Group Think, and Confirmation Bias

Neil Thomas


Yesterday I wrote about Charles Darwin and the British secularist tradition. The latter is the subject of a great volume of material discovered by Timothy Larsen, author of Crisis of Doubt: Honest Faith in Nineteenth-Century England. Larsen’s book provides a clue (as I read matters) that might be used to resolve what I, in the company of many others, find to be one of the great cruxes of 19th-century history: how Darwinian theory, despite its lack of empirical support or even semblance of verisimilitude, was able to advance to its present position of orthodoxy. Since I suspect that the answer to this conundrum is most likely to be found in the area of group psychology, I would beg leave to make a small detour here to consider the issue of what we now term “confirmation bias” as that theme was treated by one of history’s most perceptive analysts of human nature, William Shakespeare.


Those acquainted with his drama Othello will know that Shakespeare’s psychological insight is nowhere more apparent than in his depiction of the negative dynamic between the eponymous hero and his villainous lieutenant, Iago. Although the exact motivation(s) behind Iago’s resentment of his military superior still remain a matter of critical debate, the reason for Othello’s unfounded jealousy of Michael Cassio for supposedly having committed adultery with Othello’s wife, Desdemona, is all too clear. Much in the play is made to hinge on the notorious prop of Desdemona’s handkerchief which Iago had contrived to misappropriate and plant in Cassio’s rooms and which Othello is duped into taking as “ocular proof” of Desdemona’s adultery. Iago’s malign stratagem works perfectly. As he had predicted, his chosen mark proved to be “as easily led by the nose as asses are.” Othello, witnessing the handkerchief, tragically succumbs to his own paranoid insecurities, and jumps to the wholly erroneous conclusion that his wife must be an adulteress with the colorful ladies’ man Cassio. Iago, in what is surely a particularly spot-on description of such confirmation bias, soliloquizes in an aside audible only to the audience,


I will in Cassio’s lodging lose this napkin


And let him [Othello] find it. Trifles light as air 


Are to the jealous confirmations strong


As proofs of holy writ. 


OTHELLO, ACT 3, SCENE 3, LINES 373-76

“Trifles light as air / Are to the jealous confirmations strong / As proofs of holy writ”: in other words, once some particular thought, however poorly substantiated, has for whatever reason become lodged in our minds, it tends to develop into a Freudian idée fixe, and all our future perceptions are somehow made to be congruent with that original idea. It is precisely for that reason that, in modern jurisprudence, any information likely to invite confirmation bias must be withheld from a jury, to prevent it from jumping to conclusions.


Othello’s sexual insecurities were to have, mutatis mutandis, something of a 19th-century correlative in the insecurities and loss of nerve in matters of faith which were developing in a significant number of Victoria’s subjects in the first half of that century. All this was of course well before publication of the Origin, so that Darwin’s magnum opus will have served only to confirm and strengthen their mood of skepticism in a way comparable to that in which Desdemona’s handkerchief served to convince her husband (wrongly) of her infidelity. The empirically demonstrable truth-value of the Origin might have been negligible (in Shakespearean terms, a “trifle light as air”) but that mattered not a jot to persons already primed by their prior ideological formation to accept Darwin’s argument as a form of secular gospel. The Origin will have come together with their prior misgivings to create a “resultant of forces” precipitating an even greater degree of secularist thinking in many who had in any case all but bidden adieu to the religion of their youth. There is, however, some firm historical evidence that some of the more self-critical secularists were to experience a light-bulb moment in later life which prompted them to reassess their previous stance.


Reconversions

The secularists chosen for study by Larsen all eventually returned either to the faith they had initially rejected or to some other form of spiritual orientation. Typically, they would find over time that secularism offered no positive program for people to live by. Gordon came to refer to secularism as “just what you like-ism” which he took to be a recipe for immoral self-indulgence. William Hone came to realize that materialism could not account for the totality of human experience — there must be a power behind matter. There was a general feeling amongst the reconverts that their erstwhile skepticism might have been the result of “a procrustean system of logic, an oppressively narrow definition of reason. They came to believe that human beings knew more than could be proven by such a method.”1 Notably, the reconverts were also prompted by dint of lived experience and maturer reflection to revisit what seems to have been a very basic unexamined assumption amongst their number with the result that they now at long last “reassessed their assumption that the cause of radical politics and the working classes naturally led to an opposition to Christianity.”2


Not all returned to orthodox Christian forms of worship. Some “were led away from materialism by reengaging with the realm of spirit in a form decoupled from Christianity.”3 In this they made common cause with other, more famous secularists of the age such as Annie Besant, Charles Bradlaugh’s close ally, who went as far as crossing the floor from secularism to Madame Blavatsky’s theosophy. One may also think of Sir Arthur Conan Doyle whose hero, Sherlock Holmes, was the very apotheosis of dry secularism but whose author, although he had lost his faith in earlier years, was eventually to turn to spiritualism. Or in more recent memory there is the example of that later scion of the Huxley dynasty, Aldous, who turned in later years from a form of positivist philosophy to embrace Eastern mysticism and the so-called perennial philosophy.4


In a more minor key we can trace a comparable development in the rise in popularity of the English ghost story in the second half of the 19th century. This may in good part be understood as an imaginative protest against the growing desacralization of the world brought about by the burgeoning age of science. As Julia Briggs pointed out, the ghost story was in a superficial sense designed to scare readers but at a profounder level it supplied them with the comfort of a deeper spiritual reassurance:


For it [the ghost story] seemed at the outset to invite the reader’s modern cynicism, only to vanquish it with a reassertion of older and more spiritual values. Even amongst its superficial terrors it might provide subtle reassurances.5


Larsen, dissociating himself from the kind of “God’s funeral” historiography practiced by such writers as Basil Willey, A. O. J. Cockshut, and A. N. Wilson,6 interprets such reversions as symbolizing a victory of the spiritual over the exclusively material worldview and goes so far as to claim that the reconverts “serve to orientate us toward the intellectual strength of the Christian tradition in nineteenth century tradition.”7 Whether such a large historical revision is warranted by the statistically limited sample of persons he adduces in his book may be open to question. What is not in question, however, is the service he has rendered in going beyond the top-down historiography practiced by many other historians of ideas. Instead, he has revealed a largely unsuspected but quite sizeable demographic of self-educated people who, although they were far distant from the major levers and megaphones of power, exerted a considerable influence in shaping ordinary people’s attitudes to fundamental existential issues.


This finding is particularly significant since it throws light on the major historical crux mentioned above relating to how Darwin was able to “palm off” an empirically unattested theory on so many of his countrymen and women. We already know from Ellegård’s classic study of British press reactions to Darwin in the latter half of the 19th century that some sections of society simply resisted and disbelieved Darwin.8 Larsen’s researches, on the other hand, indicate that Darwin had a more forgiving and considerably less critical constituency of honest doubters and militant secularists to rely on. For that group Darwin’s work came to confirm what they had either already been persuaded of or else begun to figure out for themselves on other grounds. They were willing to give his theory a pass because it suggested an atheistic conclusion they had already arrived at by an alternative route. It was not Darwinism that they endorsed so much as the ideological direction in which his Origin of Species was thought to point, and so, like Thomas Huxley, they were more than willing to give Darwin their enthusiastic support.


Packaging

One final, seemingly superficial but in practice rather significant point to be made in the matter of Darwin’s appeal to the secularist demographic is that his Origin will have made agreeable and accessible reading for this group of politically but not biologically informed individuals.9 It is attractively presented in a volume free of scientific jargon, and has often been lauded as the last specialist work fully intelligible to the man or woman in the street. It also comes laced with just the right amount of gentlemanly hesitancy to endear it to a British audience which might have been deterred by a showier or overly “intellectual” mode of presentation. Surely few other writers would have been minded to flag up their reservations about their own material in the same manner as Darwin who remarkably devoted a small chapter to “Difficulties on (sic) the Theory.”


By implicitly disclaiming airs of omniscience, Darwin avoided the vice most disliked by English readers: that of trying to appear “too clever.”10 As one 20th-century intellectual historian observed apropos of this strange national quirk, “People were as delighted with Darwin’s apparent lack of cleverness in youth as they were in 1940 with Mr. Churchill’s inability to learn Latin verbs at Harrow.”11 In reality, as we know, Darwin’s claims to discovery were every bit as trenchant as the literary detections shortly to be achieved by the abrasive fictional figure of Sherlock Holmes. However, to continue the Conan Doyle analogy, Darwin’s method of “packaging” those claims seemed more like an adumbration of the self-effacing manner later to be adopted by Holmes’s fictional foil, Dr. Watson. His readers will doubtless have concluded that the author of the Origin must surely be a regular sort of guy — “one of us” deep down despite his being, from their perspective, a toff. 


References

1. Larsen, Crisis of Doubt, p. 242.

2. Larsen, Crisis of Doubt, p. 243.

3. Larsen, Crisis of Doubt, p. 242.

4. Aldous Huxley, The Perennial Philosophy [1944] (New York: HarperCollins, 2009) contains a valuable later essay written by Huxley in the 1950s (Appendix, pp. 6-22) with an account of a spiritual odyssey not dissimilar to that of some of the “reconverts” considered above.

5. Julia Briggs, Night Visitors: The Rise and Fall of the English Ghost Story (London: Faber and Faber, 1977), p. 17.

6. Basil Willey, More Nineteenth-Century Studies: A Group of Honest Doubters (London: Chatto and Windus, 1963); A. O. J. Cockshut, The Unbelievers: English Agnostic Thought 1840-1890 (London: Collins, 1964); A. N. Wilson, God’s Funeral (London: John Murray, 1999).

7. Larsen, Crisis of Doubt, p. 253.

8. Alvar Ellegård, Darwin and the General Reader: The Reception of Darwin’s Theory of Evolution in the British Periodical Press 1859-72 [1958] (repr. Chicago: Chicago UP, 1990).

9. People in all ages have recognized how important it is for a writer to win the good will of his or her audience. Medieval rhetoricians had a ready formulation of this PR tactic in the term captatio benevolentiae (getting one’s readers on side).

10. Darwin’s diffidence was real enough, as is evidenced in the no fewer than five emended editions of the Origin which followed in quick succession in the decade after the first edition of 1859, in which he was able to interpolate his responses to critical objections to his work.

11. A. O. J. Cockshut, The Unbelievers, p. 176.


Fine-tuning: Multiple dice or a singular intellect?

 Generic Intelligent Design, the Multiverse, or One God?


On a new episode of ID the Future, Stephen Meyer takes a close look at the case not only for intelligent design, but also for a designer of the cosmos who is immaterial, eternal, transcendent, and involved. Meyer draws on evidence for design at the origin of life, in the origin of plants and animals, and from the fine-tuning of the laws and constants of chemistry and the initial conditions of the universe. He connects all this to the scientific evidence that the universe is not eternal but had a beginning — the Big Bang.

What about the main materialistic alternative for explaining this suite of evidence — the idea that there is a multiverse, with ours just being one of the lucky universes with the right conditions to allow for advanced life? In step-by-step fashion, Meyer examines the multiverse theory and why it fails to explain away the insistent evidence of a cosmic designer. Download the podcast or listen to it here.

Tuesday, 19 July 2022

Chance: Darwin’s God of the gaps?

The Art of Concealment: Darwin and Chance

Neil Thomas
 

For the first decades of Victoria’s reign, any scientific theory dependent on the postulation of chance would by definition have condemned itself as an irredeemable contradiction in terms. The common opinion of the leading men of science in the first half of the 19th century tended towards an unobtrusive form of deism, for educated opinion by the 1830s had become comfortable with a remote God acting indirectly in nature through designed laws. Against the backdrop of that comfortable consensus, Darwin’s unheralded announcement of a process of “natural selection” working on random variations/mutations to create the whole panoply of terrestrial life must have come as a very counterintuitive claim indeed. Victorians certainly found it difficult to envisage the supremely intricate organic order having emerged from a process so heavily dependent on chance, for they will have noted in Darwin’s exposition that natural selection must always depend for its operations on prior, chance variations having already occurred.

Darwin’s Desperate Idea

The idea that purely random variations lay at the root of a process that subsequently gave rise to design (either real or, as is habitually alleged, “apparent”) was so sharply opposed to mainstream scientific thinking that it is unsurprising that eminent figures such as William Whewell and Sir John Herschel immediately rejected the idea of chance playing any causative role. Darwin therefore knew that he would have an uphill battle to convince people of the key role chance played in his theory, a fear amply confirmed by reviews of the first edition of his Origin of Species in late 1859. His British nemesis, St. George Mivart, and many others now proceeded to criticize Darwin’s dependence on what Mivart termed “mere fortuity.”1

How then could Darwin get an idea offensive to accepted scientific tenets under the wire and into the safe space of public acceptance, or at least acquiescence? Desperate, or perhaps more accurately, cunning measures seemed to be called for, as Curtis Johnson makes clear in an exceptionally close look at Darwin’s private notebooks and letters on the subject of chance. These writings reveal that Darwin — once bitten, twice shy, so to speak — now became increasingly concerned to “massage” his material rather than lay it out in a neutral and disinterested way in all subsequent editions of the Origin (of which there were five).2 Collectively, Darwin’s modifications to the way he presented his material were in effect to become part of an activist campaign to promote his ideas.

A Cunning Plan

Mivart’s criticism had served to forewarn and forearm Darwin. Thus from 1860 onwards he “adapted a variety of rhetorical strategies that added up to a deliberate campaign to retain chance as a central element while making it appear to most readers that he did not.”3 In other words, Darwin became steadily convinced of the necessity of insinuating his dangerous idea into the consciousness of his peers by whatever means of verbal dexterity he was capable of devising. In short, he felt he must smuggle his idea into Victorian England by somehow contriving to bypass his peers’ critical antennae in a subtle (and arguably somewhat unscrupulous) campaign of trompe-l’oeil.

The Darwin who had once termed himself a “master wriggler” (verbally) would now double down on those of his expository arts which a recent biographer, A. N. Wilson, rightly termed “slithery.”4 Accordingly, from this point forward he strove to downplay the idea of chance for readers of the later editions of the Origin, and by the fifth edition references to chance or accident had almost disappeared, even though they were integral to his theory. To cover his tracks he now introduced the deliberately vague euphemisms of “spontaneous variation” and “laws of growth” (although one may doubt that the ploy could have been effective with more discerning readers capable of seeing through such semantic legerdemain).5 In this way he hoped that the criticism of his theory’s reliance on “chance” might go away with the expurgation of the word, a hope that his then comrade-in-arms Alfred Russel Wallace appeared to share when he advised Darwin to delete the word “accident” and replace it with some such bland circumlocution as “variations of every kind are always occurring in every part of every species.”6 That these careful locutions were to become part and parcel of a studied policy of obfuscation is confirmed by further reference to his notebooks, where Johnson records how Darwin would, for instance, confess to glossing over his true views on religion for the larger public and abstain from using the term “materialism” with approbation — even whilst privately admitting that this term described his own beliefs most accurately.7

Giving the Game Away 

Darwin’s least successful ploy to get his readers on board was the habit, emerging in his correspondence and later finding expression in The Variation of Animals and Plants under Domestication (1868), of glossing natural selection with the metaphor of an architect, in the mistaken belief that this would make natural selection clearer to his readers. However, as Sir Charles Lyell warned him, the architect metaphor worked at cross-purposes with his messaging intentions, since an architect is manifestly intelligent, in contradistinction to natural selection. To make matters worse, the image had been employed for centuries (in such locutions as a “cosmic architect”) to refer to the very deity Darwin wished to exclude.8

What is telling about the unfortunate choice of the architect image is that Darwin’s metaphors are sometimes more eloquent of what their author was really thinking than his formal statements, an issue I have discussed before (here and here). The ostensible take-away message from his writings foregrounded the “dangerous idea” of the purely chance origin and evolution of life on this planet. On that reading, God had been shown the door as being superfluous to proceedings said to be unfolding autonomously. Yet Johnson observes that in notes not for public consumption, Darwin asked himself, “Do these views make me an atheist?”, whereupon he responds with a vehement “NO”! In later notes he describes himself variously as theistic or agnostic (he was ever a Hamlet figure!). Both terms, though, “preserve the possibility, even the likelihood, of a Creator who designed a world in the beginning that would operate in definite and predictable ways.”9 This would imply that Darwin in his heart may in reality have been tempted to return to the status quo ante, that is, to the common deistic prepossessions of the scientific community in the first half of the 19th century. Apparently his still small voice was apt to whisper to him that his life’s work had been built on a foundation of false assumptions — which would account for some of his more tormented lucubrations in the decade preceding his death, especially those concerning his riven attitude to the Christian faith in which he had been reared.

Of Algorithms and Waving Marble Statues

But where Darwin was moved to harbor honest doubts about the role of chance in evolution, many of his 20th-century legatees have shown themselves remarkably free of such reservations. Here is Daniel Dennett expounding with unruffled finality what he terms his “algorithmic” ideas about natural selection:

Can the biosphere really be the outcome of nothing but a cascade of algorithmic processes feeding on chance? And if so, who designed that cascade? Nobody. It is itself the product of a blind algorithmic process.10

An equally remarkable computation of the power of chance can be found in Richard Dawkins’s The Blind Watchmaker where, in the context of assessing whether certain phenomena might be adjudged impossible or merely improbable, Dawkins seriously moots the possibility (albeit remote) of a marble statue moving its arm: 

In the case of the marble statue, molecules in solid marble are continuously jostling against one another in random directions. The jostlings of the different molecules cancel one another out, so the whole hand of the statue stays still. But if, by sheer coincidence, all the molecules just happened to move in the same direction at the same moment, the hand would move. If they then all reversed direction at the same moment the hand would move back. In this way it is possible for a marble statue to wave at us. It could happen.11

I confess that on first reading that paragraph I did not know whether to laugh out loud or question my own sanity. The latter worry was in fact only finally allayed when I came across the volume entitled Answering the New Atheism which allowed me to discover with some relief that it “was not just me.” The two authors of that volume, Scott Hahn and Benjamin Wiker, also quote the same paragraph because, as they explain, “if we merely reported it, no sane person would believe that Dawkins had written it.” They continue:

Our concern for now is whether Dawkins’ unconquerable faith in the powers of chance is rational. For Dawkins, whatever God could do, chance could do better, and that means that any event, no matter how seemingly miraculous, can be explained as good luck…. And if such impossible things are possible, why isn’t it possible that it was indeed a miraculous occurrence? Why isn’t the miraculous itself a possibility?12

Wiker and Hahn are to be commended for pointing out what the professional reviewers of Dawkins’s volume made no mention of. One can only suppose that that group’s uncritical genuflection was motivated partly by materialist confirmation bias and partly by a form of intellectual doffing of the cap to an Oxbridge grandee. Effectively it is as if the reviewers had been caught in the headlights of a car which froze their critical faculties and rendered them incapable of delivering an honest verdict on what appears to me to be the sheer illogic behind such a candidly professed faith in the purportedly limitless powers of chance. 
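For scale, here is a hedged back-of-envelope estimate (mine, not Dawkins’s): a marble hand contains on the order of $10^{25}$ molecules, and granting each one, very generously, an independent 1-in-2 chance of jostling in the required direction at the required instant gives

$$P \approx \left(\tfrac{1}{2}\right)^{10^{25}} \approx 10^{-3 \times 10^{24}},$$

a probability whose zeros alone could not be written down on all the paper ever printed. “It could happen” is thus technically true and practically meaningless.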

It would of course be possible to argue that both chance and the postulation of an intelligent designer are equally unverifiable, or “unfalsifiable” in Karl Popper’s term. Darwin is sometimes said to have merely replaced one form of unknown and unknowable with another form of the same since neither God nor chance is falsifiable. However, the contention that an appeal to an inscrutable divine source is as much an admission of ignorance as is an appeal to chance is surely open to serious question (even by the present writer who has been a secular humanist all his adult life). By which I mean that the postulation of chance as a predictable agency capable of producing the lawful regularities of the organic world contradicts our empirical observations of what is possible, whereas the creator hypothesis confirms our common experience of a result requiring a cause (ex nihilo nihil fit / “nothing can arise from nothing”). There can be no question of a probabilistic equivalence between the two options. God, like chance, may be beyond explanatory reach but, unlike chance, does not lie beyond logical reach. The inference to a theistic explanation certainly possesses more logical coherence than does the alternative.

Historical Perspectives

In order to judge whether deliberations on any given issue of human concern are sensible, it is conventional, even something of a cliché, to ask, “How would a Martian react to all this?” That is a truly imponderable question; what is not imponderable is how our ancestors in earlier epochs reacted to the same kind of debate in their own time and place, because we have tangible evidence of how they felt and thought. For instance, luck or chance, personified as a female goddess revolving her infamous wheel (rota Fortunae), was often portrayed in the iconography of the medieval world as the most fickle of the divinities of the Classical pantheon, on a par with the two-faced god Janus. Not for nothing did Geoffrey Chaucer write in his 14th-century Knight’s Tale of “Fortune and hire false wheel, / That noon estat assureth to be weel” (“Lady Luck and her untrustworthy wheel, which guarantees nobody’s good fortune”) — a quotation that has stayed with me since school days because it expresses a bitter truth we have surely all been obliged to taste, doubtless on many more occasions than we would have preferred.

It is entirely appropriate that Fortuna’s emblematic representation with her ubiquitous wheel should have gone on to become the prototype of the modern roulette wheel. But even though she was apostrophized in the Carmina Burana as Fortuna Imperatrix Mundi (“Fortune the world’s empress”) she was never described as Fortuna Creatrix Mundi (“Fortune the creator of the world”). To be asked to believe that the biological equivalent of the supremely untrustworthy Lady Luck was in good part responsible for the evolution of all organic life is a very big ask and, I fancy, an idea which our medieval predecessors (not to mention current players of the National Lottery) would likely have laughed out of court. 

Notes

  1. St. George Jackson Mivart’s On the Genesis of Species (New York: Appleton, 1871) was a studied riposte to Darwin.
  2. Curtis Johnson, Darwin’s Dice: The Idea of Chance in the Thought of Charles Darwin (Oxford: OUP, 2015).
  3. Johnson, Darwin’s Dice, p. xvii.
  4. A. N. Wilson, Charles Darwin: Victorian Mythmaker (London: John Murray, 2017).
  5. Johnson notes that Darwin’s “dangerous idea” concerning chance was in later years given full expression in his “Old and Useless Notes” — indicating, if proof were needed, that his use of circumlocution in the later editions of Origin was indeed an obfuscatory ploy. See Darwin’s Dice, p. 226.
  6. Darwin’s Dice, pp. 138 and 156, note 12.
  7. Darwin’s Dice, p. 114, note 11.
  8. See discussion of this point in Darwin’s Dice, pp. 136-143.
  9. Johnson, Darwin’s Dice, p. 227.
  10. Dennett, Darwin’s Dangerous Idea: Evolution and the Meanings of Life (London: Allen Lane, 1995), p. 59.
  11. Dawkins, The Blind Watchmaker (London: Penguin, 1986), pp. 159-160.
  12. Scott Hahn and Benjamin Wiker, Answering the New Atheism: Dismantling Dawkins’ Case against God (Ohio: Emmaus Road, 2008), pp. 11-13.


Monday, 18 July 2022

Big brained/bird brained: Same difference.

 Brain Size Doesn’t Determine Intelligence

Evolution News @DiscoveryCSC


As biologist John Timmer notes at Ars Technica, some life forms appear much more intelligent than others despite having brains of roughly the same size:


Animals with very different brains from ours — a species of octopus and various birds — engage with tools, to give just one example. It seems intuitive that a brain needs a certain level of size and sophistication to enable intelligence. But figuring out why some species seem to have intelligence while closely related ones don’t has proven difficult — so difficult that we don’t really understand it.


JOHN TIMMER, “BRAIN SIZE VS. BODY SIZE AND THE ROOTS OF INTELLIGENCE” AT ARS TECHNICA (JULY 12, 2022)

As he points out, some things we might expect to be true — puzzlingly — aren’t:


One of the simplest ideas has been that size is everything: have a big enough brain, and you at least have the potential to be smart. But lots of birds seem to be quite intelligent despite small brains—possibly because they cram more neurons into a given volume than other species. Some researchers favor the idea that intelligence comes out of having a large brain relative to your body size, but the evidence there is a bit mixed.


JOHN TIMMER, “BRAIN SIZE VS. BODY SIZE AND THE ROOTS OF INTELLIGENCE” AT ARS TECHNICA (JULY 12, 2022)

Not only that, but lemurs, whose brains are 1/200th the size of chimpanzees’, have passed the same primate intelligence tests. And some life forms behave in an apparently intelligent way with no brain at all.


Ready to Retire

Seven years ago, London School of Economics psychology professor Nicholas Humphrey was asked at Edge, “What scientific idea is ready for retirement?” His answer: “the bigger an animal’s brain, the greater its intelligence.” He elaborated, admitting he had been wrong about this in the past:


In particular, you’ll find the idea repeated in every modern textbook that the brain size of different primate species is causally related to their social intelligence. I admit I’m partly responsible for this, having championed the idea back in the 1970’s. Yet, for a good many years now, I’ve had a hunch that the idea is wrong.


There are too many awkward facts that don’t fit in. For a start, we know that modern humans can be born with only two thirds the normal volume of brain tissue, and show next to no cognitive deficit as adults. We know that, during normal human brain development, the brain actually shrinks as cognitive performance improves (a notable example being changes in the “social brain” during adolescence, where the cortical grey matter decreases in volume by about 15% between age 10 and 20). And most surprising of all, we know that there are nonhuman animals, such as honey bees or parrots, that can emulate many feats of human intelligence with brains that are only a millionth (bee) or a thousandth (parrot) the size of a human’s.


Biochemist Michael Denton offers some insights in The Miracle of Man (2022). Although whales (10 kg) and elephants (6 kg) have the biggest brains, primates, including monkeys, have far more cortical neurons relative to brain size than expected. Humans, not surprisingly, have the highest information-processing capacity of any life form.


What About Making Humans Smarter?

Neuroscience researcher Michel Hofman describes the human brain as “one of the most complex and efficient structures in the animated universe.” Denton, noting that a cubic millimetre of human brain features sixty times as many synaptic connections as a 747 jetliner has components, goes on to say,


Many authors have concluded that it may be very nearly the most intelligent/advanced biological brain possible. That is, its information-processing capacity may be close to the maximum of any brain built on biological principles, made of neurons, axons, synapses, dendrites, etc., and nourished by glial cells and provided with oxygen via circulation. For example, Peter Cochrane and his colleagues, in a widely cited paper, conclude “that the brain of Homo sapiens is within 10–20% of its absolute maximum before we suffer anatomical and/or mental deficiencies and disabilities. We can also conclude that the gains from any future drug enhancements and/or genetic modification will be minimal.” Hofman concurs: “We are beginning to understand the geometric, biophysical, and energy constraints that have governed the evolution of these neuronal networks. In this review, some of the design principles and operational modes will be explored that underlie the information processing capacity of the cerebral cortex in primates, and it will be argued that with the evolution of the human brain we have nearly reached the limits of biological intelligence.”


If Hofman, Cochrane and colleagues, and Denton are right, recent proposals to produce human superintelligence “within a decade” via genetic engineering are doomed.
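
Incidentally, Denton’s 747 comparison above is easy to put in rough numbers. A minimal sketch, assuming the widely quoted estimate of about six million parts for a Boeing 747 (my assumption; the text gives only the “sixty times” multiplier):

```python
# Rough arithmetic behind the 747 comparison. The ~6,000,000-part
# count for a Boeing 747 is a commonly quoted estimate, assumed here;
# the factor of 60 is the one given in the text.
parts_747 = 6_000_000
synapses_per_mm3 = 60 * parts_747
print(f"~{synapses_per_mm3:,} synaptic connections per cubic millimetre")
# -> ~360,000,000 synaptic connections per cubic millimetre
```

That is hundreds of millions of connections in a speck of tissue the size of a poppy seed, if the figures hold.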


Read the rest at Mind Matters News, published by Discovery Institute’s Bradley Center for Natural and Artificial Intelligence.

The sincerest form of flattery?

 If Nanomotors Are Designed, Why Not Biomotors?

David Coppedge


Engineers know how much effort goes into making an object spin that is just a few nanometers wide. One would think, then, that they would stand in awe of biomolecular machines that do much more — machines that perform functions, are linked into signal transduction pathways, and can reproduce themselves. Wouldn’t it be a refreshing change to have them admit that biological motors look intelligently designed?


Watch the new nanomotor built by engineers at the University of Texas at Austin. It goes around, and around… and around. 


The new motor is less than 100 nanometers wide, and it can rotate on a solid substrate under light illumination. It can serve as a fuel-free and gear-free engine to convert light into mechanical energy for various solid-state micro-/nano-electro-mechanical systems. [Emphasis added.]


ATP synthase, though, is almost an order of magnitude smaller, and it does much more than rotate. Powered by a flow of protons, it turns a crankshaft that builds three ATP molecules per revolution. It is anchored in the inner mitochondrial membrane in animals and in the thylakoid membrane of chloroplasts in plants. A plant’s “fuel-free” nanomachines run on light, too. And Brownian motion doesn’t slow ATP synthase down, a problem the UT engineers did have to worry about.


“Nanomotors help us to precisely control the nanoworld and make up new things we want for our real world,” said Jingang Li, a PhD graduate from Zheng’s group and the lead author of this study.


Biological machines are part of the real world, aren’t they? Is Dr. Li aware that trillions of rotary engines are spinning in his own body as he speaks? The publicist does give a little credit to biology:


The reason scientists are so enamored with creating these tiny motors is because they mimic some of the most important biological structures. In nature, these motors drive the division of cells and help them move. They combine to help organisms move.


OK. But earlier, Zheng, the associate professor whose group built the motor, said, “Life started in the water and eventually moved on land” — presumably all by itself. If the UT team really wanted to mimic biological machines, why not toss some chemical elements in water and wait a few billion years for them to move onto land?


Cheap Imitation

In New Scientist, a reporter boasts that a “Tiny nanoturbine is an autonomous machine smaller than most bacteria.” Credit is given to a rotating enzyme (presumably ATP synthase) for the inspiration:


Cees Dekker at Delft University of Technology in the Netherlands and his colleagues created the turbine after being inspired by a rotating enzyme that helps catalyse energy-storing molecules in our cells. They wanted to build a molecular machine that could similarly do work, like adding energy to biological processes or moving other molecules, without having to be repeatedly pushed or manipulated in some way.


Their little nanoturbine, just 25 nm in diameter, can extract energy from salt water and rotate at 10 rpm. The article doesn’t mention that ATP synthase is half that size and runs at up to 6,000 rpm, without the problems of random thermal fluctuations that make the nanoturbine difficult to control.
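
Putting the quoted figures together (three ATP per revolution, up to 6,000 rpm) gives a throughput that is easy to check. A quick sketch using only the numbers cited in these two passages:

```python
# Back-of-envelope throughput for ATP synthase, using only the figures
# quoted above: three ATP built per revolution, up to 6,000 rpm, versus
# the Delft nanoturbine's 10 rpm.
rpm = 6_000
atp_per_revolution = 3
revs_per_second = rpm / 60                  # 100 revolutions per second
atp_per_second = revs_per_second * atp_per_revolution
speed_ratio = rpm / 10                      # ATP synthase rpm vs. nanoturbine rpm
print(f"~{atp_per_second:.0f} ATP molecules per second, "
      f"{speed_ratio:.0f}x the nanoturbine's rotation rate")
# -> ~300 ATP molecules per second, 600x the nanoturbine's rotation rate
```

If those numbers are right, each enzyme turns out roughly 300 ATP molecules every second while spinning some 600 times faster than the human-engineered turbine.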


“This is not that different than an engine you have in your car,” says Dekker. “You put in gasoline, you get mechanical work. With the nanoturbine, you add the salt mixture, you get mechanical work, namely rotations.” The researchers also found that they could power the turbine by exposing it to electric voltage or having flowing water turn it much like wind turns a windmill.


These structural chemists surely know that cars are intelligently designed. Why is there hesitancy to say that superior engineering design is found in the biological motors that inspired them?


Better than Nature?

A news release from the University of California, Riverside, claims that nature has been bested. Scientists there say that they have built an “artificial photosynthesis” system that could be much more efficient at improving crop yields than biological photosynthesis:


Photosynthesis has evolved in plants for millions of years to turn water, carbon dioxide, and the energy from sunlight into plant biomass and the foods we eat. This process, however, is very inefficient, with only about 1% of the energy found in sunlight ending up in the plant. Scientists at UC Riverside and the University of Delaware have found a way to bypass the need for biological photosynthesis altogether and create food independent of sunlight by using artificial photosynthesis.


The way that’s worded, it sounds as if they just stumbled on “a way” to improve on nature. A look at the Methods section of their paper in Nature Foods, though, shows a highly intricate procedure for preparing the setup: anodes, cathodes, a flow electrolyzer, and other parts using multiple elements in precisely arranged ways. Even so, their system only makes acetate (C2H3O2−) — a relatively simple two-carbon compound — nothing like the complex carbohydrates made by plants. If certain plants can use acetate to grow their complex molecules without photosynthesis, fine; but that’s a far cry from what plants do on their own.


The researchers admit their device was “engineered.” It may find application in places where crop plants are hard to grow, such as on a spacecraft. But growing the food will require the elaborate biochemistry in plant machines; it will not work on electrolysis alone. To fulfill their boast, now let the engineers code molecules that will build their devices from soil and deliver acetate to food plants automatically and in the right proportions. Then let them engineer a way to package the code in seeds.


Information Please

Researchers at the Howard Hughes Medical Institute have created a “DNA Typewriter” that “taps out a record inside cells.” It allows them to store messages in DNA code.


While developing a new system for recording within cells, geneticist Jay Shendure and his team decided to give it a test run by using it to encode text. Since their invention relied on a nearly brand-new recording medium, DNA, they wanted to use messages that evoked a sense of historical significance.


Two choices were obvious: “What hath God wrought?,” a Biblical quote used by Samuel Morse in the first long-distance telegraph transmission, and the more mundane, “Mr. Watson, come here!” spoken by Alexander Graham Bell to his assistant in the first telephone call.


A line from Dickens was also considered, but a Korean member of the team won out with a line from a K-pop song. The team hopes to use the technique to genetically engineer cells that carry unique barcodes and keep records of their own activity. But doesn’t the sequence of letters in natural DNA qualify as a text?
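
For readers curious how ordinary text can ride on DNA’s four-letter alphabet at all, here is a minimal sketch at two bits per base. It is purely illustrative; it is not the actual recording scheme Shendure’s team uses inside cells:

```python
# Toy text-to-DNA codec: 2 bits per base, so 4 bases per ASCII byte.
# Purely illustrative; NOT the DNA Typewriter's actual recording scheme.
BASES = "ACGT"

def encode(text: str) -> str:
    dna = []
    for byte in text.encode("ascii"):
        for shift in (6, 4, 2, 0):          # high bits first
            dna.append(BASES[(byte >> shift) & 0b11])
    return "".join(dna)

def decode(dna: str) -> str:
    out = bytearray()
    for i in range(0, len(dna), 4):
        byte = 0
        for base in dna[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return out.decode("ascii")

message = "What hath God wrought?"
strand = encode(message)
assert decode(strand) == message
print(strand[:40] + "...")                  # first 40 of the strand's 88 bases
```

A real system must also contend with synthesis errors, sequencing errors, and constraints on base composition, all of which this toy codec ignores.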


Rightly So

In each of these examples, molecular engineers showed great pride in their achievements, and rightly so. They considered how their inventions might be used for the good of mankind. Not one of them, though, drew the most logical inference: that the very biological cells that inspired their work were also engineered by design. Maybe some day soon they will not be ashamed to say so. Many great scientists used to proclaim it without hesitation.


Saturday, 16 July 2022

The war in science?

 The Gollum Effect in Science, from Tycho Brahe to Today

Evolution News @DiscoveryCSC


On a new episode of ID the Future, host Andrew McDiarmid sits down with historian and philosopher of science Michael Keas to discuss a recent article at Times Higher Education, “My Precious! How Academia’s Gollums Guard Their Research Fields.” The article looks at how scientific progress is being impeded by a culture in which scientists jealously guard their research instead of sharing it. Keas says the problem seems to have gotten worse in recent years but isn’t a new one. He illustrates with the story of Tycho Brahe and Johannes Kepler.


Brahe, a 16th-century Danish astronomer, sat on his astronomical observations for years rather than share them with his assistant, Johannes Kepler. Kepler got hold of them only when Brahe died unexpectedly, shortly after a banquet. A rumor arose that Brahe had been poisoned to free up access to his research, the data that eventually allowed Kepler to make his revolutionary breakthrough: the three laws of planetary motion that cinched the case for a sun-centered model of the solar system.


Keas explains what a later autopsy revealed about Brahe’s cause of death. And he discusses some modern-day power plays involving evolutionists jealously guarding the Darwinian paradigm against those who would challenge it. Finally, Keas enumerates some of the virtues that can help further the progress of science, including generosity and a humble willingness to listen to criticism.


Download the podcast or listen to it here. For more surprising facts from the history of science, check out Keas’s recent book, Unbelievable: 7 Myths About the History and Future of Science and Religion.