
Tuesday, 30 January 2024

Has ID made Darwin skepticism less unfashionable?

 Dembski and Ruse Look Back on 20 Years of Debate — And a Special Anniversary


In 2004 Cambridge University Press published Debating Design: From Darwin to DNA, edited by William Dembski and Michael Ruse, a brilliant, landmark anthology showcasing the vibrancy of the debate between intelligent design and evolution. Contributors included Angus Menuge, Kenneth Miller, Elliott Sober, Robert Pennock, Stuart Kauffman, Paul Davies, John Polkinghorne, Richard Swinburne, Walter Bradley, and Stephen Meyer. In 2024, it seemed worthwhile to look back at the two intervening decades and see how the debate has developed. So with great pleasure I invited Dembski and Ruse for a conversation on my podcast.

When Debating Design was published, an ambiguity hung over it. Was this the beginning of a new chapter for ID? Or was it a swansong? Critics believed there were good reasons to think ID would peter out. The New Atheism — new at the time, in the years immediately following 9/11, which called it into being — was on the rise, enjoying far more popularity than ID did. People read New Atheist books and came away feeling courageous and victorious. Yet the New Atheism has since turned passé. Nothing guaranteed that ID would not suffer the same fate.

Intelligent Design in Two Senses

Twenty years later, ID is still here. How did it persevere? The secret, I think, has something to do with ID being the flip side of discontent with Darwinian orthodoxy. Design can be thought of in two senses: a strict one and an expansive one. The strict sense is design as advocated by ID proponents. It is the positive case for ID. Most atheist-leaning scientists remain averse to this. The expansive sense is design as a critique of current evolutionary theory, pointing to the latter’s difficulties in explaining features of biology. That explanatory weakness is not, per se, evidence of design, but it does cause one to wonder about the possibility of intelligent design. Many scientists acknowledge the shortcomings of current neo-Darwinian theory. These two senses of design were discernible in Debating Design.

Debating Design gathered essays arguing respectively for four different viewpoints: Darwinism, Complex Self-Organization, Theistic Evolution, and Intelligent Design. As presented in the book, the ID viewpoint argued forcefully for design in nature. It is also true that out of the four viewpoints represented, three recognized that Darwinism by itself was an insufficient explanation for the sublime complexities in life. Seen this way, design in its expansive sense, reflected in a mass of pointed, well-reasoned criticisms, dwarfs Darwinism. By leveraging design in its expansive sense, ID proper provided a platform for scientific discussion. ID then became a viable option by championing the strict sense of design. Critics of ID who are nonetheless skeptical of Darwinism can be read as agreeing with ID that in principle evolutionary theory is not the end-all explanation for biological complexity. 

Growing Doubt About Darwin

Since 2004, doubts about neo-Darwinian mechanisms have only grown. In 2014, Laland, Uller, Feldman, et al. published an influential article in Nature calling for an urgent rethink of evolutionary theory. They wrote:

The number of biologists calling for change in how evolution is conceptualized is growing rapidly. Strong support comes from allied disciplines, particularly developmental biology, but also genomics, epigenetics, ecology and social science. We contend that evolutionary biology needs revision if it is to benefit fully from these other disciplines. The data supporting our position gets stronger every day.

Yet the mere mention of the EES often evokes an emotional, even hostile, reaction among evolutionary biologists. Too often, vital discussions descend into acrimony, with accusations of muddle or misrepresentation. Perhaps haunted by the spectre of intelligent design, evolutionary biologists wish to show a united front to those hostile to science. Some might fear that they will receive less funding and recognition if outsiders — such as physiologists or developmental biologists — flood into their field.

…This is no storm in an academic tearoom, it is a struggle for the very soul of the discipline.

Laland, K., Uller, T., Feldman, M., et al., “Does evolutionary theory need a rethink?” Nature 514, 161–164 (2014)

The protest about the “spectre of intelligent design” was telling. When critics start talking that way, as if looking over their shoulder, you can’t help wondering if the ID program is onto something. Darwinian mechanisms, as they stand today, are widely recognized as out of tune with the latest scientific discoveries in a variety of fields. This lends credence to ID, in its expansive sense, as the passage above grudgingly lets slip. The ID community can take pride in the fact that, from the very beginning, it has consistently pointed out the inadequacies of evolutionary thought. Contrast this with the fledgling viewpoints now popping up in the scientific community, where weaknesses in the evolutionary narrative are belatedly recognized under euphemisms: “puzzles” rather than “problems,” in need of “revisions” rather than a brand-new perspective. It is this trend that has encouraged a certain conciliatory quality in the debate.

Eschewing Polarization

As I listened to Dembski and Ruse, what struck me most was how their views today eschew the bitter polarization that characterized earlier discussions of ID. In the conversation, Ruse acknowledged that contemporary science has not explained how molecules could have led to mind. The mind-body problem is a fundamental question that science cannot answer. Ruse also explicitly distanced himself from the crude materialism championed by Daniel Dennett, one of the New Atheism’s Four Horsemen. While crucial differences between them remain, as Dembski aptly highlighted, overall the discussion was characterized by amiable restraint, cordiality, and even chumminess. Moreover, Dembski is a Christian, Ruse is an agnostic, and I am a Muslim, and it is significant that all three of us can take ID seriously as a topic deserving of critical engagement rather than dismissive caricaturing. It can be hoped that ID will continue making inroads in the evolution debate. Grand victories are not needed. Slow but steady will be far more productive than a quick dash to the finish line.

Any biological theory that cannot adequately explain the “appearance” (as Richard Dawkins puts it) of design cannot adequately explain life. ID positions itself as explaining this appearance — seen as actual, not illusory — while allowing ample space for competing non-ID theories to air their dissatisfaction with Darwinian orthodoxy. The stricter sense of design can thus be defended as a more reasonable and more intuitively straightforward option among this plurality of the dissatisfied. Using this strategy, the ID community can hope to bolster its credibility and contribute to a greater push for a paradigm shift in origins science.

The continuation of ID across two decades where New Atheism failed shows that ID is closer to the scientific enterprise than the New Atheism ever was. The debate is not over. But as of now, it is clear that ID is heading in the right direction. Theodosius Dobzhansky’s famous saying, “Nothing in biology makes sense except in the light of evolution,” has gradually evolved into “Nothing in biology makes sense except in the light of design.” 

Why we wait in vain for the arrival of our AI overlords


Artificial General Intelligence: The Poverty of the Stimulus


In this series so far at Evolution News about Artificial General Intelligence, my references to AGI worshippers and idolaters will be off-putting to those who think the claim that AGI will someday arrive, whatever its ultimate ETA, is an intellectually credible and compelling position. To such readers, I’m just being insulting by using pejorative religious language to describe AGI’s supporters, to say nothing of being a Luddite for not cheering on AGI’s ultimate triumph. I want therefore to spend some space here indicating why AGI does not deserve to be taken seriously.
Let’s begin with a point on which the linguist Noam Chomsky built his career, which may be encapsulated in the phrase “the poverty of the stimulus.” His point with this phrase was that humans learn language with a minimum of input, and thus must be endowed with an in-built capacity (“hardwired”) to acquire and use language. Infants see and hear adults talk and pick up language easily and naturally. It doesn’t matter if the caregivers pay special attention to the infant and provide extra stimulation so that their child can be a “baby Einstein.” It doesn’t matter if the caregivers are neglectful or even abusive. It doesn’t even matter if the child is blind, deaf, or both. Barring developmental disorders (such as some forms of autism), the child can learn language.

But It’s Not Just the Ability to Learn Language

“The poverty of the stimulus” underscores that humans do so much more with so much less than would be expected unless humans have an innate ability to learn language with minimal inputs. And it’s not just that we learn language. We gain knowledge of the world, which we express through language. Our language is especially geared to express knowledge about an external reality. This “aboutness” of the propositions we express with language is remarkable, especially on the materialist and mechanistic grounds so widely accepted by AGI’s supporters. 

As G. K. Chesterton noted in his book Orthodoxy, we have on materialist grounds no right “to assert that our thoughts have any relation to reality at all.” Matter has no way to guarantee that when matter thinks (if it can think), it will tell us true things about matter. On Darwinian materialist grounds, all we need is differential reproduction and survival. A good delusion that gets us to survive and reproduce is enough. Knowledge of truth is unnecessary and perhaps even undesirable. 

The philosopher Willard Quine, who was a materialist, made essentially the same point in what he called “the indeterminacy of translation.” Quine’s thesis was that translation, meaning, and reference are all indeterminate, implying that there are always valid alternative translations of a given sentence. Quine presented a thought experiment to illustrate this indeterminacy. In it, a linguist tries to determine the meaning of the word “gavagai,” uttered by a speaker of a yet-unknown language in response to a rabbit running by. Is the speaker referring to the rabbit, the rabbit running, some rabbit part, or something unrelated to the rabbit? All of these are legitimate possibilities according to Quine, and they render language fundamentally indeterminate.

Yet such arguments about linguistic indeterminacy are always self-referentially incoherent. When Quine writes of indeterminacy of translation in Word and Object (1960), and thus also embraces the inscrutability of reference, he is assuming that what he is writing on these topics is properly understood one way and not another. And just to be clear, everybody is at some point in the position of a linguist because, in learning our mother tongue, we all start with a yet-unknown language. So Quine is tacitly making Chomsky’s point, which is that with minimal input — which is to say with input that underdetermines how it might be interpreted — we nevertheless have a knack for finding the right interpretation and gaining real knowledge about the world.

Chomsky’s poverty of the stimulus is regarded as controversial by some because an argument can be made that the stimuli that lead to learning, especially language learning, may in fact be adequate without having to assume a massive contribution of innate capabilities. Chomsky came up with this notion in the debate over behaviorism, which sought to characterize all human capacities as the result of stimulus-response learning. Language, according to the behaviorists, was thus characterized as verbal behavior elicited through various reinforcement schedules of rewarded and discouraged behaviors. In fact, Chomsky made a name for himself in the 1950s by reviewing B. F. Skinner’s book Verbal Behavior. That review is justly famous for demolishing behaviorist approaches to language (the field never recovered after Chomsky’s demolition).

If Chomsky Is Right

But suppose we admit that the controversy about whether the stimuli by which humans learn language are impoverished has yet to be fully resolved. If Chomsky is right, those stimuli are in some sense impoverished. If his critics are right, they are adequate without needing to invoke extraordinary innate capacities. Yet if we leave aside the debate between Chomsky’s nativism and Skinner’s behaviorism, it’s nonetheless the case that such stimuli are vastly smaller in number than what artificial neural nets need to achieve human-level competence.

Consider LLMs, large language models, which are currently the rage, and of which ChatGPT is the best known and most widely used. GPT-4, the model behind ChatGPT, reportedly uses 1.76 trillion parameters, and its training set is based on hundreds of billions of words (perhaps a lot more, but that was the best lower-bound estimate I was able to find). Obviously, individual humans gain their language facility with nowhere near this scale of inputs. If a human child were able to process 200 words per minute and did so continuously, then by the age of ten the child would have processed 200 × 60 × 24 × 365 × 10, or roughly a billion, words. Of course, this is a vast overestimate of the child’s language exposure, ignoring sleep, repetitions, and lulls in conversation.
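To make that comparison concrete, here is a back-of-the-envelope calculation in Python. The child figure is the deliberately generous upper bound from the paragraph above; the 300-billion-word corpus size is an assumed stand-in for “hundreds of billions” and is used purely for illustration.

```python
# Rough comparison of a child's maximum word exposure with an LLM training corpus.
# The corpus size below is an assumed illustrative figure, not an official number.

words_per_minute = 200                     # generous, continuous processing rate
minutes_per_year = 60 * 24 * 365
child_words_by_age_10 = words_per_minute * minutes_per_year * 10

llm_training_words = 300_000_000_000       # assumed: "hundreds of billions" of words

print(f"Child (upper bound, by age 10): {child_words_by_age_10:,} words")   # ~1.05 billion
print(f"LLM training corpus (assumed):  {llm_training_words:,} words")
print(f"Ratio: roughly {llm_training_words // child_words_by_age_10}x more data for the LLM")
```

Even with every assumption tilted in the machine’s favor, the corpus comes out hundreds of times larger than the most extravagant estimate of a child’s exposure.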

Or consider Tesla, which since 2015 has been promising fully autonomous vehicles as just on the horizon. Full autonomy keeps eluding the grasp of Tesla engineers, though the word on the street is that self-driving is getting better and better (as with a reported self-driving taxi in San Francisco, albeit by Waymo rather than Tesla). But consider: To aid in developing autonomous driving, Tesla processes 160 billion video frames each day from the cameras on its vehicles. This massive amount of data, used to train the neural network to achieve full self-driving, is obviously many orders of magnitude beyond what humans require to learn to drive effectively.

Erik Larson’s book The Myth of Artificial Intelligence (Harvard, 2021) is appropriately subtitled Why Computers Can’t Think the Way We Do. Whatever machines are doing when they exhibit intelligence comparable to humans, they are doing it in ways vastly different from what humans are doing. In particular, the neural networks in the news today require huge amounts of computing power and huge amounts of input data (generated, no less, from human intelligent behavior). It’s no accident that artificial intelligence’s major strides in recent years fall under Big Tech and Big Data. The “Big” here is far bigger than anything available to individual humans. 

Domain Specificity

The sheer scale of efforts needed to make artificial intelligence impressive suggests human intelligence is fundamentally different from machine intelligence. But reasons to think the two are different don’t stop there. Domain specificity should raise additional doubts about the two being the same. When Elon Musk, for instance, strives to bring about fully autonomous (level 5) driving, it is by building neural nets that every week must sort through a trillion images taken from Tesla automobiles driving in real traffic under human control. Not only is the amount of data to be analyzed staggering, but it is also domain specific, focused entirely on developing self-driving automobiles. 

Indeed, no one thinks that the image data being collected from Tesla automobiles and then analyzed by neural nets to facilitate full self-driving is also going to be used for automatically piloting a helicopter or helping a robot navigate a ski slope, to say nothing of playing chess or composing music. All our efforts in artificial intelligence are highly domain specific. What makes LLMs, and ChatGPT in particular, so impressive is that language is such a general instrument for expressing human intelligence. And yet, even the ability to use language in a contextually relevant way based on huge troves of humanly generated data is still domain specific.

The French philosopher René Descartes, even though he saw animal bodies, including human bodies, as machines, nonetheless thought that the human mind was non-mechanical. Hence, he posited a substance dualism in which a non-material mind interacted with a material body, at the pineal gland no less. How a non-material mind could interact with a material/mechanical body Descartes left unanswered (invoking the pineal gland did nothing to resolve that problem). And yet, Descartes regarded the mind as irreducible to matter/mechanism. As he noted in his Discourse on Method (1637, pt. 5, my translation):

Although machines can do many things as well as or even better than us, they fail in other ways, thereby revealing that they do not act from knowledge but solely from the arrangement of their parts. Intelligence is a universal instrument that can meet all contingencies. Machines, on the other hand, need a specific arrangement for every specific action. In consequence, it’s impossible for machines to exhibit the diversity needed to act effectively in all the contingencies of life as our intelligence enables us to act.

Descartes was here making exactly the point of domain specificity. We can get machines to do specific things — to be wildly successful in a given, well-defined domain. Chess playing is an outstanding example, with computer chess now vastly stronger than human chess (though, interestingly, having such strong chess programs has also vastly improved the quality of human play). But chess programs play chess. They don’t also play Minecraft or Polytopia. Sure, we could create additional artificial intelligence programs that also play Minecraft and Polytopia, and then we could kludge them together with a chess playing program so that we have a single program that plays all three games. But such a kludge offers no insight into how to create an AGI that can learn to play all games, to say nothing of being a general-purpose learner, or what Descartes called “a universal instrument that can meet all contingencies.” Descartes was describing AGI. Yet artificial intelligence in its present form, even given the latest developments, is not even close. 
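To make the point vivid, here is what such a kludge amounts to in code: a hypothetical dispatcher that routes each game to its own pre-built specialist. Every name below is a placeholder invented for illustration; none corresponds to a real engine or library. The dispatcher learns nothing, and when it meets a domain with no specialist it simply fails.

```python
# A hypothetical "kludge" of domain-specific programs, not a general learner.
# chess_engine, minecraft_agent, and polytopia_agent are placeholder stand-ins
# for separately built, separately trained systems.

def chess_engine(position: str) -> str:
    return "e2e4"            # canned placeholder move

def minecraft_agent(observation: str) -> str:
    return "mine_block"      # canned placeholder action

def polytopia_agent(game_state: str) -> str:
    return "move_unit"       # canned placeholder action

SPECIALISTS = {
    "chess": chess_engine,
    "minecraft": minecraft_agent,
    "polytopia": polytopia_agent,
}

def kludged_ai(domain: str, state: str) -> str:
    """Dispatch to a pre-built specialist; there is no capacity to learn a new domain."""
    if domain not in SPECIALISTS:
        raise NotImplementedError(f"No specialist for {domain!r}, and nothing here can learn one.")
    return SPECIALISTS[domain](state)

print(kludged_ai("chess", "startpos"))       # works only because a specialist already exists
try:
    print(kludged_ai("go", "empty board"))
except NotImplementedError as err:
    print(err)                               # a new domain exposes the absence of general learning
```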

Elon Musk Appreciates the Problem

He is therefore building Optimus, also known as the Tesla Bot. The goal is for it to become a general-purpose robotic humanoid. By having to be fully interactive with the same environments and sensory inputs as humans, such a robot could serve as a proof of concept for Descartes’s universal instrument and thus AGI. What if such a robot could understand and speak English, drive a car safely, not just play chess but learn other board games, have facial features capable of expressing what in humans would be appropriate affect, play musical instruments, create sculptures and paintings, do plumbing and electrical work, etc.? That would be impressive and take us a long way toward AGI. And yet, Optimus is for now far more modest: the robot is intended to be capable of performing tasks that are “unsafe, repetitive, or boring.” That is a far cry from AGI.

AGI is going to require a revolution in current artificial intelligence research, showing how to overcome domain specificity so that machines can learn novel skills and tasks for which they were not explicitly programmed. And just to be clear, reinforcement learning doesn’t meet this challenge. Take AlphaZero, a program developed by DeepMind to play chess, shogi, and Go, which improved its game by playing millions of games against itself using reinforcement learning (which is to say, it rewarded winning and penalized losing). This approach allows the program to learn and improve without ongoing human intervention, leading to significant advances in computer game-playing ability. But it depends on the game being neatly represented in the state of a computer, along with clear metrics for what constitutes good and bad play.
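As a concrete illustration of what that dependence looks like (a toy sketch, not AlphaZero, which uses deep neural networks and tree search), here is a minimal self-play Q-learning example in Python for a simple Nim-style game. Notice how much of the setup consists of the game state and the win/loss signal being neatly representable inside the computer; the learning rule itself is almost an afterthought.

```python
import random
from collections import defaultdict

# Toy self-play reinforcement learning sketch (not AlphaZero): tabular
# Q-learning on a simple Nim-style game. State = stones remaining;
# a move takes 1-3 stones; whoever takes the last stone wins.

N_STONES = 15
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
Q = defaultdict(float)   # Q[(stones_left, take)] -> value from the mover's perspective

def legal_moves(stones):
    return list(range(1, min(3, stones) + 1))

def choose(stones):
    moves = legal_moves(stones)
    if random.random() < EPSILON:
        return random.choice(moves)                    # explore
    return max(moves, key=lambda m: Q[(stones, m)])    # exploit

def update(stones, move, reward, next_stones):
    # Negamax backup: whatever is good for the opponent next turn is bad for us.
    future = 0.0
    if next_stones > 0:
        future = -max(Q[(next_stones, m)] for m in legal_moves(next_stones))
    Q[(stones, move)] += ALPHA * (reward + GAMMA * future - Q[(stones, move)])

for _ in range(50_000):              # self-play: the same Q-table plays both sides
    stones = N_STONES
    while stones > 0:
        move = choose(stones)
        next_stones = stones - move
        reward = 1.0 if next_stones == 0 else 0.0      # the winning move is rewarded;
        update(stones, move, reward, next_stones)      # losing lines are penalized via the backup
        stones = next_stones

# After training, the greedy policy should avoid leaving itself a losing position
# (multiples of 4 are losing for the player to move in this game).
for s in range(1, N_STONES + 1):
    best = max(legal_moves(s), key=lambda m: Q[(s, m)])
    print(f"{s} stones left -> take {best}")
```

Nothing here transfers to any other game: change the rules and the state, the moves, and the reward all have to be redefined by hand.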

The really challenging work of current artificial intelligence research is taking the messy real world and representing it in domain-specific ways so that the artificial intelligence created can emulate humans at particular tasks. The promise of AGI is somehow to put all these disparate artificial intelligence efforts together, coming up with a unified solution to computationalize all human tasks and capacities in one fell swoop. We have not done this, are nowhere close to doing this, and have no idea of how to approach doing this.

On accurately measuring the invisible.

 

JEHOVAH'S Favorite type of prayer?

In your brother servant's very fallible and unauthoritative opinion:

2 Chronicles Ch.1:11,12 NIV "God said to Solomon, “Since this is your heart’s desire and you have not asked for wealth, possessions or honor, nor for the death of your enemies, and since you have not asked for a long life but for wisdom and knowledge to govern my people over whom I have made you king, 12therefore wisdom and knowledge will be given you. And I will also give you wealth, possessions and honor, such as no king who was before you ever had and none after you will have.”"

Proverbs Ch.2:3-5 NIV "indeed, if you call out for insight

and cry aloud for understanding,

4and if you look for it as for silver

and search for it as for hidden treasure,

5then you will understand the fear of the Lord

and find the knowledge of God."

Take it from one who knows.

James Ch.1:5,6 NIV "If any of you lacks wisdom, you should ask God, who gives generously to all without finding fault, and it will be given to you. 6But when you ask, you must believe and not doubt, because the one who doubts is like a wave of the sea, blown and tossed by the wind."