Artificial General Intelligence: Machines vs. Organisms
In this series on Artificial General Intelligence, it may seem that I’m picking on Ray Kurzweil too much. But he and I have been crossing paths for a long time. Over the last few years, we have frequented the same Seattle-area tech conference, COSM, where we both speak, albeit on opposite sides of the question of artificial intelligence. We also took sharply divergent positions on the Stanford campus back in 2003 at the Accelerating Change Conference, a transhumanist event organized by John Smart. Yet our first encounter goes back to 1998, at one of George Gilder’s Telecosm conferences.
From “Intelligent” to “Spiritual”
At Telecosm in 1998, I moderated a discussion of Ray Kurzweil’s then forthcoming book, The Age of Spiritual Machines, which was in press at the time. Kurzweil had previously written The Age of Intelligent Machines (1990); by substituting “spiritual” for “intelligent,” he was clearly taking an even more radical line on the future of artificial intelligence. In his presentation, he described how machines were poised to match and then exceed human cognition, a theme he has hammered on ever since. For Kurzweil, it is inevitable that machines will match and then exceed us: Moore’s Law guarantees that machines will attain the computational power needed to simulate our brains, after which the challenge will be for us to keep pace with machines, a challenge at which he sees us as destined to fail because wetware, in his view, cannot match hardware. Our only recourse for survival will thus be to upload ourselves digitally.
Kurzweil’s respondents at the Telecosm discussion were John Searle, Thomas Ray, and Michael Denton, and they were all to varying degrees critical of his strong AI view, or what we would now call his AGI view. Searle rehearsed his Chinese Room thought experiment to argue that computers don’t, and indeed can’t, actually understand anything, an argument that remains persuasive and applies to recent chatbots, such as ChatGPT. But the most interesting response to Kurzweil came, in my view, from Denton. He offered an argument about the complexity and richness of individual neurons, pointing out how inadequate our understanding of them is and how even more inadequate our ability is to computationally model them. At the end of the discussion, however, Kurzweil’s confidence in the glowing prospects for strong AI’s (AGI’s) future remained undiminished. And indeed, it remains undiminished to this day. The entire exchange, suitably expanded and elaborated, appeared in Jay Richards’s edited collection Are We Spiritual Machines?
Denton’s Powerful Argument
I want here to focus on Denton’s argument, because it remains relevant and powerful. Kurzweil is a technophile in that he regards building and inventing technology, and above all machines, as the greatest thing humans do. But he’s also a technobigot in that he regards people of the past, who operated with minimal technology, as vastly inferior and less intelligent than we are. He ignores how much such people were able to accomplish through sheer ingenuity given how little they had to work with. He thus minimizes the genius of a Homer, the exploration of the Pacific by South Sea Islanders, or the knowledge of herbs and roots of indigenous peoples captured in oral traditions, etc. For examples of the towering intelligence of non-technological people, I encourage readers to check out Robert Greene’s Mastery.
Taken with the power and prospects of artificial intelligence, Kurzweil thinks that ChatGPT will soon write better prose and poetry than we do. Moreover, by simulating our human bodies, medical science will, according to him, be able to develop new drugs and procedures without having to experiment on our actual bodies. He seems unconcerned that such simulations may miss something crucial about ourselves and thus lead to medical procedures and drugs that backfire, doing more harm than good. Kurzweil offered such blithe assurances about AGI at the 2023 COSM conference.
Whole organisms and even individual cells are nonlinear dynamical systems, and there’s no evidence that computers can adequately simulate them. Even single neurons, which for Kurzweil and Marvin Minsky make up a computer made of meat (i.e., the brain), are beyond the simulating powers of any computers we know or can envision. A given neuron will soon enough behave in ways that are unpredictable and inconsistent with any machine model of it. Central to Denton’s argument against Kurzweil’s strong AI (AGI) view back in 1998 was the primacy of the organism over the machine. That argument remains persuasive. Rather than paraphrase it, I’ll use Denton’s own words (from his essay “Organism and Machine” in Jay Richards, ed., Are We Spiritual Machines? Ray Kurzweil vs. the Critics of Strong A.I.):
Living things possess abilities that are still without any significant analogue in any machine which has yet been constructed. These abilities have been seen since classical times as indicative of a fundamental division between the [organismal] and mechanical modes of being.
To begin with, every living system replicates itself, yet no machine possesses this capacity even to the slightest degree… Every second countless trillions of living systems from bacterial cells to elephants replicate themselves on the surface of our planet. And since life’s origin, endless life forms have effortlessly copied themselves on unimaginable numbers of occasions.
Living things possess the ability to change themselves from one form into another. For instance, during development the descendants of the egg cell transform themselves from undifferentiated unspecialized cells into [widely different cells, some with] long tentacles like miniature medusae some hundred thousand times longer than the main body of the cell…
To grasp just how fantastic [these abilities of living things] are and just how far they transcend anything in the realm of the mechanical, imagine our artifacts endowed with the ability to copy themselves and … “morph” themselves into different forms. Imagine televisions and computers that duplicate themselves effortlessly and which can also “morph” themselves into quite different types of machines [such as into a microwave or helicopter]. We are so familiar with the capabilities of life that we take them for granted, failing to see their truly extraordinary character.
Even the less spectacular self-reorganizing and self-regenerating capacities of living things … should leave the observer awestruck. Phenomena such as … the regeneration of the limb of a newt, the growth of a complete polyp, or a complex protozoan from tiny fragments of the intact animal are … without analogue in the realm of mechanism…
Imagine a jumbo jet, a computer, or indeed any machine ever conceived, from the fantastic star ships of science fiction to the equally fantastic speculations of nanotechnology, being chopped up randomly into small fragments. Then imagine every one of the fragments so produced (no two fragments will ever be the same) assembling itself into a perfect but miniaturized copy of the machine from which it originated — a tiny toy-sized jumbo jet from a random section of the wing — and you have some conception of the self-regenerating capabilities of certain microorganisms… It is an achievement of transcending brilliance, which goes beyond the wildest dreams of mechanism.
Between Organism and Mechanism
The lesson that Denton drew from this sharp divergence between organism and mechanism is that the quest for full Artificial General Intelligence faces profound conceptual and practical challenges. The inherent capacity of living things to replicate, transform, self-organize, and regenerate in ways that transcend purely mechanical processes underscores a fundamental divide between the organic and the artificial.
Organisms demonstrate a level of complexity and adaptability that no machine or artificial system shows any signs of emulating. The extraordinary characteristics of life recounted by Denton suggest that full AGI, capable of the holistic and versatile intelligence seen in living organisms, will remain an elusive goal, if not a practical impossibility. We therefore have no compelling reason to think that the pinnacle of intelligence is poised to shift from the organismal to the artificial, especially given the fantastic capabilities that organisms are known to exhibit and that machines show no signs of ever exhibiting.
At the top of the list of such fantastic capabilities is human consciousness. If AGI is truly going to match and ultimately exceed humans in every respect (if we really are just computational devices, or computers made of meat), then AGI will need to exhibit consciousness. Yet how can consciousness reside in a computational device, which consists of finitely many states, each state being binary, assuming a value of 0 or 1? Consciousness is a reflective awareness of one’s identity, existence, sensations, perceptions, emotions, ethics, valuations, thoughts, and circumstances (Sitz im Leben). But how can the shuffling of zeros and ones produce such a full inner life of self-awareness, subjective experience, and emotional complexity?
This Is Not a New Question
In pre-computer days, it was posed as whether and how a mechanical device composed of material parts could think. The philosopher Gottfried Leibniz raised doubts that such mechanical devices could think at all with his thought experiment of a mill (in his 1714 Monadology). He imagined a giant mill and asked where exactly thought would reside in the workings of its gears and other moving parts. As he saw it, there would be an unbridgeable gap between the mill’s mechanical operation and its ability to think and produce consciousness. He saw this thought experiment as showing that matter could not be converted into mind.
More recently, philosopher John Searle’s “Chinese Room” thought experiment (in “Minds, Brains, and Programs,” 1980) highlighted the divide between mechanical processes and the subjective experience of consciousness. In Searle’s Chinese Room, a person who knows no Chinese produces appropriate Chinese replies by mechanically applying rules to Chinese symbols drawn from a large database. The person’s success at responding in Chinese follows simply from faithfully following the rules and thus requires no understanding of Chinese. This thought experiment illustrates that processing information does not equate to comprehending it.
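Searle’s point can be made vivid with a few lines of code. The following toy sketch (my own illustration, with a made-up rulebook, not anything from Searle) answers Chinese questions by pure pattern matching; nothing in it understands a word of Chinese:

```python
# A toy "Chinese Room": input-output rules with no comprehension anywhere.
# The rulebook entries are hypothetical examples chosen for illustration.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我没有名字。",    # "What's your name?" -> "I have no name."
}

def chinese_room(message: str) -> str:
    # Faithfully apply the rules; no understanding is involved at any step.
    return RULEBOOK.get(message, "请再说一遍。")  # fallback: "Please repeat that."

print(chinese_room("你好吗？"))  # prints 我很好，谢谢。
```

From the outside, the room appears conversant; from the inside, there is only lookup and rule-following, which is precisely the gap Searle’s thought experiment exploits.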
For me personally, the most compelling thought experiment for discounting that computation is capable of consciousness is simply to consider a Turing machine. A Turing machine can represent any computation. It includes two things: (1) a tape consisting of squares filled with zeros and ones, or bits (for more than two possibilities in each square, put more than one bit per square, but keep the number of bits per square fixed); and (2) a read-write head that moves along the squares, altering or leaving unchanged the bits in each square. The head alternates among a fixed number of internal states according to transition rules that depend on its current state and the bit in the square it is reading; at each step it changes or leaves unchanged the present square and then moves one square left or right.
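To make this description concrete, here is a minimal Python sketch of such a machine (an illustrative toy of my own, with hypothetical names like `run_turing_machine`): the transition table maps the pair (current state, current bit) to a new bit, a move direction, and a next state, which is exactly the dependence just described:

```python
def run_turing_machine(tape, rules, state="start", halt="halt", max_steps=1000):
    """Run a Turing machine; rules: (state, bit) -> (new_bit, move, next_state)."""
    tape = dict(enumerate(tape))  # square index -> bit; unvisited squares read as 0
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        bit = tape.get(head, 0)                  # read the current square
        new_bit, move, state = rules[(state, bit)]
        tape[head] = new_bit                     # write (or leave unchanged)
        head += 1 if move == "R" else -1         # move one square right or left
    return [tape[i] for i in sorted(tape)]

# A two-rule machine: flip 1s to 0s until the first 0, flip it to 1, halt.
rules = {
    ("start", 1): (0, "R", "start"),
    ("start", 0): (1, "R", "halt"),
}
print(run_turing_machine([1, 1, 0, 1], rules))  # prints [0, 0, 1, 1]
```

As it happens, this tiny rule set implements binary increment on a little-endian tape (here, 11 becomes 12); the point, though, is that everything the machine ever does reduces to reading, writing, and moving over bits.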
So Here’s the Question
Where is consciousness in this reading and writing of bits? As a reductio ad absurdum, I imagine a world with an unlimited number of doors. Doors can be open or closed. An unlimited number of people live in houses with these doors. Let a closed door correspond to zero, an open door to one. As these doors open and close, they could be executing an algorithm. And if humans are computers, then such an algorithm could be us. And yet, to think that the joint opening and closing of doors, if only they were opened and closed in the right way, could achieve consciousness (say, the experience of sharing a glass of wine with your beloved on a veranda overlooking Venice) seems bonkers. Such thought experiments suggest a fundamental divide between the operations of a machine and the conscious understanding inherent in human intelligence.
One last thought in this vein: Neuroscientific research further complicates the picture. The brain is increasingly showing itself to be not just a complex information processor but an organ characterized by endogenous activity — spontaneous, internally driven behaviors independent of external stimuli. This perspective portrays the brain as an active seeker of information, an activity intrinsic to organic systems. Such spontaneous behavior, found across all of life, from cells to entire organisms, raises doubts about the capacity of machines to produce these intricate, self-directed processes.